GAO-18-407
Background

The goal of federal government industrial security is to ensure that contractors’ security programs detect, deter, and counter the threat posed by adversaries seeking classified information. The National Industrial Security Program was established by executive order in 1993 to replace industrial security programs operated separately by various federal agencies and to ensure that contractors, among others, were adequately protecting classified information. For the purposes of this report, we use “contractor” to refer to any party that the program applies to, including contractors, grantees, licensees, certificate holders, and their respective employees.

DSS Responsibilities

DSS is responsible for administering the National Industrial Security Program on behalf of the Department of Defense and, by mutual agreement, 32 other federal departments and agencies. Headquartered in Quantico, Virginia, and with staff in 26 field offices across four regions, DSS provides oversight, advice, and assistance to more than 12,000 U.S. facilities that are cleared for access to classified information under the program. Facilities range in size, can be located anywhere in the United States, and include manufacturing plants, laboratories, and universities. Cleared facilities also include those whose contractor personnel travel to U.S. government sites to access classified information but do not store any classified information on site. There are multiple reasons why a contractor may need access to classified government information. For example, a factory may produce parts for a major weapons system using a production process that is classified, or a contractor may have employees who deliver technical expertise in a classified environment at a military installation.

National Industrial Security Program Operating Manual

As part of the facility clearance process, DSS is responsible for ensuring that cleared contractors safeguard classified information under the program by meeting requirements outlined in the National Industrial Security Program Operating Manual. The Secretary of Defense, in consultation with all affected agencies and with the concurrence of the Secretary of Energy, the Nuclear Regulatory Commission, the Director of National Intelligence, and the Secretary of Homeland Security, issues and maintains the operating manual. The operating manual addresses contractors’ key responsibilities, such as reporting incidents of suspected loss of classified information. The Information Security Oversight Office of the National Archives and Records Administration, an agency separate from the Department of Defense, monitors the National Industrial Security Program and issues implementing directives for agencies. The Information Security Oversight Office also chairs the program’s policy advisory council, which is composed of government and industry representatives who recommend changes to industrial security policy. The Department of Defense, including DSS, has periodically issued information for contractors in the program, such as industrial security letters, to clarify the operating manual. The operating manual states that a contractor or prospective contractor is eligible for a facility clearance if it has a need for access to classified information in connection with a legitimate U.S. government contracting requirement.
A facility clearance is an administrative determination that, from a national security standpoint, a contractor or prospective contractor is eligible to access classified information at a specified level. A contractor’s employees cannot begin accessing classified information until the facility clearance has been granted, even if that results in delayed performance of a contract.

Facility Clearance Process

According to the operating manual, a contractor or prospective contractor enters the program by being sponsored by either an already cleared contractor or the government contracting activity. DSS requires information about the contract, subcontract, or solicitation that necessitates a clearance, such as the level of safeguarding required and a brief description of the procurement. Within the government contracting activity, the information may be provided by the contracting office, program office, or security office. DSS begins its facility clearance process once it receives the information and assigns the case to an industrial security representative at a local DSS field office. The industrial security representative serves as the primary point of contact for the sponsored facility during the clearance process and once the contractor is eligible to access classified information. Across DSS field offices and headquarters, multiple people are involved in the facility clearance process, including those who specialize in information systems and others who have experience analyzing contractors for indicators of foreign influence. See figure 1 for more details about how DSS processes a facility clearance. As shown in the figure above, DSS also reviews the contractor’s ownership and business structure to assess whether foreign interests indicate a contractor is under foreign influence, which could lead to disclosure of classified information to foreign nationals. Contractors are required to answer questions about whether there is foreign involvement in their ownership, board composition, debt, source of revenues, and any other situations where foreign nationals might be in a position to influence their operations. If DSS determines that there is a risk of foreign influence, the contractor is ineligible for a facility clearance unless, and until, security measures are put in place, such as negotiating a mitigation agreement with DSS. As of June 2017, approximately 630 of the over 12,000 cleared facilities in the program had mitigation agreements in place to address foreign influence. As part of the facility clearance process, certain personnel, such as the facility security officer, must receive personnel clearances at the level of the facility clearance. In the personnel clearance process, specialists at DSS headquarters grant interim clearances to U.S. citizens based on national security standards and information from background investigations conducted by the Office of Personnel Management, provided there is no adverse information of material significance. Before the facility clearance can be granted, a DSS industrial security representative verifies that the key management personnel have received their permanent clearances.

Contractor Responsibilities for Cleared Facilities

After DSS completes the facility clearance process, determines that a contractor is eligible to access classified information, and grants the facility security clearance, the cleared contractor officially enters the National Industrial Security Program.
Once in the program, contractors establish a security program at cleared facilities or implement security measures required by the Department of Defense security agreement, as well as any elements required by DSS. Depending on the facility, security measures may address a variety of industrial security issues. For example, a contractor may be required to start using visitor logs or badges to track every person with physical access to a facility, or to establish separate computer systems for the sole purpose of storing classified information. In addition, contractors are required to implement insider threat programs, which are meant to prevent persons with approved access to classified information, such as contractor employees, from causing harm to national security through unauthorized disclosures. The insider threat programs may include activities such as training programs about reporting requirements or monitoring of classified information systems.

DSS conducts periodic security reviews to monitor cleared contractor facilities’ compliance with the program’s requirements for protecting classified information. DSS determines the frequency of these reviews, although they generally cannot take place more than once in a 12-month period, according to the operating manual. The duration of security reviews and the size of the team conducting them vary by facility. For example, a single industrial security representative can perform a review of a small facility with no classified information stored on site in one day. By comparison, a large facility may require a lengthier review that involves additional DSS officials, such as information system security professionals who review a facility’s information systems if they are needed to store or process classified information. Moreover, counterintelligence officials may also participate and provide threat information about the facility. Security reviews are generally led by staff located in DSS’s 26 field offices across the country. A contractor’s facility clearance may be subject to invalidation or revocation if DSS identifies certain vulnerabilities, among other things. See figure 2 for more information about DSS’s process for monitoring contractor facilities in the program.

In addition to administering the facility clearance process and conducting security reviews at cleared facilities, DSS also collects information from cleared contractors about suspicious contacts, which may involve efforts by an individual to obtain illegal or unauthorized access to classified information, among other things. DSS aggregates this information to identify counterintelligence trends among cleared contractors and refers cases to the relevant agency for further investigation or action.

We last issued reports about the National Industrial Security Program in 2004 and 2005. In 2004, we made eight recommendations for DSS to improve its processes for conducting security reviews, such as taking steps to quickly notify government contracting activities when classified information has been lost or compromised. The Department of Defense agreed with our recommendations. In 2005, we made eight recommendations about DSS’s oversight of contractors under foreign influence. For example, we recommended that DSS collect and analyze data about foreign business transactions in order to improve its oversight of contractors under foreign influence. The Department of Defense partially agreed with our recommendations and subsequently took action to address them.
As of April 2018, 13 of the 16 recommendations had been implemented. For more detail on our prior recommendations, see appendix II.

DSS Upgraded Capabilities for the National Industrial Security Program but Faces Challenges Monitoring Contractors

Streamlined Clearance Processes

Since 2005, when we last reviewed how DSS administered the National Industrial Security Program, DSS has streamlined its facility clearance process to make it more efficient. DSS has also strengthened the process to analyze contractors for foreign influence, and the Department of Defense issued a rule to clarify policies and procedures for mitigating foreign influence concerns. Despite upgrading its capabilities, DSS continues to face challenges in monitoring cleared contractors with access to classified information.

In 2004 and 2005, we reported that DSS did not collect and analyze data on contractors operating in the National Industrial Security Program. For example, DSS was not able to analyze data to make informed resource decisions or track key changes that affect contractors operating under foreign influence. In our 2005 report, we recommended that DSS collect and analyze data about foreign business transactions, among other things. In response, DSS streamlined its facility clearance process by developing two electronic systems for tracking facility clearance requests and maintaining information on cleared facilities.

1. The Electronic Facility Clearance System is a web-based system that contractors or prospective contractors use to submit their required information, such as key management personnel and other staff who need to be cleared for access, as well as business-related items like articles of incorporation, bylaws, and other supporting documentation.

2. The Industrial Security Facilities Database is another web-based system that serves as a repository for information about cleared facilities.

DSS field office and contractor officials we spoke with noted that the web-based systems help them do their jobs more efficiently. For example, DSS’s industrial security representatives stated that these systems make the facility clearance and monitoring process more efficient because it is easier to track the status of documentation received. Industrial security representatives also use the database to track conditions that may require changes to their monitoring process, such as a change in ownership or key management personnel. Industrial security representatives noted that being able to track this information electronically is helpful because the facility clearance and monitoring processes involve numerous officials within DSS, as well as other parties, such as the government contracting activity and the contractor. For example, a government contracting activity can use the database to check whether a facility has been cleared to store classified information on site before sending materials to it.

In 2017, DSS started the process of modernizing these systems by developing two new systems. DSS officials stated that these two new systems will provide additional automation that can be used in the facility clearance and monitoring processes. The new systems are:

National Industrial Security Program Contracts Classification System. This system collects detailed information about the classified contract(s) a facility will support, during the initial clearance process as well as throughout the duration of the facility’s clearance, including the facility’s assets (e.g.,
technology produced or expertise provided), and enables the government contracting activity to gain visibility into the subcontractors performing work for each classified contract.

National Industrial Security System. This system will be the official repository for data on cleared facilities. DSS officials noted that the system will help identify foreign influence concerns, such as changes in a contractor’s ownership, because they will be tracked more centrally.

Further, in 2017, DSS also issued a manual to reflect an updated process for assessing and authorizing cleared contractors’ information systems that process classified information. DSS changed its process to align with the standards of the intelligence community, the Department of Defense, and other federal government agencies. DSS previously reviewed systems on regular cycles and is shifting to practices, modeled on the intelligence community’s, that are based on assessed threats and that target the information systems posing the most significant risk of losing information. DSS information system security professionals told us that this new authorization process is helping them clarify and communicate the nature of security risks to the contractor. The updated process is intended to identify cybersecurity concerns earlier than the prior approach did and enables DSS’s information system security professionals to adjust their monitoring to meet emerging cyber threats.

Centralized Support and Strengthened Its Process to Identify Foreign Influence

In response to recommendations we made in 2005, DSS has centralized its support related to identifying and mitigating foreign influence and strengthened its process, including issuing a rule to make the process of mitigating foreign influence clearer to contractors. Since our last review of the program in 2005, DSS has centralized staff expertise in headquarters to improve the identification and mitigation of foreign influence concerns. Whereas DSS used to rely primarily on field staff to negotiate and oversee individual facilities in their respective regions, it now has staff in headquarters, including specialists in law and other areas, who have an agency-wide view of threats and who understand the portfolio of contractors that may be at risk of foreign influence. DSS officials said that this is important because a contractor may have multiple cleared facilities across several regions. They noted that an agency-wide view helps DSS identify trends across facilities that may be tied to a single contractor. The headquarters staff:

negotiate and put in place mitigation agreements that require contractors under foreign influence to acknowledge and mitigate foreign influence risks, including the development of protective measures to reduce the risk of foreign interests gaining access to classified information;

identify foreign influence within cleared contractors and provide written analysis to DSS field offices when foreign influence concerns are identified, such as when a foreign contractor acquires a majority or substantial minority position in a U.S. contractor with a cleared facility; and

provide subject matter expertise in the areas of business, acquisition, intelligence, and international law to develop a comprehensive understanding of companies, their industries and technologies, as well as the regulatory environments in foreign countries.
DSS officials acknowledged that the establishment of a headquarters division in 2008 focused on analyzing foreign influence and issuing related publications was in response to recommendations we made in 2005. DSS’s field office industrial security representatives said that the written products and specialized foreign influence analysis prepared and disseminated by DSS headquarters have resulted in more timely identification and mitigation of these issues. Examples include:

NISP in the News, an internal weekly publication that provides a summary of business transactions that may result in the need for a mitigation agreement to address foreign influence. Industrial security representatives we spoke with said this publication helps them identify and proactively address issues with their contractors. DSS officials told us the publication is helpful because it can result in more timely identification and initiate the process for negotiating a mitigation agreement, particularly in cases where a foreign company acquires a facility previously owned by a U.S. contractor. Copies of NISP in the News that we reviewed also included information that may affect contractors that are not under a mitigation agreement for foreign influence, such as changes in key management personnel. We previously reported that DSS had challenges identifying these transactions or that facility security officers would neglect to report them, which led to delays in putting protective measures in place to prevent unauthorized access to classified information.

Assessments of new contractors that have been sponsored for clearances, which are used to identify and mitigate foreign influence. DSS industrial security representatives stated that this analysis used to be performed in the field; now they can use the time previously spent preparing analysis of foreign influence to work with contractors to implement security measures. Further, the assessments help them work more effectively with contractors because they draw upon expertise across different disciplines. For example, 7 of the 13 facility case files we reviewed contained a summary of analysis conducted by specialists in DSS headquarters. The summaries also noted that the specialists reviewed classified and unclassified information on the contractor, including counterintelligence information and other U.S. government information, as applicable.

In April 2014, the Department of Defense issued a rule about policies and procedures for mitigating foreign ownership, control, or influence. This rule was issued to ensure maximum uniformity and effectiveness in the Department of Defense’s implementation of the National Industrial Security Program. The rule detailed specific mitigation approaches for addressing concerns about foreign ownership, control, or influence, which we cover in detail in appendix I. The rule clarified the roles of DSS, the government contracting activity, and the contractor during the process when DSS determines that the contractor needs to mitigate potential foreign ownership, control, or influence. The rule also documented policies and procedures regarding how decisions will be made on the appropriate method to mitigate foreign ownership, control, or influence. These include the timing of agency and contractor actions involved in mitigation of foreign ownership, control, or influence and how to proceed in cases where the contractor had not worked out a mitigation agreement with DSS before changed conditions (e.g.,
indebtedness, ownership, or foreign intelligence threat) occurred, among other things. The rule further stated that DSS, in consultation with the government contracting activity, has discretion to modify or reject the contractor’s outlined action plan to mitigate foreign ownership, control, or influence.

Challenges Remain

Despite upgrading its capabilities, DSS officials indicated that they face resource constraints, such as an inability to manage workloads and to complete the training necessary to stay informed on current threats and technologies. DSS’s current resource challenges include:

Managing staff workloads. DSS field officials acknowledged that they have historically faced workload challenges. DSS officials said that their limited staff carry heavy workloads and, according to DSS’s most recent biennial report to Congress, were unable to conduct security reviews at about 60 percent of cleared facilities in fiscal year 2016. In addition to their official security reviews, industrial security representatives also conduct informal “advise and assist” efforts when facility officials inquire about a range of security issues, from preparing employees for overseas travel to providing training on reporting suspicious contacts. In fiscal year 2016, industrial security representatives conducted about 22,000 “advise and assist” efforts. DSS officials attribute the heavy workload to the current staffing levels of their field offices and frequent turnover among the industrial security representatives. DSS officials noted that both hiring and retention are difficult and that these challenges are exacerbated by the fact that DSS is a relatively small agency whose field offices have limited staff. For example, an average field office oversees about 470 facilities and has about 8 industrial security representatives on staff. As a result, if a person leaves, it adds strain to the remaining staff. Most of the contractors’ facility security officers we spoke with noted that DSS field officials have heavy workloads that could affect their ability to respond to threats at cleared facilities. Further, DSS indicated that it has limited resources to analyze, process, and distribute counterintelligence to the cleared facilities. For example, DSS received more than 46,000 reports from cleared contractors about suspicious contacts in fiscal year 2016, an almost 18 percent increase over the prior year. In comparison, during the same time period, DSS’s counterintelligence directorate, which analyzes suspicious contact reports, grew by 7 percent. In addition, DSS’s ability to distribute counterintelligence is limited by the geographic distribution of over 12,000 cleared facilities and by each facility’s capability to receive or store classified communication.

Developing foreign influence mitigation agreements. Multiple DSS industrial security representatives and contractors’ facility security officers stated that mitigation agreements to address the risk of foreign influence, including supplemental plans, have become more detailed, and the process to develop and implement them has required additional time and resources. For example, DSS may require a contractor to develop an electronic communications plan, which must include details about which networks will be protected from access by a foreign parent contractor, including establishing, maintaining, and monitoring separate email servers, as appropriate.
DSS reported in its 2015 biennial report to Congress that the average amount of time to approve and implement a foreign influence mitigation plan was 93 days. According to the 2017 biennial report, that average more than doubled, to 204 days. DSS officials stated that this increase is due, in part, to the increased complexity of the agreements and the amount of coordination required between the government contracting activity, DSS, and the contractor. A DSS official also noted that over time, the agency has incorporated more information in its analysis and sometimes needs more time to review all the information that may be relevant.

Attending relevant training. DSS officials in three of four regions noted that staffing challenges affect their ability to take training, even though industrial security matters continue to become more sophisticated. Information system security professionals said they face challenges in learning technology that continues to evolve. For example, they cited the multiple software products, such as operating systems and configurations of information networks, that are used in a facility’s daily operations. In addition, they need to understand other technologies that can pose risks to industrial security, such as devices that are capable of transmitting data, like cellular phones, and therefore might need to be prohibited from areas where classified information is discussed. As a result, the lack of expertise in multiple technologies hampers their ability to identify vulnerabilities that might leave a facility at risk for loss of classified information. We have previously reported that training staff in new skills, such as cybersecurity, remains an ongoing challenge for the federal government. For example, in 2016, we found that chief information officers throughout the government identified difficulties related to recruiting, hiring, and retaining qualified personnel, as well as ensuring they have the appropriate skills and expertise.

DSS Has Not Determined How It Will Collaborate with Stakeholders As It Pilots a New Approach

In 2017, DSS announced its plans to transition to a new approach to monitoring cleared facilities in order to address emerging threats to classified information. DSS faces challenges as it pilots its new approach, DSS in Transition. DSS has taken steps to begin addressing challenges, including scheduling training for its staff, but has not documented how it will collaborate with its stakeholders or identified the resources needed to monitor cleared facilities.

New Approach to Monitoring Cleared Facilities

In 2017, DSS announced that it would begin transitioning to an asset- and threat-based monitoring approach. DSS has reported that the United States is facing the most significant foreign intelligence threat it has ever encountered and that adversaries are attacking cleared facilities at unprecedented rates. Adversaries are also varying their methods and adjusting their priorities based on the information they are targeting. The new approach is expected to involve DSS working collaboratively with contractors and government contracting activities to design a customized security plan for each facility based on threats specific to its assets, rather than using a standardized worksheet to perform security reviews. DSS officials said that customized security plans will be developed based on the assets at the specific facilities. For example, a contractor providing information technology services may need the latest software to thwart cyberattacks, while a contractor that engineers weapons systems may need additional secure storage facilities and work areas to ensure an adversary cannot physically extract classified information or technology. As a result, according to agency officials, these customized security plans represent a departure from a “one size fits all” or schedule-driven approach to overseeing contractors’ protection of classified information. According to DSS officials, this new approach, DSS in Transition, will use the Department of Defense’s list of critical technologies and programs, along with counterintelligence, to prioritize facilities for security reviews based on their assets and the severity of the threats to them. See table 1 for more information about the monitoring approaches.
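To illustrate in principle how an asset- and threat-based approach differs from a schedule-driven one, the minimal Python sketch below ranks facilities for review by combining an asset-criticality score with a threat-severity score. The field names, rating scales, and multiplicative scoring rule are hypothetical assumptions for illustration only; the report does not describe DSS’s actual scoring methodology.

```python
# Illustrative sketch only: the scales and scoring rule are hypothetical
# assumptions, not DSS's actual prioritization method.
from dataclasses import dataclass

@dataclass
class Facility:
    name: str
    asset_criticality: int  # 1 (commodity work) to 5 (critical technology or program)
    threat_severity: int    # 1 (low) to 5 (severe), e.g., from counterintelligence reporting

def review_priority(facility: Facility) -> int:
    """Higher scores indicate facilities that would be reviewed first."""
    return facility.asset_criticality * facility.threat_severity

facilities = [
    Facility("machine shop (no classified storage)", 2, 1),
    Facility("weapons-systems engineering lab", 5, 4),
    Facility("IT services provider", 3, 3),
]

# Under a schedule-driven approach, all three facilities would be reviewed on
# the same fixed cycle; an asset- and threat-based approach reviews the
# highest-risk facilities first.
for f in sorted(facilities, key=review_priority, reverse=True):
    print(f"{f.name}: priority {review_priority(f)}")
```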
After announcing DSS in Transition in 2017, DSS began taking steps to develop its methodology for the new approach, including prioritizing facilities and developing procedures for executing customized security plans. In a January 2018 letter to industry, DSS stated that it plans to pilot the new approach by working with one facility in each of its four regions to develop a customized security plan and then use the lessons learned to refine the process. While it is piloting the approach at four facilities, DSS notified contractors not participating in the pilot that it would partner with selected facilities to identify and document their critical assets. Industry, including contractors and prospective contractors interested in future U.S. government contract awards, is awaiting more details on how DSS plans to implement DSS in Transition, including who would be responsible for the costs of additional security requirements, according to a March 2018 statement from the industry spokesperson of the National Industrial Security Program Policy Advisory Committee.

Collaboration Needed with Stakeholders As It Pilots the New Approach

Although DSS began piloting DSS in Transition in January 2018, it has not determined how it will collaborate with government contracting activities, the intelligence community, other federal agencies, and contractors. In particular, DSS has not identified its stakeholders’ roles and responsibilities in terms of who needs to communicate and coordinate with whom and when, which is necessary to successfully implement the new approach. For example, DSS needs to establish agreed-upon criteria for what information a government contracting activity would need to provide to DSS in order to develop a customized security plan for a facility. GAO’s Federal Internal Control Standards establish the need to coordinate with stakeholders and clearly define roles and responsibilities, among other things. In addition, our leading practices for interagency collaboration state that successful collaborative working relationships require organizations to agree on roles and responsibilities and identify the resources necessary to accomplish objectives. For example, it is unclear how DSS will determine what resources it needs because it has not identified the necessary roles and responsibilities. DSS has taken steps to begin addressing these challenges by establishing an office dedicated to documenting processes and procedures for how DSS in Transition will be implemented, providing a concept of operations, and scheduling training for its staff.
However, to monitor cleared facilities, DSS needs information from the various National Industrial Security Program stakeholders, including:

Government contracting activity. DSS officials stated that, under the new approach to monitoring cleared facilities, they will need to better communicate and coordinate with the government contracting activity. For example, in some circumstances, DSS officials will have to collaborate with government contracting activities to determine when a security plan is no longer sufficient as threats and mitigation methods evolve. DSS officials stated that communication and coordination with government contracting activities have been a challenge because industrial security is often considered an added duty on top of their contract management responsibilities. Further, DSS officials indicated that staff turnover at government contracting activities and the lack of clear roles and responsibilities have led to delays in resolving a facility’s vulnerabilities. According to DSS officials, it is difficult to determine whether they have the correct point of contact at the government contracting activity to discuss vulnerabilities at a facility, which, if left unaddressed, can leave classified information at risk of loss. There are no formal agreements about how a case should be elevated and resolved if DSS identifies vulnerabilities and is unable to elicit a response from the government contracting activity about further action, according to DSS officials. In addition, DSS officials stated that they will need to work with the government contracting activity to assess the risk of security vulnerabilities that involve subcontractors working on contracts containing classified information. In the past, a government contracting activity might not know the identities of subcontractors if they were sponsored by a cleared contractor. Given adversaries’ ability to vary their methods to target the information they need, the government needs to know who, regardless of subcontracting tier, is accessing classified information.

Government intelligence community. We found DSS has not established how it will collaborate with the intelligence community, including formalizing roles and responsibilities for its new approach. A DSS counterintelligence official told us that DSS currently relies on a combination of its own counterintelligence staff and informal coordination with other agencies, such as the Federal Bureau of Investigation. Another DSS official stated that they have worked with the Federal Bureau of Investigation to deliver counterintelligence when a facility does not have the capacity to receive classified information electronically. According to DSS field officials, the current process is handled on a case-by-case basis, depending on the availability of resources. Although DSS’s Counterintelligence Directorate recently became part of the intelligence community and will potentially have greater access to counterintelligence data, it will need to determine how to regularly communicate with the intelligence community to fully understand its products and to share current threats and vulnerabilities with certain contractors under DSS in Transition.

Other government agencies. DSS relies on collaborating with other cognizant security agencies to develop a complete picture of the threats to contractors. In addition to the Department of Defense, four other federal agencies have the authority to inspect and monitor facilities to ensure the protection of classified information.
DSS may only conduct security reviews for facilities performing contracts awarded by these agencies if the Department of Defense is the cognizant security agency for that facility. Contracts where another agency is the cognizant security agency may involve information coveted by adversaries, but DSS industrial security representatives have acknowledged that they may not know why the information is coveted, or even that it exists. Given the new approach’s aim of developing a complete picture of threats to a facility, DSS will need additional information from other cognizant security agencies that it may not have sought in the past. As a result, DSS needs to establish how best to collaborate with other agencies, such as identifying appropriate points of contact and specific time frames to conduct outreach, to effectively implement its new approach to monitoring contractors.

Cleared contractors. DSS officials said that DSS in Transition will require contractors to identify assets in a greater level of detail than was previously expected of them. In order to develop and implement specific security measures for a security plan unique to the facility, the contractor’s facility security officers will need to understand these assets and why adversaries would want to target them. Since DSS officials cannot be on site every day, they have to rely on the contractor’s facility security officer or other key management personnel at cleared facilities to identify and report potential problems. However, DSS officials noted that convincing facility staff to spend more time on security-related matters may be difficult at facilities where one employee may serve as the contractor’s facility security officer in addition to having other responsibilities. In addition, DSS officials stated that contractors’ security costs are typically not profit generators and recognized that DSS in Transition may require contractors to expend more time, money, and energy to address vulnerabilities or enact policies to safeguard against adversaries. DSS recognizes the need to keep industry informed, and its implementation plans for DSS in Transition need to address what level of communication and coordination is required. For example, DSS currently uses a rating system to indicate how well a contractor is meeting the requirements of the operating manual as a metric for how it is protecting classified information. As DSS moves toward developing customized security plans that are unique to each facility’s threats and assets, it needs to formalize new approaches to communicate to contractors how well they are protecting classified information.

In addition to piloting DSS in Transition, DSS is reassuming responsibility for conducting background and security investigations for the Department of Defense, which could potentially magnify its workload challenges. DSS previously held these responsibilities, but they were transitioned to the Office of Personnel Management in 2005. The National Defense Authorization Act for Fiscal Year 2018 required DSS to reassume this background and security investigations mission through a phased transition by October 1, 2020. This phased transition will overlap with DSS’s piloting of DSS in Transition and may create disruptions as an agency of over 700 employees assumes responsibility for a background and security investigations mission that currently has more than 7,000 employees and contractors.
In January 2018, we added personnel security clearances to our high-risk list, a list of federal areas in need of either broad-based transformation or specific reforms to prevent waste, fraud, and abuse. This issue is on the list because we identified: (1) a significant backlog of background investigations; (2) a lack of long-term goals for increasing federal and contractor-provided investigator capacity to address the backlog; and (3) delays in the timely processing of security clearances, among other factors. DSS officials have identified potential benefits and challenges of reassuming the background investigations mission, and, in August 2017, the Department of Defense submitted a plan for a 3-year phased transition to assume the background and security investigations mission. In March 2018, we reported that this transition could potentially affect the timely processing of personnel security clearances, the backlog, and other reform initiatives, but the effect is unknown at this time.

Conclusions

Given the changing nature of threats to classified information, DSS needs to ensure that classified information is protected from unauthorized access. While DSS has upgraded its capabilities for identifying foreign influence, DSS officials acknowledged that adversaries continue to evolve, and classified information and technologies remain vulnerable to exploitation. In response, in 2017, DSS launched a new approach (DSS in Transition) to change how it oversees contractors with access to classified information. As DSS pilots the new approach, it will need to work with government contracting activities, the intelligence community, other agencies, and cleared contractors to determine their roles and responsibilities in protecting classified information at every facility in the program. Without the necessary information, gained through communicating and coordinating with stakeholders, to assess the threats to the nation’s most critical technologies and programs, DSS will be unable to provide appropriate oversight that addresses the most significant threats to industrial security. Also, DSS has not identified the resources necessary, including the number of personnel needed, to implement its new approach, which will add pressure to an agency accepting a background and security investigations mission that has significant backlog and timeliness challenges. Until DSS identifies roles and responsibilities and determines how it will collaborate with stakeholders for the pilot, it will be difficult to assess whether the new approach is effective in protecting classified information.

Recommendation for Executive Action

We are making one recommendation to the Director of the Defense Security Service: Determine how it will collaborate with stakeholders as it pilots a new approach to overseeing contractors with cleared facilities (DSS in Transition), including identifying roles and responsibilities and the related resources needed. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report for review and comment to DSS. DSS provided written comments, which are reproduced in appendix III. In its comments, DSS concurred with the recommendation and summarized actions it is taking to pilot its new approach (DSS in Transition). DSS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Director of DSS, and other interested parties.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Methods for Mitigating Foreign Influence

Since our July 2005 report, the Defense Security Service (DSS) has taken additional steps to address oversight of contractors with foreign influence. In April 2014, the Department of Defense issued a rule that clarified policies for oversight of contractors under foreign ownership, control, or influence. The rule detailed specific mitigation approaches for addressing foreign ownership, control, or influence concerns, and it provided detail regarding the terms of each type of mitigation agreement and the circumstances under which each may be appropriate. The types of mitigation specified in the rule are:

Board Resolution. The board resolution may be used when a foreign entity does not own voting interests sufficient to elect a representative to the company’s governing board.

Security Control Agreement. The security control agreement is a tailored foreign ownership, control, or influence mitigation agreement, often used when a foreign interest does not effectively own or control a company or corporate family but the foreign interest is entitled to representation on the company’s board.

Special Security Agreement. The special security agreement may be used when a company is effectively owned or controlled by a foreign interest. Access to certain proscribed classified information by a company cleared under this agreement may require that the government contracting activity complete a National Interest Determination to determine that the release of proscribed information to the company is consistent with the national security interests of the United States.

Voting Trust Agreement and Proxy Agreement. These foreign ownership, control, or influence mitigation agreements may be used when a foreign interest effectively owns or controls a company or corporate family. Under these agreements, the foreign owner relinquishes most rights associated with ownership of the company to cleared United States citizens approved by the U.S. government.

DSS has also clarified the types of supplemental plans that companies must submit to document the specific steps they will take to mitigate foreign influence. Table 2 describes the types of plans and provides examples of how they mitigate foreign influence.

Appendix II: Status of Prior GAO Recommendations Related to the National Industrial Security Program

In 2004 and 2005, GAO issued reports about the National Industrial Security Program and made 16 recommendations. Prior to the start of our review, the Department of Defense, through the Defense Security Service (DSS), implemented two of the recommendations. Below is our assessment of whether DSS addressed the remaining 14 recommendations that had been previously recorded as “closed – not implemented”. Table 3 provides a summary of those recommendations and the actions that DSS has taken in response. A number of the recommendations we made were aimed at clarifying policies related to contractors under foreign influence that were part of the National Industrial Security Program.
The primary evidence to support our conclusions is cited in the last column. In GAO-04-332, we made eight recommendations that were recorded as “closed – not implemented” prior to the start of this review. Based on information obtained during this review, seven of the recommendations will be closed as implemented. The recommendation that remains closed as not implemented was outside the scope of the current review. In GAO-05-681, we made eight recommendations, and two of them were closed as implemented prior to this review. Based on information obtained during this review, four of the remaining six recommendations will be closed as implemented. We were unable to close the other two of the six recommendations as implemented based on the information provided during this review.

Appendix III: Comments from the Department of Defense

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Marie A. Mak, (202) 512-4841 or makm@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Penny Berrier (Assistant Director), Lorraine Ettaro, Gina Flacco, Stephanie Gustafson, John Rastler, Sylvia Schatz, Roxanna Sun, Alyssa Weir, and Jocelyn Yin made key contributions to this report.
Why GAO Did This Study

Industrial security addresses the information systems, personnel, and physical security of facilities and their cleared employees who have access to or handle classified information. The National Industrial Security Program was established in 1993 to safeguard federal government classified information that may be or has been released to contractors, among others. GAO last reported on this program in 2005, and the Department of Defense has since implemented 13 of the 16 related recommendations. GAO was asked to examine how DSS administers the program. This report assesses to what extent DSS: (1) changed how it administers the program since GAO’s last report; and (2) addressed challenges as it pilots a new approach to monitoring contractors with access to classified information. GAO reviewed guidance and regulations issued since 2005, including the program’s operating manual. GAO analyzed data from DSS’s electronic databases and selected a non-generalizable sample of contractor facilities based on clearance level, geographic location, and type of agreement to address foreign influence. GAO also reviewed documents and interviewed relevant government and contractor officials.

What GAO Found

The Defense Security Service (DSS) has upgraded its capabilities but also faces challenges in administering the National Industrial Security Program, which applies to all executive branch departments and agencies and was established to safeguard federal government classified information that current or prospective contractors may access. Since GAO last reported on this program in 2005, DSS has streamlined facility clearance and monitoring processes and strengthened the process for identifying contractors with potential foreign influence. However, under its current approach, DSS officials indicated that they face resource constraints, such as an inability to manage workloads and complete the training necessary to stay informed on current threats and technologies. In its most recent report to Congress, DSS stated that it was unable to conduct security reviews at about 60 percent of cleared facilities in fiscal year 2016. Further, DSS recently declared that the United States is facing the most significant foreign intelligence threat it has ever encountered. As a result, in 2017, DSS announced plans to transition to a new monitoring approach to address emerging threats at facilities in the program. For a comparison of the current and new approaches, see below. DSS has not addressed immediate challenges that are critical to piloting this new approach. For example, GAO found it is unclear how DSS will determine what resources it needs because it has not identified roles and responsibilities. Moreover, DSS has not established how it will collaborate with stakeholders (government contracting activities, the government intelligence community, other government agencies, and contractors) under the new approach. Federal Internal Control Standards establish the importance of coordinating with stakeholders, including clearly defining roles and responsibilities. In addition, GAO’s leading practices for interagency collaboration state that it is important for organizations to identify the resources necessary to accomplish objectives. Until DSS identifies roles and responsibilities and determines how it will collaborate with stakeholders for the piloting effort, it will be difficult to assess whether the new approach is effective in protecting classified information.
What GAO Recommends

GAO recommends DSS determine how it will collaborate with stakeholders, including identifying roles and responsibilities and related resources, as it pilots a new approach. DSS concurred with the recommendation.
GAO-17-790T
Background

In 1990, GAO began a program to report on government operations that we identified as “high risk.” Since then, generally coinciding with the start of each new Congress, we have reported on the status of progress in addressing previously identified high-risk areas and have updated the High-Risk List to add new high-risk areas. Our most recent high-risk update, in February 2017, identified 34 high-risk areas. Overall, our high-risk program has served to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. Since the program began, the federal government has taken high-risk problems seriously and has made long-needed progress toward correcting them. In a number of cases, progress has been sufficient for us to remove the high-risk designation. To determine which federal government programs and functions should be designated high risk, we use our guidance document, Determining Performance and Accountability Challenges and High Risks. In making this determination, we consider whether the program or function is of national significance or is key to the performance and accountability of the federal government, among other things. Our experience has shown that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency leaders grounded in the five criteria for removal from the High-Risk List, as well as any needed congressional action. The five criteria for removal that we identified in November 2000 are listed in table 1 below. In each of our high-risk updates, we have assessed agencies’ progress to address the five criteria for removing a high-risk area from the list using the following definitions:

Met. Actions have been taken that meet the criterion. There are no significant actions that need to be taken to further address this criterion.

Partially Met. Some, but not all, actions necessary to meet the criterion have been taken.

Not Met. Few, if any, actions towards meeting the criterion have been taken.

Figure 1, which is based on a general example, shows a visual representation of varying degrees of progress in each of the five criteria for a high-risk area. We use this system to assess and track the progress of all agencies with areas on our High-Risk List. When we rate Interior’s and HHS’s progress on Improving Federal Management of Programs that Serve Tribes and Their Members for the first time in our 2019 High Risk report, we will provide similar information.

Status of GAO’s Recommendations on Indian Education

As we have previously reported, the Office of the Assistant Secretary-Indian Affairs (Indian Affairs), through BIE, is responsible for providing quality education opportunities to Indian students and oversees 185 elementary and secondary schools that serve approximately 41,000 students on or near Indian reservations in 23 states, often in rural areas and small towns. About two-thirds of BIE schools are operated by tribes, primarily through federal grants, and about one-third are operated directly by BIE. BIE’s Indian education programs originate from the federal government’s trust responsibility to Indian tribes. It is the policy of the United States to fulfill this trust responsibility for educating Indian children by working with tribes to ensure that education programs are of the highest quality and, in accordance with this policy, Interior is responsible for providing children a safe and healthy environment in which to learn.
All BIE schools, both tribally operated and BIE-operated, receive almost all of their operational funding from federal sources, namely Interior and the Department of Education (Education), totaling about $1.2 billion in 2016. Indian Affairs considers many BIE schools to be in poor condition. BIE is primarily responsible for its schools’ educational functions, while their administrative functions, such as safety, facilities, and property management, are divided mainly between two other Indian Affairs offices: BIA and the Office of the Deputy Assistant Secretary of Management. As discussed below, we have made 23 recommendations to Interior on Indian education, including recommendations cited in GAO’s 2017 High Risk report and in two reports issued in late May 2017. Interior generally agreed with our recommendations. However, none have been fully implemented.

Indian Affairs’ Management and Accountability for BIE Schools

In our 2017 High Risk report, we cited 3 recommendations, with which Interior agreed, from a 2013 report on management challenges facing Indian Affairs; these recommendations remain unimplemented as of late August 2017. These recommendations were based on our findings of Indian Affairs’ poor management and lack of accountability for BIE schools. In particular, we found that BIE did not have procedures in place specifying who should be involved in making key decisions, resulting in inaccurate guidance provided to some BIE schools about the appropriate academic assessments required by federal law. We also found that Indian Affairs had not developed a strategic plan with specific goals and measures for itself or BIE or conducted workforce analysis to ensure it has the right people in place with the right skills to effectively meet the needs of BIE schools. Further, we found that fragmented administrative services for BIE schools and a lack of clear roles for BIE and Indian Affairs’ Office of the Deputy Assistant Secretary for Management contributed to delays in BIE schools acquiring needed materials, such as textbooks. As a result, we recommended that Indian Affairs develop decision-making procedures and a strategic plan for BIE and revise its workforce plan, among other areas. Of the 3 unimplemented recommendations we made to Interior on Indian Affairs’ management and accountability for BIE schools, agency officials reported that they have taken several actions to address them, including drafting written procedures for BIE decision-making, starting to develop a strategic plan for BIE, and conducting workforce planning. Indian Affairs’ actions to implement our recommendations to develop decision-making procedures and a strategic plan for BIE had not been completed as of late August. Indian Affairs officials told us they believed they had fully implemented our recommendation on strategic workforce planning. However, in reviewing their supporting documentation, we determined that their actions did not address our recommendation to ensure that the staff who are responsible for providing administrative support to BIE schools have the requisite skills and knowledge and are placed in the appropriate offices. For a full description of the agency’s actions and our evaluation of these actions, see the recommendations in table 2 in appendix I.

Oversight of BIE School Spending

We made 4 recommendations in a 2014 report on BIE’s oversight of school spending, none of which have been implemented.
These recommendations were based on our findings of key weaknesses in Indian Affairs' oversight of BIE school spending. In particular, we found that BIE lacked sufficient staff with expertise to oversee school expenditures; staff told us they lacked the knowledge and skills to understand the audits they needed to review. We also found that some staff did not have access to some of these audits. In addition, we found that BIE lacked written procedures and a risk-based approach to overseeing school spending—both integral to federal internal control standards—which resulted in schools' misuse of federal funds. For example, external auditors identified $13.8 million in unallowable spending at 24 schools. Auditors also found that one school lost about $1.7 million in federal funds that were improperly transferred to offshore accounts. As a result, we recommended that Indian Affairs take several actions to address these oversight weaknesses, including developing written procedures and a risk-based approach to monitor school spending and a process to share relevant information, such as audit reports, with all BIE staff responsible for overseeing BIE school spending.

In response to the 4 unimplemented recommendations we made to Interior on the oversight of BIE school spending, agency officials reported taking several actions, including providing their auditors with needed access to schools' audit reports. Officials also said they would put in place written procedures and a risk-based approach to improve the financial monitoring of BIE schools. As of late August 2017, however, officials had not provided us with documentation of any steps they have taken to improve oversight of school spending. For a full description of the agency's actions and our evaluation of these actions, see table 2 in appendix I.

Safety and Health at Indian School Facilities

We made 4 recommendations in a 2016 report on the safety and health of BIE school facilities, none of which have been implemented. These recommendations were based on our findings that Indian Affairs was not annually inspecting all BIE schools, as required by its own policy. We also found that the agency did not have a plan to monitor safety inspections across its regions to ensure that inspection practices were consistent and supported the collection of complete and accurate inspection information. Further, we found that the agency had not taken steps to help BIE schools build their capacity to address identified safety deficiencies; some school officials we spoke to reported lacking staff with the knowledge and skills necessary to understand and address safety issues.

At one school we visited, we found seven boilers that had failed inspection because of multiple high-risk safety deficiencies, including elevated levels of carbon monoxide and a natural gas leak. Four of the boilers were located in a student dormitory, and three were located in classroom buildings. All but one of the boilers were about 50 years old. Although the poor condition of the boilers posed an imminent danger to the safety of students and staff, most of them were not repaired until about 8 months after failing their inspection, prolonging safety risks to students and staff. As a result of these findings, we recommended that Indian Affairs take several actions, including developing a plan to build BIE schools' capacity to address safety hazards identified by BIA inspectors.
In response to the 4 unimplemented recommendations we made to Interior on ensuring safety and health at BIE schools, Indian Affairs reported taking several actions, including completing safety inspections at all BIE schools in 2016. However, based on our review of the agency's actions, we determined that several steps remain for these recommendations to be fully implemented. For example, as of late August 2017, the agency had not provided us with documentation that it has developed a plan for monitoring safety inspections across its regions to ensure that inspection practices are consistent. Further, Indian Affairs did not provide documentation that it had taken any actions to develop a plan to build BIE schools' capacity to address safety and health problems identified with their facilities. For a full description of the agency's actions and our evaluation of these actions, see table 2 in appendix I.

We also made 6 recommendations in a May 2017 report on oversight and accountability for BIE school safety inspections, none of which have been implemented. These recommendations were based on our findings of key weaknesses in Indian Affairs' oversight of school safety inspections. In particular, we found that Interior and Indian Affairs had not taken actions to address identified weaknesses in BIA's safety program, despite internal evaluations since 2011 that consistently found it to be failing. For example, no Indian Affairs office routinely monitored the quality or timeliness of inspection reports, and BIA employees were not held accountable for late reports despite a new employee performance standard on timely report submission. We found that 28 of 50 inspection reports we reviewed were incomplete, inaccurate, or unclear, including reports in which inspectors did not include all school facilities or incorrectly gave schools a year to fix broken fire alarms instead of the required 24 hours. We concluded that unless steps are taken to address safety program weaknesses, the safety and health of BIE students and staff may be at risk. As a result, we recommended that Indian Affairs take steps to address weaknesses in BIA's safety program, including establishing processes to monitor the quality and timeliness of BIE school inspection reports.

In response to these 6 unimplemented recommendations to improve Interior's oversight of school safety inspections, Indian Affairs reported taking several actions. In particular, Indian Affairs reported that its safety office had established a procedure to monitor the timeliness of inspection report submissions to schools, and that BIA is developing a corrective action plan to address findings and recommendations from a 2016 Interior review of BIA's safety program. However, as of late August 2017, Indian Affairs had not provided us with any documentation of these two actions. For a full description of the agency's actions and our evaluation of these actions, see table 2 in appendix I.

Indian Affairs' Oversight of School Construction Projects

We made 6 recommendations in a May 2017 report on school construction projects, none of which have been implemented. These recommendations were based on our findings of key weaknesses in Indian Affairs' oversight of school construction projects. In particular, we found that Indian Affairs did not have a comprehensive capital asset plan to guide the allocation of funding for school construction projects.
We concluded that until Indian Affairs develops such a plan, it risks using federal funds inefficiently and failing to prioritize funds for schools with the most pressing needs. Additionally, we found that Indian Affairs has not consistently used accountability measures or conducted sufficient oversight to ensure that BIE school construction projects are completed on time, are within budget, and meet schools' needs. For instance, Indian Affairs has not always used accountability measures, such as warranties, to have builders replace defective parts or repair poor workmanship, and project managers do not always understand how to use accountability measures because Indian Affairs has not provided them with guidance. We concluded that until Indian Affairs develops and implements guidance to ensure accountability throughout the school construction process and improves its oversight of construction projects, it will have little assurance that projects are completed satisfactorily and meet the needs of students and staff. As a result, we recommended that Indian Affairs take several actions, including developing a comprehensive capital asset plan and guidance on the effective use of accountability measures for managing BIE school construction projects.

In response to these 6 unimplemented recommendations to improve Interior's oversight of BIE school construction projects, Indian Affairs reported taking several actions. For example, Indian Affairs reported that, to support the effective use of accountability measures, it established new oversight mechanisms, hired staff with expertise in construction contracting, and administered training for contracting staff. As of late August 2017, however, Indian Affairs had not provided us any documentation of these steps, so we cannot verify that the actions were responsive to our recommendations. Further, Indian Affairs did not report taking any actions to develop guidance on the effective use of accountability measures, as our recommendation specifies. Indian Affairs also reported that it is in the process of establishing a new work group to focus on asset management and will continue working to develop a capital asset management plan. Finally, the agency reported it was planning several other actions to address our recommendations. For a full description of the agency's actions and our evaluation of these actions, see table 2 in appendix I.

Status of GAO's Recommendations on Indian Energy

As we have previously reported, some tribes and their members hold abundant energy resources and have decided to develop these resources to meet the needs of their communities, in part because energy development provides opportunities to improve poor living conditions, decrease high levels of poverty, and fund public services for tribal members. While tribes and their members determine how to use their energy resources, if the resources are held in trust or restricted status, BIA—through its 12 regional offices, 85 agency offices, and other supporting offices—generally must review and approve leases, permits, and other documents required for the development of these resources. In the past 2 years, we have reported that BIA has mismanaged Indian energy resources held in trust, thereby limiting opportunities for tribes and their members to use those resources to create economic benefits and improve the well-being of their communities.
Specifically, we issued 3 reports that identified concerns associated with BIA's management of energy resources and categorized those concerns into the following four areas: (1) BIA's data and technology; (2) oversight of BIA activities; (3) collaboration and communication; and (4) BIA's workforce planning. As discussed below, we made 14 recommendations to BIA to help address the BIA management weaknesses that were cited in our 2017 High Risk report. BIA generally agreed with these recommendations; however, none have been fully implemented.

BIA's Data and Technology

We made 2 recommendations related to data and technology, for which BIA has taken some actions and made some progress toward implementation. However, neither of these recommendations has been fully implemented. We made these recommendations based on our June 2015 findings that BIA did not have the necessary geographic information systems (GIS) mapping data and that BIA's federal cadastral surveys could not be found or were outdated. According to Interior guidance, GIS mapping technology allows managers to easily identify resources available for lease and where leases are in effect. However, we found that BIA did not have the necessary GIS mapping data for identifying who owns and uses resources, such as existing leases. We also found that BIA could not verify who owned some Indian resources or identify where leases were in effect in a timely manner because, in part, federal cadastral surveys could not be found or were outdated. In addition, we found that the extent of this deficiency was unknown because BIA did not maintain an inventory of Indian cadastral survey needs, as called for in Interior guidance.

In response to the 2 unimplemented recommendations to help ensure that BIA can verify ownership in a timely manner and identify resources available for development, BIA has taken several actions. Regarding GIS data, BIA officials told us that on August 31, 2017, the agency integrated and deployed data viewing and map creation capabilities into its database for recording and maintaining historical and current data on ownership and leasing of Indian land and mineral resources—the Trust Asset and Accounting Management System (TAAMS). We will work with BIA to obtain the documentation needed to determine whether the deployed GIS capability has the functionality for us to consider this recommendation fully implemented. Regarding cadastral surveys, according to a BIA official, the agency requested that each of its 12 regions review and identify historic survey requests from a data system that has not been fully maintained or consistently used since 2011 to determine whether the requests are still valid. BIA officials told us the next step is to create a new database that will track cadastral survey needs and a reporting mechanism for each BIA region to use when making new survey requests. According to BIA officials, the agency anticipates the new database and reporting mechanism will be deployed by September 30, 2017. For a full description of the agency's actions and our evaluation of these actions, see table 3 in appendix II.

BIA's Oversight of Its Review Process for Energy Development

We made 5 recommendations to BIA related to its review process for energy development, none of which have been fully implemented.
In June 2015 and June 2016, we found that BIA did not have a documented process or the data needed to track its review and response times throughout the development process, including for the approval of leases, rights-of-way (ROW) agreements, and communitization agreements (CA). The ability to track and monitor the review of permits and applications is a best practice for improving the federal review process.

In response to the 5 unimplemented recommendations to help ensure that BIA fulfills its responsibilities concerning the review and approval of documents related to energy development in an efficient and transparent manner, BIA has taken some actions and identified other actions it plans to take. For example, on May 17, 2017, the Acting Assistant Secretary-Indian Affairs testified before this committee that a group of BIA subject matter experts has been working to modify TAAMS, incorporating the key identifiers and data fields needed to track and monitor review and response times for oil and gas leases and agreements. The Acting Assistant Secretary also stated that BIA is in the process of evaluating and reviewing the current realty tracking system and TAAMS to improve efficiencies and timeliness in processing workloads. BIA identified actions to track and monitor review and response times for oil and gas leases and agreements; however, BIA did not indicate whether it intends to track and monitor its review of other energy-related documents, such as ROW agreements, that must be approved before tribes can develop resources.

In another example, on May 17, 2017, the Acting Assistant Secretary-Indian Affairs testified before this committee that a National Policy Memorandum has been developed that establishes time frames for review and approval of Indian CAs. The Acting Assistant Secretary also stated that such time frames will be incorporated into the BIA Fluid Mineral Estate Procedural Handbook and the Onshore Energy and Mineral Lease Management Interagency Standard Operating Procedures. However, in our review of the National Policy Memorandum, we did not find that it establishes time frames for review and approval of Indian CAs. In response to our request for clarification, a BIA official told us the agency is in the process of drafting suggested time frames. For a full description of the agency's actions and our evaluation of these actions, see table 3 in appendix II.

BIA's Collaboration and Communication

We made 5 recommendations related to collaboration and communication in our June 2015 and November 2016 reports. BIA has taken some actions, but the actions are generally limited in scope, and none of these recommendations have been fully implemented. We found in our November 2016 report that BIA has taken steps to form an Indian Energy Service Center that is intended to, among other things, help expedite the permitting process associated with Indian energy development. However, we found several weaknesses in BIA's collaboration processes and structure. For example, in November 2016, we reported that BIA did not coordinate with other key regulatory agencies that can have a role in the development of Indian energy resources, including Interior's Fish and Wildlife Service (FWS), the Army Corps of Engineers (Corps), and the Environmental Protection Agency (EPA).
As a result, the Service Center was not established as the central point for collaborating with all federal regulatory partners generally involved in energy development, nor does it serve as a single point of contact for permitting requirements. In addition, BIA did not include the Department of Energy (DOE) in a participatory, advisory, or oversight role in the development of the Service Center. Further, although Interior's Office of Indian Energy and Economic Development (IEED) developed the initial concept and proposal for the Service Center and has special expertise regarding the development of Indian energy resources, BIA did not include IEED in the memorandum of understanding (MOU) establishing the Service Center. BIA also did not document the rationale for key management decisions or the alternatives considered in forming the Service Center—a leading practice for effective organizational change. In addition, several tribal leaders and tribal organizations made suggestions that were not reflected in BIA's Service Center; without documentation of the alternatives considered, it was unclear whether these requests were appropriately considered.

In response to the 5 unimplemented recommendations to help improve efficiencies in the federal regulatory process, BIA reported that it has taken some actions. For example:

According to a BIA official, the agency has initiated discussions with FWS, EPA, and the Corps in an effort to establish formal agreements, with a target of December 31, 2017, for establishing them. However, in its current structure, the Service Center is not serving as a lead agency or single point of contact to coordinate and navigate the regulatory process. Without additional information, it is unclear whether the formal agreements alone will allow the Service Center to serve this role. We will continue to work with BIA officials to understand how the formal agreements with other regulatory agencies will help transform the Service Center into a central point of contact for Indian energy development.

According to a BIA official, the agency developed and is currently reviewing an addendum to expand an existing MOU between DOE and IEED to include the Service Center. However, the existing MOU between DOE and IEED does not identify the role of these agencies as related to the Service Center. As such, the addendum, as currently described to us by a BIA official, will not fully implement our recommendation.

On May 17, 2017, the Acting Assistant Secretary-Indian Affairs testified before this committee that Interior considers this recommendation implemented because (1) the development of the Service Center was the result of a concept paper produced by a multi-agency team and (2) a multi-agency team held a tribal listening session, received written comments, and conducted conference calls in an effort to gather input from relevant stakeholders. We do not agree that these actions meet the intent of the recommendation. BIA's actions have not resulted in documentation of the alternatives considered, whether tribal input and requests were considered, and the rationale for not incorporating key suggestions.

In addition, in 2005, Congress provided an option for tribes to enter into a tribal energy resource agreement (TERA) with the Secretary of the Interior that allows a tribe, at its discretion, to enter into leases, business agreements, and rights-of-way agreements for energy resource development on tribal lands without review and approval by the Secretary.
However, in a June 2015 report, we found that uncertainties about Interior's regulations for implementing this option have contributed to deterring tribes from pursuing such agreements, and we recommended that Interior provide clarifying guidance. On May 17, 2017, the Acting Assistant Secretary-Indian Affairs testified before this committee that Interior is working to provide additional energy development-specific guidance on provisions of TERA regulations that tribes have identified to the department as unclear. As part of this effort, the Acting Assistant Secretary reported that IEED continues to provide training and technical assistance on the TERA regulations and plans to issue guidance on those provisions of TERA that have been identified as unclear. As of September 6, 2017, Interior has not issued additional guidance, and several Interior officials told us it is unlikely any new guidance will clarify "inherently federal functions"—one provision of Interior's regulations that tribes have identified as unclear. For a full description of the agency's actions and our evaluation of these actions, see table 3 in appendix II.

BIA's Workforce Planning

We made 2 recommendations on workforce planning to BIA in November 2016, neither of which has been fully implemented. In our November 2016 report, we found that BIA had high vacancy rates at some agency offices and that the agency had not conducted key workforce planning activities consistent with Office of Personnel Management standards and leading practices identified in our prior work.

In response to the 2 unimplemented recommendations to help ensure that BIA has a workforce with the right skills, appropriately aligned to meet the agency's goals and tribal priorities, BIA has reported several actions it plans to take. On May 17, 2017, the Acting Assistant Secretary-Indian Affairs testified before this committee that BIA is in the process of identifying and implementing a workforce plan for positions associated with the development of Indian energy and minerals. Specifically, the Acting Assistant Secretary stated that the Service Center will collect data directly from BIA, Bureau of Land Management (BLM), Office of Natural Resources Revenue (ONRR), and Office of Special Trustee (OST) employees in an effort to identify workload and necessary technical competencies. The Service Center will then work with partner bureaus to assess the skills and competencies needed for energy and mineral workforce standards. BIA's target for completing these activities is the end of 2017. BIA stated it is taking steps to identify workload and technical competencies, but without additional information, it is unclear whether these actions will identify potential gaps in its workforce or result in the establishment of a documented process for assessing BIA's workforce composition at agency offices. For a full description of the agency's actions and our evaluation of these actions, see table 3 in appendix II.

Status of GAO's Recommendations on Indian Health Care

As we have previously reported, the Indian Health Service (IHS), an agency within the Department of Health and Human Services (HHS), is charged with providing health care to approximately 2.2 million Indians. IHS oversees its health care facilities through a decentralized system of area offices, which are led by area directors and located in 12 geographic areas. In fiscal year 2016, IHS allocated about $1.9 billion for health services provided by federally and tribally operated hospitals, health centers, and health stations.
Federally operated facilities—including 26 hospitals, 56 health centers, and 32 health stations—provide mostly primary and emergency care, in addition to some ancillary or specialty services. When services are not available at federally or tribally operated facilities, IHS may, in some cases, pay for services from external providers through its Purchased/Referred Care (PRC) program—previously referred to as the Contract Health Services program. The PRC program is funded through annual appropriations and must operate within the limits of available appropriated funds. To be eligible for PRC services, recipients must generally meet several criteria, including being a member or descendant of a federally recognized tribe or having close social and economic ties with the tribe, and living within a designated PRC area. Although funding available for the PRC program has recently increased, we have reported that the program is unable to pay for all eligible services, and that these gaps in services sometimes delay diagnoses and treatments, which can exacerbate the severity of a patient's condition and necessitate more intensive treatment.

As discussed below, we made 13 recommendations to IHS that were unimplemented when we issued our 2017 High Risk report, with which HHS generally agreed. One has since been fully implemented.

Estimating PRC Program Needs

In our February 2017 High Risk report, we cited 2 recommendations from a 2011 report on the accuracy of data used for estimating PRC needs, with which HHS agreed. These recommendations remain unimplemented as of late August 2017. We based these recommendations on our finding that IHS's estimates of the extent of unmet needs in the PRC program were not reliable because of deficiencies in the agency's oversight of the collection of data on deferred and denied PRC program services. As a result, we made several recommendations for IHS to develop more accurate data for making these estimates and to improve agency oversight.

In response to the 2 recommendations on estimating PRC program needs that have not yet been fully implemented, HHS officials reported that updated policy and procedural guidance will be issued to all IHS sites by September 30, 2017. We will evaluate the policy and procedural guidance when it is issued. For a full description of the agency's actions and our evaluation of these actions, see table 4 in appendix III.

Ensuring Equitable Allocation of PRC Program Funds

We made 3 recommendations to IHS to help make its allocation of PRC program funds more equitable, none of which have been implemented. We also raised a matter for Congress to consider requiring IHS to develop and use a new PRC funding allocation methodology. These recommendations and the matter for congressional consideration were based on our findings that there were wide variations in PRC funding across the 12 IHS areas; that these variations were largely maintained by IHS's long-standing use of its base funding methodology; that variation in PRC funding was sometimes not related to the availability of IHS-funded hospitals; that IHS's estimate of PRC service users was imprecise; and that IHS allowed area offices to distribute program increase funds to local PRC programs using criteria that differed from the PRC allocation formula, without informing IHS.
As a result, we suggested that Congress consider requiring IHS to develop and use a new method to allocate all PRC program funds to account for variations across areas, and we recommended that IHS use actual counts of PRC users and variations in levels of available hospital services in its allocation formulas and develop written policies and procedures requiring area offices to notify IHS when changes are made to the allocations of funds to PRC programs. In response to our matter for congressional consideration, a bill that would have addressed this matter was introduced in the House and reported out of committee in 2016, but the bill did not become law. In response to our recommendations, HHS officials told us that a tribal/federal workgroup is currently discussing PRC fund allocation issues. In July 2017, we requested additional information about the workgroup and any discussion that has occurred or decisions that have been made about PRC funding allocation since we made the recommendation 5 years ago, but as of late August 2017, we had not received any information. As the workgroup continues to discuss PRC fund allocation issues, we will evaluate any decisions that are made to determine whether they address this recommendation. For a full description of the agency's actions and our evaluation of these actions, see table 4 in appendix III.

Revising IHS Payment Rates for Nonhospital Services

In a 2013 report on IHS payment rates for nonhospital services through the PRC program, we made 1 recommendation to IHS, which has not been fully implemented, and raised a matter for Congress to consider. The recommendation and matter were based on our finding that IHS primarily paid nonhospital providers, including physicians, at their billed charges, despite an IHS policy—in place since 1986—stating that area offices should attempt to negotiate rates with providers that are no higher than Medicare rates. As a result, we suggested that Congress consider imposing a cap on payments for physician and other nonhospital services made through IHS's PRC program that is consistent with the rates paid by other federal agencies. We also recommended that IHS monitor PRC program patients' access to physician and other nonhospital care in order to assess how any new payment rates may benefit or impede the availability of care. In response to our recommendation, HHS officials told us that the agency has developed an online PRC rates provider tracking tool that enables PRC programs to document providers that refuse to contract at their most-favored-customer rate or to accept the Medicare-like rate. We have requested documentation of this provider tracking tool, but as of late August 2017, we had not received information sufficient to consider the recommendation implemented. For a full description of the agency's actions and our evaluation of these actions, see table 4 in appendix III.

Ensuring Successful Outreach to Increase Enrollment in Expanded Coverage Options

In our February 2017 High Risk report, we cited 1 recommendation from a 2013 report on the eligibility and enrollment of American Indians in expanded health care programs, with which HHS neither agreed nor disagreed. This recommendation remains unimplemented as of late August 2017.
We reported that the expansion of Medicaid and new coverage options under the Patient Protection and Affordable Care Act (PPACA) may allow many American Indians to obtain additional health care benefits for which they were not previously eligible, resulting in IHS facilities receiving increased reimbursements from third-party payers and an increased workload for IHS facility staff responsible for processing these payments. We also found that IHS did not have an effective plan in place to ensure that a sufficient number of facility staff were prepared to assist with enrollment and to process increased third-party payments. As a result, we recommended that IHS realign its resources and personnel to increase its capacity to assist with increased enrollment and third-party billing.

IHS has not reported taking any new action to implement the remaining recommendation. In response to our request for an update, IHS again provided a copy of a planning template it developed for facility Chief Executive Officers (CEOs) that encourages them to assess the need for staffing changes in light of new and expanded coverage options available under PPACA. IHS previously explained, during the course of our review, that facility CEOs have been directed to use this planning template. We agree that developing a template to aid facilities in their planning for PPACA implementation is a good step. However, considering the large, system-wide growth in eligibility for new and expanded coverage options described in our report, we expect to see a system-wide response. Under its current approach, preparing for increased eligibility is left to the discretion of facility CEOs, and IHS has not provided any evidence that this approach has resulted in the realignment of personnel needed to address an increased need for application assistance and third-party billing. For a full description of the agency's actions and our evaluation of these actions, see table 4 in appendix III.

Improving IHS's PRC Program

We made 2 recommendations in a 2013 report on opportunities for IHS to improve the PRC program, neither of which has been fully implemented. Our recommendations were based on our finding that determining eligibility for PRC funding—including the need to ascertain, each time a referral is received, whether the patient met residency requirements and the service met medical priorities—is inherently complex. As a result, we recommended that IHS take steps to improve the PRC program, including separately tracking IHS referrals and self-referrals and revising its practices to allow available funds to be used to pay for PRC program staff. HHS agreed with our recommendation to separately track IHS referrals and self-referrals, but it did not agree to revise its practices to allow available funds to be used to pay for PRC program staff. HHS agreed with our recommendation to proactively develop potential options to streamline program eligibility requirements. IHS has not yet fully implemented these recommendations. HHS officials told us that IHS is developing 2 new measures that will track PRC-authorized referrals and self-referrals through the time of payment for each type of referral. We will review the proposed changes when they are available. For a full description of the agency's actions and our evaluation of these actions, see table 4 in appendix III.
Improving IHS Oversight of Patient Wait Times

We made 2 recommendations in a 2016 report on IHS oversight of patient wait times, one of which was implemented in August 2017. These recommendations were based on our finding that IHS had not set any agency-wide standards for patient wait times at IHS federally operated facilities. We found that, while individual facilities had taken steps to help improve patient wait times, IHS had not monitored the timeliness of patient care on an agency-wide scale. As a result, we recommended that IHS 1) develop specific agency-wide standards for patient wait times, and 2) monitor patient wait times in its federally operated facilities and ensure corrective actions are taken when standards are not met.

In response to our first recommendation, IHS developed specific standards for patient wait times and published them to the IHS Indian Health Manual website in August 2017. As a result of this action, we consider this recommendation to be fully implemented. In response to our second recommendation, in early September 2017, HHS officials told us that data collection tools for monitoring are under development. We will review IHS's monitoring of facility performance, as well as any corrective actions, when these steps have been completed. For a full description of the agency's actions on the unimplemented recommendation and our evaluation, see table 4 in appendix III.

Improving IHS Oversight of Quality of Care

We made 2 recommendations in a 2017 report on IHS's oversight of quality of care in its federally operated facilities, neither of which has been fully implemented. These recommendations were based on our finding that IHS's oversight of the quality of care provided in its federally operated facilities has been limited and inconsistent, due in part to a lack of agency-wide quality of care standards. We found that these inconsistencies were exacerbated by significant turnover in area leadership and that the agency had not defined contingency or succession plans for the replacement of key personnel, including area directors. As a result, we recommended that IHS develop agency-wide standards for quality of care, systematically monitor facility performance in meeting these standards, enhance its adverse event reporting system, and develop contingency and succession plans for the replacement of key personnel.

HHS agreed with our recommendations, and officials reported that the development of agency-wide measures, goals, and benchmarks is nearing completion. According to HHS, it is also developing a system-wide dashboard of performance accountability metrics for use at the enterprise, area, and facility levels. HHS officials told us that the enhancements to their adverse event reporting system are delayed because key personnel on the project became unavailable due to deployment. Finally, HHS officials told us that all IHS headquarters offices and area offices established succession plans that identified staff and development needs to prepare for future leadership opportunities. We requested documentation of these succession plans, but as of late August 2017, we had not received any. For a full description of the agency's actions and our evaluation of these actions, see table 4 in appendix III.

In conclusion, although Interior and HHS have taken some actions to address our recommendations related to federal programs serving Indian tribes, 49 recommendations discussed in this testimony have not yet been fully implemented.
We plan to continue monitoring the agencies' efforts to address these unimplemented recommendations. For the Federal Management of Programs that Serve Tribes and Their Members to be removed from our High-Risk List, Interior and HHS need to show improvement on the five key elements described earlier: leadership commitment, capacity, action plan, monitoring, and demonstrated progress. These five criteria form a road map for agencies' efforts to improve and ultimately address high-risk issues. We look forward to continuing our work with this committee in overseeing Interior and IHS to ensure that they are operating programs for tribes in the most effective and efficient manner, consistent with the federal government's trust responsibilities, and working toward improving services for tribes and their members.

Chairman Hoeven, Vice Chairman Udall, and Members of the Committee, this completes my prepared statement. My colleagues and I would be pleased to respond to any questions that you may have.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about education issues in this testimony or the related reports, please contact Melissa Emrey-Arras at (617) 788-0534 or emreyarrasm@gao.gov. For questions about energy resource development, please contact Frank Rusco at (202) 512-3841 or ruscof@gao.gov. For questions about health care, please contact Kathleen King at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement include Elizabeth Sirois (Assistant Director), Edward Bodine (Analyst-in-Charge), James Bennett, Richard Burkard, Kelly DeMots, Christine Kehr, Liam O'Laughlin, William Reinsberg, James Rebbe, Jay Spaan, Ann Tynan, and Emily Wilson.

Appendix I: Status of Unimplemented Recommendations to the Department of the Interior on Indian Education

Appendix II: Status of Unimplemented Recommendations to the Department of the Interior on Indian Energy

Appendix III: Status of Unimplemented Recommendations to HHS on the Indian Health Service
Why GAO Did This Study

GAO's High-Risk Series identifies federal program areas needing attention from Congress and the executive branch. GAO added federal management of programs that serve Indian tribes and their members to its February 2017 biennial update of high-risk areas in response to serious problems with management and oversight by Interior and HHS. This testimony identifies GAO's recommendations to Interior and HHS from prior GAO reports on the federal management and oversight of Indian education, energy resources, and health care that remain unimplemented. It also examines the agencies' recent actions to address the recommendations and the extent to which these actions address GAO's recommendations. To conduct this work, GAO reviewed and analyzed agency documentation on actions taken to implement the recommendations and interviewed agency officials.

What GAO Found

As discussed in the 2017 High Risk report, GAO has identified numerous weaknesses in how the Department of the Interior (Interior) and the Department of Health and Human Services (HHS) manage programs serving Indian tribes. Specifically, these weaknesses related to Interior's Bureau of Indian Education (BIE) and Bureau of Indian Affairs (BIA)—under the Office of the Assistant Secretary-Indian Affairs (Indian Affairs)—in overseeing education services and managing Indian energy resources, and to HHS's Indian Health Service (IHS) in administering health care services. GAO cited nearly 40 recommendations in its 2017 High Risk report that were not implemented, and it has since made an additional 12 recommendations in two new reports on BIE school safety and construction published in late May of this year. Interior and HHS have taken some steps to address these recommendations, but only one has been fully implemented.

Education. GAO has found serious weaknesses in Indian Affairs' oversight of Indian education. For example, in 2016, GAO found that the agency's lack of oversight of BIE school safety contributed to deteriorating facilities and equipment at schools. At one school, GAO found seven boilers that failed inspection because of safety hazards, such as elevated levels of carbon monoxide and a natural gas leak. In 2017, GAO found key weaknesses in the way Indian Affairs oversees personnel responsible for inspecting BIE school facilities for safety and manages BIE school construction projects. Of GAO's 23 recommendations on Indian education—including recommendations cited in GAO's 2017 High Risk report and in two reports issued in late May 2017—none have been fully implemented.

Energy resource management. In three prior reports on Indian energy, GAO found that BIA inefficiently managed Indian energy resources and the development process, thereby limiting opportunities for tribes and their members to use those resources to create economic benefits and improve the well-being of their communities. GAO categorized concerns associated with BIA's management of energy resources and the development process into four broad areas, including oversight of BIA activities, collaboration, and BIA workforce planning. GAO made 14 recommendations to BIA to address its management weaknesses, which were cited in the 2017 High Risk report. However, none have been fully implemented.

Health care. GAO has found that IHS provides inadequate oversight of its federally operated health care facilities and of its Purchased/Referred Care program.
For example, in 2016 and 2017, GAO found that IHS provided limited and inconsistent oversight of the timeliness and quality of care provided in its facilities and that inconsistencies in quality oversight were exacerbated by significant turnover in area leadership. GAO also found that IHS did not equitably allocate funds to meet the health care needs of Indians. Of GAO's 13 recommendations on Indian health care cited in GAO's 2017 High Risk report, one has been fully implemented.

What GAO Recommends

GAO cited nearly 40 unimplemented recommendations in its February 2017 High Risk report on federal programs for Indian tribes in education, energy development, and health care, and added 12 recommendations in two new reports on BIE school safety and construction in late May of this year. Sustained focus by Interior and HHS on fully implementing these recommendations, and continued oversight by Congress, are essential to achieving progress in these areas.
About One-Third of Covered Entities Had One or More Contract Pharmacies, and Pharmacy Characteristics Varied

We found that as of July 1, 2017, about one-third of the more than 12,000 covered entities in the 340B Program had contract pharmacies. A higher percentage of hospitals (69.3 percent) had at least one contract pharmacy compared to federal grantees (22.8 percent). Among covered entities that had at least one contract pharmacy, the number of contract pharmacies ranged from 1 to 439, with an average of 12 contract pharmacies per entity. The number of contract pharmacies varied by covered entity type, with disproportionate share hospitals having the most on average (25 contract pharmacies) and critical access hospitals the fewest (4 contract pharmacies).

Across all covered entities, the distance between the entities and their contract pharmacies ranged from 0 miles (meaning that the contract pharmacy and entity were co-located) to more than 5,000 miles; the median distance was 4.2 miles. About half of the entities had all of their contract pharmacies located within 30 miles, but this varied by entity type. Specifically, more than 60 percent of critical access hospitals and federally qualified health centers, a type of federal grantee, had all of their contract pharmacies within 30 miles. In contrast, 45 percent of disproportionate share hospitals had at least one pharmacy that was more than 1,000 miles away, compared to 11 percent or less for critical access hospitals and grantees.

Selected Covered Entities Used Various Methods to Pay Contract Pharmacies and TPAs

Contracts we reviewed between selected covered entities and contract pharmacies showed that entities generally agreed to pay their contract pharmacies a flat fee per 340B prescription, with some entities also paying additional fees based on a percentage of revenue. The flat fees generally ranged from $6 to $15 per prescription but varied by several factors, including the type of covered entity and drug, as well as the patient's insurance status. In addition to flat fees, many of the contracts we reviewed included provisions for the covered entity to pay the pharmacy a fee based on the percentage of revenue generated by each prescription. These percentage fees applied only to prescriptions provided to patients with insurance and ranged from 12 to 20 percent of the revenue generated by the prescriptions.

Selected covered entities and third-party administrators (TPAs) included in our review indicated two main methods entities use to pay for TPA services: 1) per prescription processed, or 2) per contract pharmacy. Officials with the two TPAs we interviewed and the covered entities that responded to our questionnaire reported that agreements between the parties most frequently involved covered entities compensating their TPAs with a fee for each prescription processed on behalf of the entity, but the exact method and the amount of the fee varied. For example, some covered entities reported paying their TPAs for each prescription regardless of whether it was determined to be 340B eligible, others limited the fees to prescriptions that were 340B eligible, and some reported paying TPAs only for 340B-eligible prescriptions dispensed to insured patients. A simplified sketch of how these fee structures combine for a single prescription follows.
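To make the fee structures above concrete, the following short Python sketch computes a covered entity's fees for one prescription under a hypothetical contract. The $10 flat fee, 15 percent revenue fee, and $1.25 per-prescription TPA fee are illustrative values drawn from the ranges and methods reported above, not terms of any actual contract we reviewed.

    def entity_fees(revenue, insured, flat_fee=10.00, pct_fee=0.15, tpa_fee=1.25):
        """Return (pharmacy_fee, tpa_fee) owed by a covered entity for one 340B prescription."""
        pharmacy_fee = flat_fee  # flat fee per 340B prescription (reported range: $6 to $15)
        if insured:
            # Percentage fees (reported range: 12 to 20 percent of revenue)
            # applied only to prescriptions for insured patients.
            pharmacy_fee += pct_fee * revenue
        # The TPA is assumed here to charge per prescription processed, one of the
        # two main payment methods described above.
        return round(pharmacy_fee, 2), tpa_fee

    print(entity_fees(revenue=200.00, insured=True))   # (40.0, 1.25): $10 flat fee plus 15% of $200
    print(entity_fees(revenue=200.00, insured=False))  # (10.0, 1.25): flat fee only

Under these assumed terms, an insured prescription generating $200 in revenue yields a $40 pharmacy fee, while the same prescription for an uninsured patient yields only the $10 flat fee, which illustrates why entities' fee obligations depend heavily on their patients' insurance mix.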
About Half of the Covered Entities GAO Reviewed Provided Low-Income, Uninsured Patients Discounts on 340B Drugs at Some or All of Their Contract Pharmacies

Thirty of the 55 covered entities responding to our questionnaire reported providing low-income, uninsured patients discounts on 340B drugs at some or all of their contract pharmacies. Federal grantees were more likely than hospitals to provide patients with discounts on the price of drugs and to provide them at all contract pharmacies. Of the 30 covered entities that provided discounts, 23 indicated that they pass on the full 340B discount to patients, resulting in patients paying the 340B price or less for drugs. In many cases, these covered entities indicated that patients received drugs at no cost.

The 30 covered entities providing 340B discounts to low-income, uninsured patients reported using a variety of methods to determine whether patients were eligible for these discounts. Fourteen of the covered entities said they determined eligibility for discounts based on whether a patient's income was below certain thresholds as a percentage of the federal poverty level, 11 reported providing discounts to all patients, and 5 said they determined eligibility for discounts on a case-by-case basis. Some covered entities that did not provide discounts on 340B drugs at their contract pharmacies reported assisting patients with drug costs through other mechanisms. For example, some covered entities reported providing charity care to low-income patients, including free or discounted prescriptions, and some reported providing discounts on drugs dispensed by their in-house pharmacies.

Oversight Weaknesses Impede HRSA's Ability to Ensure Compliance at 340B Contract Pharmacies

We found weaknesses in HRSA's oversight that impede its ability to ensure compliance with 340B Program requirements at contract pharmacies. Specifically:

Incomplete Data. We found that HRSA does not have complete data on all contract pharmacy arrangements in the 340B Program to inform its oversight efforts, including its audits of covered entities—the agency's primary method for assessing entity compliance with program requirements. Although HRSA requires covered entities to register their contract pharmacies with the agency, it does not require covered entities to separately register contract pharmacies to each site of the covered entity with which a contractual relationship exists. HRSA officials told us that the number of registered contract pharmacy arrangements increases a covered entity's chance of being randomly selected for a risk-based audit. Our analysis of HRSA data showed that the registrations for 57 percent of covered entities with multiple sites only specified relationships between contract pharmacies and each entity's main site, as opposed to all sites contracted to distribute drugs on that entity's behalf. Thus, the likelihood of an entity being selected for an audit depends, at least in part, on how the entity registers its pharmacies rather than on its actual number of pharmacy arrangements, as the sketch below illustrates. We concluded that without more complete information on covered entities' contract pharmacy arrangements, HRSA cannot ensure that it is optimally targeting the limited number of risk-based audits done each year to entities that are at a higher risk for compliance issues because they have more contract pharmacy arrangements.
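The following minimal Python sketch illustrates this registration effect. It assumes, purely for illustration, that selection likelihood is proportional to registered arrangements; our report does not describe HRSA's actual selection algorithm beyond its risk-based, random character, and the entity names and counts are hypothetical.

    def selection_weights(entities):
        """Weight each entity's audit-selection chance by its registered arrangements."""
        total = sum(e["registered"] for e in entities)
        return {e["name"]: e["registered"] / total for e in entities}

    entities = [
        # Entity A actually has 40 pharmacy arrangements across its sites, but it
        # registered pharmacies only to its main site, so only 10 appear in the data.
        {"name": "Entity A", "registered": 10},
        # Entity B has 10 arrangements, all of them registered.
        {"name": "Entity B", "registered": 10},
    ]
    print(selection_weights(entities))  # {'Entity A': 0.5, 'Entity B': 0.5}: equal odds despite A's larger footprint

In this example, the two entities appear equally likely to be selected even though Entity A has four times as many actual arrangements, which is why incomplete registration undermines risk-based targeting.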
Limited Oversight of Duplicate Discounts. We found that HRSA audits do not fully assess compliance with the 340B Program prohibition on duplicate discounts for drugs prescribed to Medicaid beneficiaries. Specifically, covered entities are prohibited from subjecting manufacturers to "duplicate discounts," in which drugs prescribed to Medicaid beneficiaries are subject to both the 340B price and a rebate through the Medicaid Drug Rebate Program. However, HRSA only assesses the potential for duplicate discounts in Medicaid fee-for-service, and not in Medicaid managed care, despite the fact that the majority of Medicaid enrollees, prescriptions, and drug spending were in managed care. HRSA officials told us that they do not assess the potential for duplicate discounts in Medicaid managed care as part of their audits because they have yet to issue guidance on how covered entities should prevent these duplicate discounts. We concluded that until HRSA develops guidance and includes an assessment of the potential for duplicate discounts in Medicaid managed care as part of its audits, the agency does not have assurance that covered entities' efforts are effectively preventing noncompliance, and manufacturers are at risk of being erroneously required to provide duplicate discounts for Medicaid prescriptions.

Lack of Information on Full Scope of Noncompliance. We found that HRSA requires covered entities for which it identifies issues of noncompliance during audits to assess the full extent of the noncompliance, but it does not provide guidance as to how entities should make these assessments. Specifically, HRSA does not specify the time period covered entities must review to determine whether any related noncompliance occurred and instead relies on each entity to make this determination. Additionally, HRSA does not require most audited covered entities to communicate the methodology used to assess the full scope of noncompliance or the findings of their assessments, including how many or which manufacturers were due repayment. As a result, we concluded that HRSA does not know the scope of covered entities' assessments and whether they were effective at identifying the full extent of the noncompliance identified in the audit.

Lack of Evidence of Corrective Actions. We found that prior to closing an audit, HRSA's audit procedures do not require all covered entities to provide evidence that they have taken corrective action and are in compliance with program requirements. Instead, HRSA relies on the 90 percent of covered entities subject to risk-based audits to self-attest that all audit findings have been addressed and that the entity has come into compliance with 340B Program requirements. We concluded that HRSA therefore does not have reasonable assurance that the majority of audited covered entities have corrected the issues identified in their audits and are not continuing practices that could lead to noncompliance, increasing the risk of diversion, duplicate discounts, and other violations of 340B Program requirements.

Limited Guidance on Contract Pharmacy Oversight. We found that HRSA's contract pharmacy oversight guidance for covered entities lacks specificity and thus provides entities with considerable discretion over the scope and frequency of their oversight practices.
Specifically, HRSA’s 2010 guidance on contract pharmacy services specifies that covered entities are responsible for overseeing their contract pharmacies to ensure that the drugs entities distribute through them comply with 340B Program requirements, but states that, “the exact method of ensuring compliance is left up to the covered entity.” According to HRSA officials, if a covered entity indicates that it has performed oversight in the 12 months prior to a HRSA audit, then HRSA considers the entity to have met its standards for conducting contract pharmacy oversight, regardless of what the oversight encompassed. However, due, at least in part, to a lack of specific guidance, we found that some covered entities performed minimal contract pharmacy oversight. Additionally, the identified noncompliance at contract pharmacies raises questions about the effectiveness of covered entities’ current oversight practices. For example, 66 percent of the 380 diversion findings in HRSA audits since 2012 involved drugs distributed at contract pharmacies, and 33 of the 813 audits for which results were available had findings for lack of contract pharmacy oversight. We concluded that as a result of the lack of specific guidance and the numerous HRSA audit findings of noncompliance occurring at contract pharmacies, HRSA does not have assurance that covered entities’ contract pharmacy oversight practices are sufficiently identifying 340B noncompliance. Our June 2018 report contained seven recommendations to HRSA to strengthen its oversight of the 340B Program. HHS concurred with our four recommendations that HRSA should 1) issue guidance to covered entities on the prevention of duplicate discounts under Medicaid managed care; 2) incorporate an assessment of covered entities’ compliance with the prohibition on duplicate discounts, as it relates to Medicaid managed care claims, into its audit process once the guidance is issued; 3) issue guidance on the length of time covered entities must look back following audits to identify the full scope of noncompliance identified during audits; and 4) provide more specific guidance to covered entities regarding contract pharmacy oversight, including the scope and frequency of such oversight. HHS did not concur with our three recommendations that HRSA should 1) require covered entities to register contract pharmacies for each site of the entity for which a contract exists; 2) require all covered entities to specify their methodology for determining the full scope of noncompliance identified during the audit as part of their corrective action plans, and incorporate reviews of covered entities’ methodology into their audit process to ensure that entities are adequately assessing the full scope of noncompliance; and 3) require all covered entities to provide evidence that their corrective action plans have been successfully implemented prior to closing audits, including documentation of the results of the entities’ assessments of the full scope of noncompliance identified during each audit. HHS cited concerns that implementing these recommendations would be burdensome on covered entities and HRSA. However, as explained in our report, we believe that these recommendations would only create limited additional burden on covered entities and the agency and are warranted to improve HRSA’s oversight of the 340B Program. Chairman Burgess, Ranking Member Green, and Members of the Subcommittee, this concludes my prepared statement. 
I would be pleased to answer any questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staff members have any questions concerning this testimony, please contact Debra A. Draper at (202) 512-7114 or draper@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, Michelle Rosenberg (Assistant Director), Amanda Cherrin (Analyst in Charge), Jennie Apter, George Bogart, and David Lichtenfeld made key contributions to this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

This testimony summarizes the information contained in GAO's June 2018 report, entitled Drug Discount Program: Federal Oversight of Compliance at 340B Contract Pharmacies Needs Improvement (GAO-18-480).

What GAO Found

The 340B Drug Pricing Program (340B Program), which is administered by the U.S. Department of Health and Human Services' (HHS) Health Resources and Services Administration (HRSA), requires drug manufacturers to sell outpatient drugs at a discount to covered entities so that their drugs can be covered by Medicaid. Covered entities include certain hospitals and federal grantees (such as federally qualified health centers). About one-third of the more than 12,000 covered entities contract with outside pharmacies--contract pharmacies--to dispense drugs on their behalf.

GAO's review of 30 contracts found that all but one contract included provisions for the covered entity to pay the contract pharmacy a flat fee for each eligible prescription. The flat fees generally ranged from $6 to $15 per prescription, but varied by several factors, including the type of drug or the patient's insurance status. Some covered entities also agreed to pay pharmacies a percentage of revenue generated by each prescription. Thirty of the 55 covered entities GAO reviewed reported providing low-income, uninsured patients discounts on 340B drugs at some or all of their contract pharmacies. Of the 30 covered entities that provided discounts, 23 indicated that they pass on the full 340B discount to patients, resulting in patients paying the 340B price or less for drugs. Additionally, 14 of the 30 covered entities said they determined patients' eligibility for discounts based on whether their income was below a specified level, 11 reported providing discounts to all patients, and 5 determined eligibility for discounts on a case-by-case basis.

GAO found weaknesses in HRSA's oversight that impede its ability to ensure compliance with 340B Program requirements at contract pharmacies, such as the following:

- HRSA audits do not fully assess compliance with the 340B Program prohibition on duplicate discounts for drugs prescribed to Medicaid beneficiaries. Specifically, manufacturers cannot be required to provide both the 340B discount and a rebate through the Medicaid Drug Rebate Program. However, HRSA only assesses the potential for duplicate discounts in Medicaid fee-for-service and not Medicaid managed care. As a result, it cannot ensure compliance with this requirement for the majority of Medicaid prescriptions, which occur under managed care.

- HRSA requires covered entities that have noncompliance issues identified during an audit to assess the full extent of the noncompliance. However, because HRSA does not require all the covered entities to explain the methodology they used for determining the extent of the noncompliance, it does not know the scope of the assessments and whether they are effective at identifying the full extent of noncompliance.

- HRSA does not require all covered entities to provide evidence that they have taken corrective action and are in compliance with program requirements prior to closing the audit. Instead, HRSA generally relies on each covered entity to self-attest that all audit findings have been addressed and that the entity came into compliance with 340B Program requirements.

Given these weaknesses, HRSA does not have reasonable assurance that covered entities have adequately identified and addressed noncompliance with 340B Program requirements.
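The payment provisions described above reduce to simple per-prescription arithmetic. The following is a minimal illustrative sketch; the function name and all dollar figures are hypothetical and are not drawn from any specific contract GAO reviewed.

```python
# Contract pharmacy payment arithmetic as described above: a flat fee per
# eligible prescription, sometimes plus a share of prescription revenue.
# All names and figures here are hypothetical illustrations.

def pharmacy_payment(prescriptions: int, flat_fee: float,
                     revenue: float = 0.0, revenue_share: float = 0.0) -> float:
    """Total a covered entity pays a contract pharmacy for a period."""
    return prescriptions * flat_fee + revenue * revenue_share

# 1,000 eligible prescriptions at a $12 flat fee, plus 15 percent of
# $50,000 in prescription revenue under a revenue-sharing provision.
print(pharmacy_payment(1000, 12.00, revenue=50_000, revenue_share=0.15))  # 19500.0
```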
Selected VA Medical Centers’ Reviews of Providers’ Clinical Care Were Not Always Documented or Timely We found that from October 2013 through March 2017, the five selected VA medical centers required reviews of a total of 148 providers’ clinical care after concerns were raised about their care, but officials at these medical centers could not provide documentation to show that almost half of these reviews were conducted. We found that all five VA medical centers lacked at least some documentation of the reviews they told us they conducted, and in some cases, we found that the required reviews were not conducted at all. Specifically, across the five VA medical centers, we found the following: The medical centers lacked documentation showing that one type of review—focused professional practice evaluations (FPPE) for cause—had been conducted for 26 providers after concerns had been raised about their care. FPPEs for cause are reviews of providers’ care over a specified period of time, during which the provider continues to see patients and has the opportunity to demonstrate improvement. Documentation of these reviews is explicitly required under VHA policy. Additionally, VA medical center officials confirmed that FPPEs for cause that were required for another 21 providers were never conducted. The medical centers lacked documentation showing that retrospective reviews—which assess the care previously delivered by a provider during a specific period of time— had been conducted for 8 providers after concerns had been raised about their clinical care. One medical center lacked documentation showing that reviews had been conducted for another 12 providers after concerns had been raised about their care. In the absence of any documentation, we were unable to identify the types of reviews, if any, that were conducted for these 12 providers. We also found that the five selected VA medical centers did not always conduct reviews of providers’ clinical care in a timely manner. Specifically, of the 148 providers, the VA medical centers did not initiate reviews of 16 providers for 3 months, and in some cases, for multiple years, after concerns had been raised about the providers’ care. In a few of these cases, additional concerns about the providers’ clinical care were raised before the reviews began. We found that two factors were largely responsible for the inadequate documentation and untimely reviews of providers’ clinical care we identified at the selected VA medical centers. First, VHA policy does not require VA medical centers to document all types of reviews of providers’ clinical care, including retrospective reviews, and VHA has not established a timeliness requirement for initiating reviews of providers’ clinical care. Second, VHA’s oversight of the reviews of providers’ clinical care is inadequate. Under VHA policy, networks are responsible for overseeing the credentialing and privileging processes at their respective VA medical centers. While reviews of providers’ clinical care after concerns are raised are a component of credentialing and privileging, we found that none of the network officials we spoke with described any routine oversight of such reviews. This may be in part because the standardized tool that VHA requires the networks to use during their routine audits does not direct network officials to ensure that all reviews of providers’ clinical care have been conducted and documented. 
Further, some of the network officials we interviewed told us they were not using the standardized audit tool as required. Without adequate documentation and timely completion of reviews of providers' clinical care, VA medical center officials lack the information they need to make decisions about providers' privileges, including whether or not to take adverse privileging actions against providers. Furthermore, because of its inadequate oversight, VHA lacks reasonable assurance that VA medical center officials are reviewing all providers about whom clinical care concerns have been raised and are taking adverse privileging actions against the providers when appropriate. To address these shortcomings, we recommended that VHA (1) require documentation of all reviews of providers' clinical care after concerns have been raised, (2) establish a timeliness requirement for initiating such reviews, and (3) strengthen its oversight by requiring networks to oversee VA medical centers to ensure that such reviews are documented and initiated in a timely manner. VA concurred with these recommendations and described plans for VHA to revise existing policy and update the standardized audit tool used by the networks to include more comprehensive oversight of VA medical centers' reviews of providers' clinical care after concerns have been raised.

Selected VA Medical Centers Did Not Report All Providers to the NPDB or to State Licensing Boards as Required

We found that from October 2013 through March 2017, the five VA medical centers we reviewed had reported only one of nine providers required to be reported to the National Practitioner Data Bank (NPDB) under VHA policy. These nine providers either had adverse privileging actions taken against them or resigned or retired while under investigation, before an adverse privileging action could be taken. None of these nine providers were reported to state licensing boards as required by VHA policy. The VA medical centers documented that these nine providers had significant clinical deficiencies that sometimes resulted in adverse outcomes for veterans. For example, the documentation shows that one provider's surgical incompetence resulted in numerous repeat surgeries for veterans. Another provider's opportunity to demonstrate improvement through an FPPE for cause had to be halted, and the provider removed from providing care, after only a week due to concerns that continuing the review would potentially harm patients. In addition to these nine providers, one VA medical center terminated the services of four contract providers based on deficiencies in the providers' clinical performance, but the facility did not follow any of the required steps for reporting providers to the NPDB or relevant state licensing boards. This is concerning, given that the VA medical center documented that one of these providers was terminated for cause related to patient abuse after only 2 weeks of work at the facility. Two of the five VA medical centers we reviewed each reported one provider to a state licensing board for failing to meet generally accepted standards of clinical practice to the point that it raised concerns for the safety of veterans. However, we found that in both cases the medical centers' reporting to the state licensing board took over 500 days to complete, significantly longer than the 100 days suggested in VHA policy. Across the five VA medical centers, we found that providers were not reported to the NPDB and state licensing boards as required for two reasons.
First, VA medical center officials were generally not familiar with, or misinterpreted, VHA policies related to NPDB and state licensing board reporting. For example, at one VA medical center, we found that officials failed to report six providers to the NPDB because they were unaware that they had been delegated responsibility for NPDB reporting. Officials at two other VA medical centers incorrectly told us that VHA cannot report contract providers to the NPDB. At another VA medical facility, officials did not report a provider to the NPDB or to any of the state licensing boards where the provider held a medical license because they learned that one state licensing board had already found out about the issue independently; the officials therefore did not believe that they needed to report the provider. This misinterpretation of VHA policy meant that the NPDB and the state licensing boards in other states where the provider held licenses were not alerted to concerns about the provider's clinical practice. Second, VHA policy does not require the networks to oversee whether VA medical centers are reporting providers to the NPDB or state licensing boards when warranted. We found, for example, that network officials were unaware of situations in which VA medical center officials failed to report providers to the NPDB. We concluded that VHA lacks reasonable assurance that all providers who should be reported to these entities are reported.

VHA's failure to report providers to the NPDB and state licensing boards as required makes it easier for providers who delivered substandard care at one facility to obtain privileges at another VA medical center or at hospitals outside of VA's health care system. We found several cases of this occurring among the providers who were not reported to the NPDB or state licensing boards by the five VA medical centers we reviewed. For example, we found that two of the four contract providers whose contracts were terminated for clinical deficiencies remained eligible to provide care to veterans outside of that VA medical center. At the time of our review, one of these providers held privileges at another VA medical center, and another participated in the network of providers that can provide care for veterans in the community. We also found that a provider who was not reported to the NPDB as required during the period we reviewed had their privileges revoked 2 years later by a non-VA hospital in the same city for the same reason the provider was under investigation at the VA medical center. Officials at this VA medical center did not report this provider following a settlement agreement under which the provider agreed to resign, even though a committee within the VA medical center had recommended, prior to the agreement, that the provider's privileges be revoked. There was no documentation of the reasons why this provider was not reported to the NPDB under VHA policy.

To improve VA medical centers' reporting of providers to the NPDB and state licensing boards and VHA's oversight of these processes, we recommended that VHA require its networks to establish a process for overseeing VA medical centers to ensure that they are reporting to the NPDB and to state licensing boards and that this reporting is timely. VA concurred with this recommendation and told us that it plans to include oversight of timely reporting to the NPDB and state licensing boards as part of the standard audit tool used by the networks.
GAO Contact and Staff Acknowledgments

If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Marcia A. Mann (Assistant Director), Kaitlin M. McConnell (Analyst-in-Charge), Summar C. Corley, Krister Friday, and Jacquelyn Hamilton.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

This testimony summarizes the information contained in GAO's November 2017 report, entitled VA Health Care: Improved Policies and Oversight Needed for Reviewing and Reporting Providers for Quality and Safety Concerns (GAO-18-63).

What GAO Found

Department of Veterans Affairs (VA) medical center (VAMC) officials are responsible for reviewing the clinical care delivered by their privileged providers—physicians and dentists who are approved to independently perform specific services—after concerns are raised. The five VAMCs GAO selected for review collectively required review of 148 providers from October 2013 through March 2017 after concerns were raised about their clinical care. GAO found that these reviews were not always documented or conducted in a timely manner. GAO identified these providers by reviewing meeting minutes from the committee responsible for requiring these types of reviews at the respective VAMCs, and through interviews with VAMC officials. The selected VAMCs were unable to provide documentation of these reviews for almost half of the 148 providers. Additionally, the VAMCs did not start the reviews of 16 providers for 3 months to multiple years after the concerns were identified. GAO found that Veterans Health Administration (VHA) policies do not require documentation of all types of clinical care reviews and do not establish timeliness requirements. GAO also found that VHA does not adequately oversee these reviews at VAMCs through its Veterans Integrated Service Networks (VISN), which are responsible for overseeing the VAMCs. Without documentation and timely reviews of providers' clinical care, VAMC officials may lack information needed to reasonably ensure that VA providers are competent to provide safe, high quality care to veterans and to make appropriate decisions about these providers' privileges.

GAO also found that from October 2013 through March 2017, the five selected VAMCs did not report most of the providers who should have been reported to the National Practitioner Data Bank (NPDB) or state licensing boards (SLB) in accordance with VHA policy. The NPDB is an electronic repository for critical information about the professional conduct and competence of providers. GAO found that selected VAMCs did not report to the NPDB eight of nine providers who had adverse privileging actions taken against them or who resigned during an investigation related to professional competence or conduct, as required by VHA policy, and none of these nine providers had been reported to SLBs. GAO found that officials at the selected VAMCs misinterpreted or were not aware of VHA policies and guidance related to NPDB and SLB reporting processes, resulting in providers not being reported. GAO also found that VHA and the VISNs do not conduct adequate oversight of NPDB and SLB reporting practices and cannot reasonably ensure appropriate reporting of providers. As a result, VHA's ability to provide safe, high quality care to veterans is hindered because other VAMCs, as well as non-VA health care entities, will be unaware of serious concerns raised about a provider's care. For example, GAO found that after one VAMC failed to report to the NPDB or SLBs a provider who resigned to avoid an adverse privileging action, a non-VA hospital in the same city took an adverse privileging action against that same provider for the same reason 2 years later.
Background

The Military Health System is responsible for, among other things, assuring the overall oral health of all uniformed DOD personnel. As part of this health system, each service's dental corps provides dental care for its servicemembers. The Army, the Navy, and the Air Force Dental Corps include approximately 3,000 active duty dentists and approximately 247 dental clinics (200 in the United States) to serve over 1.3 million servicemembers. Unlike their medical counterparts, the services' dental corps rarely provide beneficiary care, according to service officials. The primary role of military dentists is to ensure the oral health and readiness of servicemembers. Servicemembers' oral health is evaluated using standardized measures to determine the extent to which they are deployable. Generally, servicemembers with identified urgent, emergent, or unknown dental treatment needs are not considered to be worldwide deployable until their oral health is adequately addressed.

Becoming a Military Dentist

Most military dentists enter service through the Armed Forces Health Professions Scholarship Program (AFHPSP), a scholarship program available to students enrolled in or accepted to dental school. Under AFHPSP, DOD pays for tuition, books, and fees, and provides a monthly stipend. In return, the students incur an obligation to serve 6 months of active duty service for each 6 months of benefits received, with a 2-year minimum obligation. AFHPSP dental students can pursue either a Doctor of Dental Surgery or a Doctor of Dental Medicine degree to become a general dentist. In addition to AFHPSP, the services recruit fully qualified licensed dentists. For example, individuals may become military dentists through direct accessions, either by entering the service as a fully trained, licensed dentist or through the Financial Assistance Program, which provides stipends for dentists accepted to or enrolled in a residency program. For additional information on these and other recruitment programs, see appendix I.

Regardless of the recruitment program, dentists may begin to practice after obtaining a degree and completing licensure requirements. Military dentists may pursue postgraduate training through a general dentistry program, such as the Advanced Education in General Dentistry Program, a general practice residency, or a specialty dental program offered through the Uniformed Services University of the Health Sciences Postgraduate Dental College. Postgraduate dental college includes training and/or residency within a specific specialty and typically requires 1 to 6 years of additional training. While in a postgraduate dental college program, participants incur an additional 6 months of active duty service obligation for each 6 months in training, with a minimum 2-year active duty service obligation. However, this obligation can be served concurrently with obligations already incurred through AFHPSP if it was incurred through sponsored postgraduate education in a military or affiliated program. Figure 1 portrays the path to becoming a military dentist and the active duty obligation incurred for AFHPSP dental students.

Each service takes steps to validate whether a military dentist has the appropriate professional qualifications and clinical abilities. Validation includes ensuring the dentist is credentialed and privileged to practice. See appendix II for more details on service processes for monitoring the qualifications and performance of dentists.
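The AFHPSP obligation arithmetic described above reduces to a simple rule. The following is a minimal illustrative sketch, not DOD's actual computation; the function and variable names are our own, and it encodes only the rules stated above (6 months of obligation per 6 months of benefits or training, a 2-year minimum, and concurrent service of sponsored postgraduate training obligations).

```python
# Illustrative sketch of the AFHPSP service-obligation rules described above.
# Not DOD's actual computation; names and structure are hypothetical.

def afhpsp_obligation_months(benefit_months: int) -> int:
    """6 months of active duty per 6 months of benefits, 2-year minimum."""
    return max(benefit_months, 24)

def total_obligation_months(benefit_months: int,
                            training_months: int = 0,
                            concurrent: bool = True) -> int:
    """Postgraduate training adds 6 months of obligation per 6 months in
    training (2-year minimum); if incurred through a sponsored military or
    affiliated program, it may be served concurrently with the AFHPSP
    obligation rather than added to it."""
    scholarship = afhpsp_obligation_months(benefit_months)
    training = max(training_months, 24) if training_months else 0
    return max(scholarship, training) if concurrent else scholarship + training

# A 4-year (48-month) scholarship followed by a 12-month sponsored residency
# served concurrently: the obligation is max(48, max(12, 24)) = 48 months.
print(total_obligation_months(48, training_months=12))  # 48
```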
Roles and Responsibilities for the Recruitment and Retention of Military Dentists

The Assistant Secretary of Defense for Health Affairs (ASD(HA)) serves as the principal advisor for all DOD health policies and programs. The ASD(HA) issues DOD instructions, publications, and memorandums that implement policy approved by the Secretary of Defense or the Under Secretary of Defense for Personnel and Readiness and that govern the management of DOD medical programs. The ASD(HA) also exercises authority, direction, and control over the President of the Uniformed Services University of the Health Sciences (USUHS). Further, the ASD(HA) sets the special and incentive pay amounts for all military dentists. The ASD(HA) reports to the Under Secretary of Defense for Personnel and Readiness, who in turn reports to the Secretary of Defense. The Army, the Navy, and the Air Force medical commands and agencies report through their service chiefs to their respective military department secretaries and then to the Secretary of Defense. The Army, the Navy, and the Air Force have the authority to recruit, train, and retain dentists. Each military service has its own organizational structure and responsibilities. See figure 2.

In September 2013, the Defense Health Agency was established to support greater integration of clinical and business processes across the Military Health System. The Defense Health Agency, among other things, manages the execution of policies issued by the ASD(HA) and manages and executes the Defense Health Program appropriation, which funds the services' medical departments. By no later than September 30, 2021, the Director of the Defense Health Agency will assume responsibility for the administration of each military treatment facility, to include budgetary matters, information technology, and health care administration and management, among other things. Although military treatment facilities include dental clinics, DOD initially intended to exclude dental care (except oral and maxillofacial surgery) from the transfer to the Defense Health Agency. However, as of September 2018, DOD stated that it is assessing the extent to which dental care will fall under the Defense Health Agency's administration.

GAO's Prior Work on Military Treatment Facility Staffing Models and Tools

In July 2010, we found that the services' collaborative planning efforts to determine staffing of medical personnel working in fixed military treatment facilities (MTF), including dentists, were limited, and that their staffing models and tools had not been validated and verified in all cases as DOD policy requires. Specifically, we found that some Army specialty modules contained outdated assumptions and that only a portion of the models had been completely validated. We also found that the Navy did not have a model, but instead employed a staffing tool that used current manning as a baseline and adjusted its requirements based on emerging needs or major changes to its mission. However, the Navy's tool was not validated or verified in accordance with DOD policy. Further, we found that the Air Force may not know its true medical requirements because the model it relied on also was not validated or verified. We made several recommendations in our 2010 report, two of which were aimed at improving the staffing of MTFs.
Specifically, we recommended that the services identify common medical capabilities shared across military treatment facilities and develop and implement cross-service medical staffing standards for these capabilities, as appropriate. We also recommended that each service update or develop medical personnel requirements determination tools as needed to ensure that they use validated and verifiable processes. The Army, the Navy, and the Air Force have implemented our recommendation related to the development and implementation of validated and verifiable tools for developing medical personnel requirements. Additionally, they identified and developed standardized cross-service staffing standards for over 40 medical specialties and incorporated them into their individual MTF staffing tools.

Two of Three Services Use Validated Dental Clinic Staffing Models, and None of the Models Incorporate Cross-Service Standards

The Army and the Air Force Use Validated Dental Clinic Staffing Models, and the Navy's Proposed Model Is under Review

The Army and the Air Force have validated the dental clinic staffing models that they use, and the Navy's draft model is under review. In the absence of a validated model, the Navy uses a general ratio to staff its dental clinics. See table 1 for a description of each service's methodology for staffing dental clinics. The Army and the Air Force models, which were developed in accordance with DOD guidance and service-specific requirements, are subject to the following validation processes:

- Army. Since 2011, the Army has used the Army Dental Clinic Model, which, according to officials, is intended to determine the minimum number of dentists necessary, by location, to ensure the medical readiness of soldiers. Army staffing models are subject to validation by the U.S. Army Manpower Analysis Agency, which validated the Army's Dental Clinic Model when it was developed in 2011. According to an Army official, the model's validation expired in 2014 and was not re-validated until May 2018 due to limited resources. Additionally, Army officials stated that the data used in the model are updated on an annual basis and that the model is subject to revalidation every 5 years.

- Air Force. Since 2014, according to Air Force officials, the Air Force has used its Dental Manpower Model to determine the minimum number of dentists required, by clinic, to ensure the medical readiness of servicemembers served by Air Force dental clinics. According to Air Force officials, the Air Force Dental Manpower Model is subject to review and validation that includes input from the Air Force Medical Service; the Surgeon General's Manpower, Personnel, and Resources office; the Air Force Personnel Center; and consultants. Officials told us the model is reviewed and validated annually and presented to the Dental Operations Panel and the Air Force medical service corporate structure. The model was most recently validated in April 2018.

According to Navy Bureau of Medicine and Surgery (BUMED) officials, the Navy does not yet have a model and instead uses a general ratio of one dentist for every 1,000 sailors as a baseline to initially determine the staffing requirements of its dental clinics. This ratio is adjusted based upon emerging needs or major changes to mission. In 2013, according to Navy officials, BUMED began developing a Dental Services Model that could be used to determine dental clinic staffing needs.
In November 2016, BUMED internally released a draft report recommending that the dental corps approve and implement the Dental Services Model as the staffing standard for dental clinics. According to a Navy official, this report was provided to dental corps leadership for review in July 2018, and the review is expected to be completed in October 2018. According to BUMED officials, if dental corps leadership approves the model for use as an official staffing standard, the model would be subject to official Navy validation processes, which, in accordance with DOD policy, would entail verification and validation throughout the model's lifecycle. Conversely, if the dental corps decides to use the model as an informal staffing tool to supplement its current processes, a BUMED official stated that it would be subject to an ad hoc internal review every 3 years that mirrors the Navy's official validation process.

The Services Have Not Developed Cross-Service Staffing Standards for Dental Care

Currently, the Army, the Navy, and the Air Force each use different service-specific standards and other factors to determine the number of dentists needed at their respective dental clinics. As previously discussed, the services have developed and are in the process of implementing cross-service staffing standards—that is, a standardized approach to staffing the common day-to-day health needs of the patient population—for certain medical specialties. In response to DOD policy and our 2010 recommendation, the services established a working group to identify and develop common cross-service staffing standards, and in 2017, the tri-service working group established such standards for 42 different medical specialties. These standards are based on actual workload data for common capabilities within selected medical specialties and were incorporated into each service's staffing tools to provide consistent values for the minimum number of staff required to meet patient needs. However, according to an official involved in the development of the standards, the services have not collaborated to develop a plan to establish a similar set of standards for dental care.

DOD guidance directs modeling and simulation management to develop plans and procedures and to pursue common and cross-cutting modeling tools and data across the services. Also, the ASD(HA) has supported the effort to establish consistent workload drivers across the services for determining personnel requirements for MTFs. According to a tri-service working group co-chair, the group did not develop cross-service staffing standards for dental care because, at the time, the quality of available data on dental procedure frequency and duration varied across the services. The same official stated that these data have since improved, but that the services still do not have plans to develop cross-service staffing standards for dental care. Additionally, service officials maintained that they must operate their respective dental clinics autonomously and in a manner that best supports their service-specific needs and unique command structures. Specifically, officials from each service's dental corps stated that their primary mission is focused on the medical readiness of servicemembers and generally does not involve beneficiary care. As such, they have not collaborated on staffing efforts with the other services.
While we recognize that each service operates under a different command structure, readiness requirements for oral health are standardized across DOD, and all servicemembers are required to meet the same level of oral health in order to be deployable. Additionally, since DOD is currently assessing whether it will consolidate the services' dental corps staff under the Defense Health Agency's administration, it remains unclear to us why dental care has been excluded from cross-service efforts to develop a common set of standards for staffing military dental clinics—especially because the services have developed common staffing standards for 42 other medical specialties.

The Services Generally Have Met Goals for Recruiting Dental Students, but Not for Fully Qualified Dentists, and Do Not Know the Extent to Which Certain Programs Are Effective at Helping Recruit and Retain Dentists

The Army, the Navy, and the Air Force have generally met their recruitment goals for dental students, but have generally not met their recruitment goals for the fully qualified dentists needed to address the oral health needs of the services. Overall, we found that the services maintained their staffing levels for military dentists during fiscal years 2012 through 2016, but experienced gaps within certain specialties. Further, the services rely on various programs, special pays, and incentives to recruit and retain military dentists, but they do not know the extent to which some of these programs are effective at helping them to do so.

The Services Generally Met Recruitment Goals for Dental Students, but Faced Challenges Recruiting Fully Qualified Dentists

Our analysis of Army, Navy, and Air Force data found that in fiscal years 2012 through 2016, the services generally met their goals for dental students recruited through AFHPSP. From fiscal year 2012 through fiscal year 2016, the Army met 94 percent of its goals, the Navy met 100 percent of its goals, and the Air Force met 97 percent of its goals. Figure 3 shows the AFHPSP recruitment goals and achievements, by service, for fiscal years 2012 through 2016.

To address their immediate need for dental providers, the services also recruit fully qualified general dentists or specialists. However, the services have experienced challenges meeting their recruitment goals for fully qualified dentists. Figure 4 shows the recruitment goals and achievements for fully qualified dentists for fiscal years 2012 through 2016. As shown in the figure, the Army did not meet its recruitment goals for 5 consecutive years, the Navy did not meet its goals for 2 of these 5 years, and the Air Force did not meet its goals for 3 of these 5 years. While the services have experienced challenges in recruiting fully qualified dentists, the challenges are most pronounced in certain specialties. For example, based on our analysis of service data from fiscal years 2012 through 2016, the Army and the Navy were unable to recruit any oral surgeons, and the Air Force recruited 50 percent of the oral surgeons it needed. Service officials cited various reasons for not being able to meet their recruitment goals for certain specialties, including the availability of more lucrative careers in the private sector and quality-of-life concerns associated with military service, such as frequent moves. Additionally, Air Force officials stated that they are not always able to offer accession bonuses consistently, which has caused challenges in recruiting.
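The attainment percentages cited above, and those shown in table 2 below, are simple ratios of accessions to goals aggregated over the fiscal years reviewed. A minimal sketch of the computation follows; the annual figures are hypothetical placeholders, not the services' actual data.

```python
# Goal attainment as cited above: total recruited / total goal over FY2012-2016.
# The numbers below are illustrative placeholders, not the services' data.

def percent_of_goal(recruited_by_fy: list[int], goal_by_fy: list[int]) -> float:
    """Aggregate recruitment attainment across fiscal years, as a percentage."""
    return 100 * sum(recruited_by_fy) / sum(goal_by_fy)

goals     = [30, 30, 28, 27, 25]   # hypothetical annual recruitment goals
recruited = [28, 30, 26, 25, 23]   # hypothetical annual accessions

print(f"{percent_of_goal(recruited, goals):.0f}% of goal")  # 94% of goal
```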
Table 2 shows the recruitment goals and percentage achieved for fully qualified dentists, by specialty, for fiscal years 2012 through 2016.

The Services Have Maintained Overall Staffing Levels for Military Dentists, but Have Experienced Challenges Retaining Certain Specialties

While the services maintained overall staffing levels for military dentists, they have experienced some challenges retaining certain specialties. Overall, military dentist end strengths—the actual number of dentists on board at the end of the fiscal year—have generally met or exceeded dental authorizations. Specifically, between fiscal years 2012 and 2016, the Army's dental authorizations were filled, on average, at about 109 percent, the Navy's at about 101 percent, and the Air Force's at about 97 percent. Further, DOD data show that average overall gain rates equaled average overall loss rates for the services' dental corps in fiscal years 2012 through 2015, at approximately 10 percent for the Army, 9 percent for the Navy, and 11 percent for the Air Force. Additionally, according to our analysis of Army and Navy data, on average approximately 73 percent of Army dentists and approximately 63 percent of Navy dentists continue on active duty after 5 years of service. According to Air Force officials, the Air Force does not routinely track data on the number of dentists who continue on active duty after 5 years of service.

Although the services have been able to maintain the overall number of dentists in their respective dental corps, we found, based on the data the services provided us, that each service experienced gaps in certain dental specialties, including critically short wartime specialties. For example, all of the services experienced gaps in comprehensive dentistry from fiscal year 2012 through fiscal year 2016. In addition, for the same time period, all of the services experienced gaps in prosthodontists and oral surgeons. Officials from all three services cited family concerns, frequent moves, and competition from the private sector as reasons why these and other dentists choose to leave the military. Additionally, Army and Navy officials cited limited training and education opportunities and limited scope of practice as reasons why specialists such as oral surgeons leave the military.

The Services Monitor Their Recruitment and Retention Programs, but Do Not Know Whether the Programs Are Effective

The services rely on programs such as AFHPSP, the Critical Wartime Skills Accession Bonus (CWSAB), and special pays and incentives to attract and retain military dentists, but they do not know the extent to which some of these programs are effective at helping them meet their recruiting and retention goals. Our prior work on effective strategic workforce planning principles concluded that agencies should periodically measure their progress toward meeting human capital goals. These principles state that measuring the extent to which human capital activities contribute to achieving programmatic goals provides information for effective oversight by identifying performance shortfalls and appropriate corrective actions. Further, according to these principles, agencies should develop human capital flexibilities and other strategies that can be implemented with the resources that can reasonably be expected to be available, and should consider how these strategies can be aligned to eliminate gaps.
Additionally, Standards for Internal Control in the Federal Government states that management should monitor internal control systems through ongoing monitoring and evaluations. According to these standards, evaluations should be used to provide feedback on the effectiveness of ongoing monitoring and to help design systems and determine effectiveness. The standards also provide that management should determine the appropriate corrective actions to address any identified deficiencies upon completing its evaluation.

According to Army, Navy, and Air Force officials, the services have taken various actions to monitor their recruitment and retention programs. For example, officials told us that they review recruitment goals, achievements, and retention rates; conduct workforce planning and modeling; and participate in recruitment and retention working groups. Specifically, Army officials stated that they use forecasts from a 5-year management plan to determine the Army's recruiting mission and that they review continuation rates to assess retention of dentists. Navy officials told us that they review annual recruitment goals and track whether they are meeting those goals on a weekly basis. Air Force officials stated that they participate in the Medical Accessions Working Group three times per year to assess ongoing recruitment activities.

While the services monitor their progress toward recruitment and retention goals, they do not know the extent to which the programs designed to help them meet those goals affect their ability to recruit and retain dentists, because they have not evaluated the programs' effectiveness. For example, DOD Directive 1304.21 allows the services to use accession bonuses to meet their personnel requirements and specifies that bonuses are intended to influence personnel inventories in specific situations in which less costly methods have proven inadequate or impractical. The services have the discretion to offer up to $20,000 as an accession bonus under AFHPSP—in addition to paying full tuition and education expenses and providing a monthly stipend. In fiscal years 2012 through 2016, the Army and the Navy offered the accession bonus and generally met their recruitment goals—an achievement that Army officials credit, in part, to their use of the incentive. Specifically, Army officials told us that prior to using the bonus in 2009, they were not meeting their recruitment goals, and they expressed concern that, if they were to discontinue use of the bonus, they would not be able to meet their current goals. Conversely, Air Force officials told us that they stopped offering the bonus in 2012 because the number of AFHPSP applicants had exceeded the number of AFHPSP positions; the Air Force has continued to meet its recruiting goals without the use of the bonus. An Air Force official acknowledged that not offering the bonus could result in losing potential applicants to the services that do offer it, but Air Force officials also recognized that money is not always a deciding factor for those who choose to serve as a dentist in the military. The uncertainty described by the Army and Air Force officials demonstrates their lack of information about what factors motivate individuals to join the military. Moreover, the two services' differing use of the accession bonus, with similar outcomes, indicates that the services do not know when it is necessary to use this recruiting tool because they have not evaluated the effectiveness of the program.
Another bonus the services can offer is the CWSAB, which is available to individuals entering the military as dentists in critically short wartime specialties and ranges from $150,000 for general dentists to $300,000 for comprehensive dentists, endodontists, prosthodontists, and oral and maxillofacial surgeons. While the bonus can be offered for any dental specialty designated as a critically short wartime specialty, the data we analyzed indicate that it may be effective for some of these specialties but not others. For example, from fiscal years 2012 through 2016, the Navy used this bonus and was able to meet or exceed its recruitment goals for critically short wartime specialty general dentists, staffing this specialty at between 108 and 122 percent. However, our analysis of the Navy's data also found that, even after offering this bonus, the Navy was unable to recruit any oral surgeons during the same time period. As with the accession bonus, service officials do not know the extent to which the CWSAB is an effective recruitment incentive for some or all of the critically short wartime specialties because they have not evaluated the effectiveness of the program.

In addition, the services offer special pays and incentives, which vary by specialty, to qualified dentists. Special pays and incentives include incentive pay, retention bonuses, and board certification pay. Each bonus and incentive, except board certification pay, requires an additional service obligation, thus creating a retention tool for the services. The services and officials from the Office of the ASD(HA) participate in the Health Professions Incentives Working Group, which reviews recruitment and retention special pays and incentives and recommends adjustments to the amounts offered as necessary. Service officials and officials from the Office of the ASD(HA) told us that there is no way to evaluate the effectiveness of these incentives because they cannot account for the emotional or non-monetary considerations that contribute to whether servicemembers stay in the military, and money is not always an effective incentive for getting people to train in certain specialties or to continue their service. However, in our 2017 review of DOD's special pay and incentive programs, we recommended that DOD take steps to improve the effectiveness of those programs. Additionally, in February 2018, through our review of gaps in DOD's physician specialties, we recommended that the services develop targeted and coordinated strategies to alleviate military physician gaps. An official from the Office of the ASD(HA) stated that the office has started discussing with the services measures to evaluate the effectiveness of DOD's medical and dental recruitment and retention programs, including special pays and incentives. Additionally, Office of the ASD(HA) and service officials stated that they will begin reviewing dental special pays and incentives in fiscal year 2019. Because these reviews are in the early stages, it is too soon to know how effective they will be in evaluating pay and incentive programs. Although service officials told us that they believe their recruitment and retention programs are effective because they have generally met their overall recruiting and retention goals, until the services evaluate the effectiveness of their recruitment and retention efforts, they will not have the information needed to know which programs are the most efficient and cost-effective.
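The staffing figures used throughout this section likewise reduce to simple ratios. The sketch below assumes the conventions implied above (end strength divided by authorizations for the fill rate; annual gains or losses divided by end strength for gain and loss rates) and uses hypothetical inputs rather than the services' actual data.

```python
# Fill rate and gain/loss rates as used in this section.
# Inputs and denominator conventions are illustrative assumptions,
# not the services' actual data or official definitions.

def fill_rate(end_strength: int, authorizations: int) -> float:
    """End strength as a percentage of authorized positions."""
    return 100 * end_strength / authorizations

def turnover_rate(gains_or_losses: int, end_strength: int) -> float:
    """Annual gains (or losses) as a percentage of end strength."""
    return 100 * gains_or_losses / end_strength

print(f"fill: {fill_rate(1090, 1000):.0f}%")      # 109% -> overfilled overall
print(f"loss: {turnover_rate(100, 1090):.1f}%")   # 9.2% annual loss rate
```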
Conclusions

DOD continues to implement several major initiatives to support the mission of its health system: maintaining the medical readiness of servicemembers while operating as efficiently as possible. The dental corps play a critical role in these efforts by ensuring the oral health and dental readiness of all servicemembers. Ensuring dental readiness requires, in part, that the services be able to staff dentists adequately and consistently across DOD's dental clinics. However, the Army, the Navy, and the Air Force have not collaborated on their approaches to staffing dental clinics and have not developed cross-service standards for dental clinic staffing. As DOD progresses in its efforts to implement efficiencies across its Military Health System and assesses the scope of medical care to be transferred to the Defense Health Agency, the dental corps could benefit from developing cross-service standards, which could improve the consistency and efficiency of dental clinic staffing. In addition to ensuring the appropriate number of dentists at each clinic to support the dental corps' mission, recruiting and retaining fully qualified dentists has been an ongoing challenge for the services. However, the services have not evaluated whether their existing programs have been effective at helping them recruit and retain dentists, and therefore they do not know whether they are effectively targeting their resources to address the most significant recruitment and retention challenges.

Recommendations for Executive Action

We are making a total of six recommendations: two to the Army, two to the Navy, and two to the Air Force. Specifically:

The Secretary of the Army should ensure that the Surgeon General of the Army Medical Command (1) collaborate with the Navy Bureau of Medicine and Surgery and the Air Force Medical Service to develop a common set of planning standards to be used to help determine dental clinic staffing needs, and (2) incorporate these standards into the Army's dental corps staffing model. (Recommendation 1)

The Secretary of the Navy should ensure that the Surgeon General of the Navy Bureau of Medicine and Surgery (1) collaborate with the Army Medical Command and the Air Force Medical Service to develop and implement a common set of planning standards to be used to help determine dental clinic staffing needs, and (2) incorporate these standards into the Navy's dental corps staffing model. (Recommendation 2)

The Secretary of the Air Force should ensure that the Surgeon General of the Air Force Medical Service (1) collaborate with the Army Medical Command and the Navy Bureau of Medicine and Surgery to develop and implement a common set of planning standards to be used to help determine dental clinic staffing needs, and (2) incorporate these standards into the Air Force's dental corps staffing model. (Recommendation 3)

The Secretary of the Army should ensure that the Surgeon General of the Army Medical Command evaluates the effectiveness of its recruitment and retention programs for military dentists, including the need for and effectiveness of the recruitment and retention incentives currently offered. (Recommendation 4)

The Secretary of the Navy should ensure that the Surgeon General of the Navy Bureau of Medicine and Surgery evaluates the effectiveness of its recruitment and retention programs for military dentists, including the need for and effectiveness of the recruitment and retention incentives currently offered.
(Recommendation 5)

The Secretary of the Air Force should ensure that the Surgeon General of the Air Force Medical Service evaluates the effectiveness of its recruitment and retention programs for military dentists, including the need for and effectiveness of the recruitment and retention incentives currently offered. (Recommendation 6)

Agency Comments

We provided a draft of this report to DOD for review and comment. DOD did not provide formal comments, but it did provide technical comments, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Office of the Assistant Secretary of Defense for Health Affairs; the Secretaries of the Army, the Navy, and the Air Force; and the President of the Uniformed Services University of the Health Sciences. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or FarrellB@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Military Dentist Accession Programs and Incentives

In addition to the Department of Defense's (DOD) Armed Forces Health Professions Scholarship Program, DOD uses several other programs and incentives to recruit military dentists. Table 3 includes a selection of DOD's military dentist accession programs and incentives.

Appendix II: The Services' Mechanisms to Monitor Qualifications and Performance of Military Dentists

DOD policy requires that all military dentists be credentialed and privileged to practice dentistry. Credentialing is the process of inspecting and authenticating documentation to ensure appropriate education, training, licensure, and experience. Privileging is the corresponding process that defines the scope and limits of practice for health care professionals based on their relevant training and experience, current competence, peer recommendations, and the capabilities of the facility where they are practicing. According to officials, the services have developed and implemented processes to continuously monitor dentist performance in accordance with DOD policy, using On-Going Professional Practice Evaluations (OPPE) and Focused Professional Practice Evaluations (FPPE). The OPPE is a continuous evaluation of dentist performance that reviews six dimensions of performance: (1) patient care, (2) medical knowledge, (3) professionalism, (4) practice-based learning and improvement, (5) interpersonal and communication skills, and (6) systems-based practice. The FPPE is a periodic evaluation by the dental clinic of the specific competence of a dentist performing procedures and administering care. FPPEs are conducted at a dentist's initial appointment, when new privileges are granted, or if a question arises about a dentist's ability to provide safe, high-quality patient care. In addition to the performance monitoring required by DOD, the Army and the Air Force have instituted their own mechanisms for monitoring the quality and performance of their dentists, according to officials.
- Army: According to officials, the Army monitors dental quality through its quarterly Continuous Quality Management Program. This program includes the review of data related to records audits, infection control, radiation protection, utilization management, implant reports, drug utilization reports, patient safety events, and risk management. These reviews are intended to identify and address any errors or trends in dental care.

- Air Force: According to officials, Air Force dentists must annually document that they have reviewed and will follow the Air Force Medical Service Dental Clinical Practice Guidelines, which ensures that all dentists are following the same standard of care for dental treatment. Additionally, Air Force dentists participate in a peer review process known as Clinical Performance Assessment and Improvement, in which a licensed peer dentist, preferably of the same specialty, reviews the dentist's practice and procedures. Depending on the nature of issues found during the review, corrective actions—ranging from refresher courses to a loss of license and credentials—may be taken.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, Kimberly Mayo, Assistant Director; Nicole Collier; Alexandra Gonzalez; Amie Lesser; Tida Barakat Reveley; Rachel Stoiko; John Van Schaik; Lillian Yob; and Elisa Yoshiara made key contributions to this report.

Related GAO Products

Defense Health Care: Additional Assessments Needed to Better Ensure an Efficient Total Workforce. GAO-18-102. Washington, D.C.: November 27, 2018.

Defense Health Care: DOD Should Demonstrate How Its Plan to Transfer the Administration of Military Treatment Facilities Will Improve Efficiency. GAO-19-53. Washington, D.C.: October 30, 2018.

Defense Health Care: Expanded Use of Quality Measures Could Enhance Oversight of Provider Performance. GAO-18-574. Washington, D.C.: September 17, 2018.

Military Personnel: Additional Actions Needed to Address Gaps in Military Physician Specialties. GAO-18-77. Washington, D.C.: February 28, 2018.

Defense Health Reform: Steps Taken to Plan the Transfer of the Administration of the Military Treatment Facilities to the Defense Health Agency, but Work Remains to Finalize the Plan. GAO-17-791R. Washington, D.C.: September 29, 2017.

Military Compensation: Additional Actions Are Needed to Better Manage Special and Incentive Pay Programs. GAO-17-39. Washington, D.C.: February 3, 2017.

Defense Health Care Reform: DOD Needs Further Analysis of the Size, Readiness, and Efficiency of the Medical Force. GAO-16-820. Washington, D.C.: September 21, 2016.

Defense Health Care: Additional Information Needed about Mental Health Provider Staffing Needs. GAO-15-184. Washington, D.C.: January 30, 2015.

Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD's Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013.

Defense Health Care: Actions Needed to Help Ensure Full Compliance and Complete Documentation for Physician Credentialing and Privileging. GAO-12-31. Washington, D.C.: December 15, 2011.

Military Cash Incentives: DOD Should Coordinate and Monitor Its Efforts to Achieve Cost-Effective Bonuses and Special Pays. GAO-11-631. Washington, D.C.: June 21, 2011.
Military Personnel: Enhanced Collaboration and Process Improvements Needed for Determining Military Treatment Facility Medical Personnel Requirements. GAO-10-696. Washington, D.C.: July 29, 2010. Military Personnel: Status of Accession, Retention, and End Strength for Military Medical Officers and Preliminary Observations Regarding Accession and Retention Challenges. GAO-09-469R. Washington, D.C.: Apr. 16, 2009. Military Personnel: Better Debt Management Procedures and Resolution of Stipend Recoupment Issues Are Needed for Improved Collection of Medical Education Debts. GAO-08-612R. Washington, D.C.: Apr. 1, 2008. Primary Care Professionals: Recent Supply Trends, Projections, and Valuation of Services. GAO-08-472T. Washington, D.C.: Feb. 12, 2008. Military Physicians: DOD's Medical School and Scholarship Program. GAO/HEHS-95-244. Washington, D.C.: Sept. 29, 1995. Defense Health Care: Military Physicians' Views on Military Medicine. GAO/HRD-90-1. Washington, D.C.: Mar. 22, 1990.
Why GAO Did This Study DOD has taken steps to modernize its Military Health System to ensure that it operates efficiently. For example, in September 2013, the Defense Health Agency was created, in part, to implement common clinical and business processes across the services. Essential to this effort is the services' ability to effectively staff their medical facilities, including the processes used for staffing dental clinics and the services' ability to recruit and retain military dentists. Senate Report 115-125 included a provision for GAO to review the services' processes for determining requirements for dentists and their programs for recruiting and retaining military dentists, among other things. GAO assessed the extent to which the services (1) use validated dental clinic staffing models that also incorporate cross-service staffing standards, and (2) have recruited and retained military dentists and measured the effectiveness of their recruitment and retention programs. GAO assessed service dental clinic models, analyzed recruitment and retention data from fiscal years 2012 through 2016, and interviewed DOD and service officials. What GAO Found The Army and the Air Force use validated staffing models for their respective dental clinics, and the Navy has developed a model that is under review. The Army's and the Air Force's models are based on service-specific staffing standards. For example, the Army's model generally projects dental clinic staffing based on historical facility data and, according to officials, the Air Force model is largely a population-based model that requires one dentist for every 650 servicemembers. In contrast, in the absence of a validated model, officials stated that the Navy uses a general ratio of one dentist for every 1,000 servicemembers to staff its dental clinics. The Navy has developed a model that is under review and that, if approved, will be subject to the Navy's validation processes, according to officials. While the services have developed and implemented cross-service staffing standards for 42 medical specialties, according to a key official involved in developing these standards, they currently do not plan to develop a similar set of standards for dental care. Cross-service staffing standards help the services standardize clinic staffing to address the common day-to-day health needs of patients. Service officials maintain that they must operate their respective dental clinics autonomously and in a manner that best supports their service-specific needs and unique command structures. However, as oral health requirements for servicemembers are standardized across the Department of Defense (DOD), it is unclear why dental care has been excluded from the staffing standardization effort—especially because the services have implemented cross-service staffing standards for 42 other medical specialties. The Army, the Navy, and the Air Force meet their needs for military dentists by recruiting both dental students and fully qualified dentists. The services generally met their recruitment goals for dental students between fiscal years 2012 and 2016, but faced challenges recruiting and retaining fully qualified dentists during that period. For example, the Army missed its recruitment goals for fully qualified dentists in all 5 years, the Navy missed its goals in 2 out of 5 years, and the Air Force missed its goals in 3 out of 5 years. These challenges are most pronounced for certain specialties.
For example, service data indicate that the Army and the Navy were unable to recruit any oral surgeons, while the Air Force recruited 50 percent of the oral surgeons it needed. Service officials cited various reasons for not meeting recruitment goals, including the availability of more lucrative careers in the private sector and quality of life concerns associated with military service. The services rely on various programs, including scholarships and special pay and incentives, to attract and retain military dentists, and service officials stated that they monitor their programs by reviewing their goals, among other actions. However, GAO found that some services continue to provide incentive bonuses for positions that are overstaffed and have met or exceeded recruitment goals, but they do not know whether this is necessary because they have not evaluated the effectiveness of their programs. Without evaluating their programs, the services lack the information necessary to ensure that they are using recruitment and retention incentives effectively and efficiently for attracting and retaining dentists. What GAO Recommends GAO recommends that each of the services develop cross-service staffing standards to be incorporated into their staffing models, and evaluate the effectiveness of their recruitment and retention programs. DOD did not provide comments on a draft of this report.
Background Coast Guard's Organizational Approach to Managing Its Shore Infrastructure Portfolio Coast Guard shore infrastructure includes buildings and structures, which the Coast Guard has organized into 13 asset types, known as asset lines. Table 1 provides information on Coast Guard asset lines, including examples of assets, the number within each asset line in 2017, and the Coast Guard's estimated replacement value of each asset line in 2017—the most recent value available at the time of our review. The Coast Guard's Office of Civil Engineering sets Coast Guard-wide civil engineering policy, which includes facility planning, design, construction, maintenance, and disposal. The Coast Guard's Shore Infrastructure Logistics Center, established in 2009, is to manage and coordinate infrastructure condition assessments via six regional Civil Engineering Units (CEUs), along with other divisions and offices. The condition of individual shore infrastructure assets is determined by CEU personnel and civil engineers in the field. According to Coast Guard officials, every Coast Guard facility, such as a base or boat station, is to be inspected by a CEU representative every 3 years. The representative is to conduct a facility condition assessment of all shore infrastructure assets—buildings and structures—located at that facility. According to Coast Guard CEU officials, the representative is to identify whether any new maintenance-related deficiencies exist at the facility and add them to the backlog of projects, review the previous backlog, and verify that the Coast Guard's shore facilities' inventory records are correct. This process is intended to help define the current conditions of assets and identify maintenance needs. According to Coast Guard guidance, the Shore Infrastructure Logistics Center also establishes project priorities for the acquisition, programmed depot maintenance, major repair, and modification of Coast Guard shore facilities, and implements shore infrastructure policies. Among other things, the Shore Infrastructure Logistics Center is to (1) assure that all Coast Guard facilities meet their operational and functional requirements, (2) take corrective action before advanced deterioration requires major repairs, (3) ensure preventative maintenance is performed on a routine schedule, and (4) prevent over-maintenance and under-maintenance. In addition, this guidance states that all Coast Guard property must have a documented, standardized system of maintenance for facilities, carried out by designated personnel familiar with, and properly trained on, the maintenance system in place to support its shore infrastructure. Coast Guard's Civil Engineering Program Has a Requirements-Based Budget to Determine Funding Needs In 2016, the Coast Guard's civil engineering program began using requirements-based budget planning to determine shore infrastructure funding needs. According to the Coast Guard, a requirements-based budget is an estimate of the cost to operate and sustain the Coast Guard's shore infrastructure portfolio of assets over each asset's lifecycle, from initial construction or capital investment through divestiture or demolition. Coast Guard budgeting for shore infrastructure distinguishes between procurement and acquisitions and recurring and non-recurring maintenance, among other things. Procurement and acquisitions encompass major projects to alter, acquire, or build new infrastructure—for example, modifying the bay doors on a boat garage so that larger boats can be accommodated.
In contrast, there are two types of maintenance for shore infrastructure. Routine recurring maintenance, known as Organizational-Level Maintenance (OLM), includes tasks such as clearing moss and debris from a rooftop drain or applying caulk to seal a building. Non-recurring maintenance, known as Depot-Level Maintenance (DLM), consists of major maintenance tasks that are beyond the capability of an individual unit, such as replacing exterior doors and windows. The Coast Guard uses three accounts for its shore infrastructure. Amounts in the Procurement, Construction and Improvements (PC&I) account are used for the acquisition, procurement, construction, rebuilding, and improvement of shore facilities and are directed to specific projects. Amounts in the shore OLM account are used for routine recurring maintenance, and amounts in the DLM account are used for major maintenance and repair of Coast Guard real property. See Table 2 for additional information about these accounts. Coast Guard Utilizes Planning Boards to Prioritize Shore Infrastructure Projects The Coast Guard makes decisions regarding the allotment of resources for shore infrastructure through PC&I, regional DLM, and central DLM planning boards, which meet twice annually to prioritize Coast Guard shore infrastructure needs on the basis of expected appropriations and other factors, such as damage caused by natural disasters. These boards are responsible for evaluating potential shore infrastructure projects identified by asset line managers, who evaluate, rank, and recommend projects to the boards within their specified asset line. For example, aviation asset line managers are responsible for aviation-related shore infrastructure projects, such as runways, landing areas, and hangars. Table 3 provides specific information on these planning board responsibilities and members. Figure 1 shows how the planning boards are to prioritize shore infrastructure projects. Additional details about the planning boards' processes, including the extent to which they are documented and align with leading practices, are described later in this report. Coast Guard Is Required to Report Unfunded Shore Infrastructure Priorities The Coast Guard is statutorily required to provide a list of each unfunded priority, including unfunded shore infrastructure priorities, to certain committees of Congress to support the President's budget and its 5-year Capital Investment Plan (CIP). The term 'unfunded priority' means a program or mission requirement that (1) has not been selected for funding in the applicable proposed budget, (2) is necessary to fulfill a requirement associated with an operational need, and (3) the Commandant would have recommended for inclusion in the applicable proposed budget had additional resources been available, or had the requirement emerged before the budget was submitted. Almost Half of the Coast Guard's Shore Infrastructure Is Beyond Its Service Life, and Project Backlogs Will Cost at Least $2.6 Billion to Address Coast Guard Reported that 45 Percent of Its Shore Infrastructure Is Beyond Its Service Life As of 2017, the Coast Guard's annual report on shore infrastructure stated that 45 percent of Coast Guard assets have exceeded their service lives. The Coast Guard also reported that its overall shore inventory has a 65-year service life.
For example, the Coast Guard's 2017 shore infrastructure report identified at least 65 percent of aviation pavements, 60 percent of aviation fuel facilities, and at least 53 percent of piers—all of which the Coast Guard has identified as mission-critical assets—as being past their service lives. Coast Guard officials told us that the agency had changed its service life standard from 50 years to service lives linked to each asset's assigned category code, based on Department of Defense (DOD) standards, before it reported service life calculations in its 2017 annual report on shore infrastructure. As a result of this change, some shore infrastructure that has been in service 50 to 65 years, which would previously have been identified as past its service life, will be characterized by the Coast Guard as within its service life—a better condition than the Coast Guard would have reported under its 50-year standard. Additionally, in 2017, the Coast Guard rated its overall shore infrastructure condition as a C- based on criteria it derived from standards developed by the American Society of Civil Engineers. Some asset lines, such as aviation, whose assets are generally mission-critical, are rated lower. For example, the Coast Guard rated its industrial asset line as a D, in part because 8 of the 9 assets that make up the Coast Guard Yard—the only Coast Guard facility that can perform drydock maintenance on large Coast Guard ships—are more than 5 years beyond their service life. Table 4 shows additional detail about Coast Guard asset lines, including the rate at which the Coast Guard reported these assets were functioning past their service life, and the condition grades assigned by the Coast Guard for fiscal year 2017. According to Coast Guard officials, the demand placed on the Coast Guard's shore infrastructure in recent years has increased because of the new ships and aircraft the Coast Guard has acquired. For example, a senior Coast Guard official told us that the agency has recently needed to upgrade some of its hangars with liquid oxygen storage facilities in order to support the Coast Guard's new HC-27J aircraft. Another official told us that because the Coast Guard's National Security Cutters—which the Coast Guard began operating in 2010—are 40 feet longer than the High Endurance Cutters they are replacing, the Coast Guard has had to either build new piers or lengthen existing ones. Coast Guard's Data Indicate that Project Backlogs of Shore Infrastructure Will Cost at Least $2.6 Billion to Address, as of 2018 Coast Guard data show that it will cost at least $2.6 billion to address the agency's two project backlogs—(1) recapitalization and new construction, and (2) deferred maintenance. Given the level at which the Coast Guard has been requesting such funding, it will take many years for the agency to address the backlogs. For example, the Coast Guard estimated that, based on its fiscal year 2017 appropriation, it would take 395 years to address its current $1.77 billion PC&I recapitalization and new construction backlog, assuming that funding would continue at this level. This time frame estimate does not include the Coast Guard's deferred DLM maintenance backlog, which the Coast Guard estimated to be nearly $900 million in fiscal year 2018. Table 5 provides information on the Coast Guard's two shore infrastructure backlogs as of August 2018. However, the reported number of projects in the Coast Guard's backlogs and the associated cost of addressing them are incomplete.
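The 395-year estimate follows from simple arithmetic: dividing the backlog by the annual funding level implied by the fiscal year 2017 appropriation. A minimal sketch in Python, using only the figures cited above (our illustration, not the Coast Guard's own model):

    # Rough check of the Coast Guard's 395-year estimate, using figures
    # reported above; illustrative only, not the agency's model.
    pci_backlog = 1.77e9   # PC&I recapitalization and new construction backlog ($)
    years_reported = 395   # Coast Guard estimate at fiscal year 2017 funding levels

    implied_annual_funding = pci_backlog / years_reported
    print(f"Implied annual PC&I funding: ${implied_annual_funding / 1e6:.1f} million")
    # Prints roughly $4.5 million per year, at the low end of the $5 million
    # to $99 million range of annual requests discussed later in this report.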
In July 2018, Coast Guard officials told us that the majority of the projects on the PC&I backlog do not yet have associated cost estimates, and thus have not been factored into the backlog cost estimates previously reported to Congress. In November 2018, the Coast Guard told us there were 205 projects on the PC&I backlog without cost estimates. Officials explained that they have not prepared cost estimates for these projects because the projects are in the preliminary stage of development and cost estimates would not be accurate. Figure 2 shows the number of projects with cost estimates and the estimated value of the PC&I backlog for fiscal years 2012 through 2018. See appendix II for additional details. In addition to the estimated $2.6 billion backlogs of PC&I recapitalization and new construction and DLM deferred maintenance projects, the Coast Guard carries out routine and recurring maintenance and repairs (maintenance) through OLM funding. However, Coast Guard officials stated that funding for maintenance projects cannot be disaggregated from overall OLM funding. The Coast Guard's 2017 shore infrastructure annual report states that industry studies establish that the most effective maintenance organizations spend about 17 percent of their staff labor effort on corrective maintenance (i.e., repairs) and 83 percent on preventative maintenance (e.g., activities such as changing building systems' filters and oil, resealing pavement surfaces, or repainting buildings). However, the Coast Guard's analysis of OLM records indicated that 66 percent of its facilities' staff labor effort was used for corrective maintenance. This imbalance indicates that fewer funds are available for preventative maintenance than industry studies suggest, which could increase costs and affect service lives if preventative maintenance cannot be performed to the extent necessary. The annual report further stated that the significant investment needed for corrective maintenance reflects the state of the Coast Guard's aging infrastructure and the strain it places on maintenance personnel. Moreover, Coast Guard officials testified to Congress in June 2017 that aging infrastructure adversely affects operational efficiency. Further, in July 2018 congressional testimony, the Coast Guard Deputy Commandant for Mission Support stated that the agency needs to rebuild shore infrastructure readiness with sound investments in operations and maintenance, but that budget realities result in deferred maintenance, fewer spare parts, and infrastructure reliability and security concerns. The Coast Guard's Process for Managing Its Shore Infrastructure Does Not Fully Meet 6 of 9 Leading Practices, Resulting in Management Challenges The Coast Guard's process to manage its shore infrastructure recapitalization and deferred maintenance backlogs does not fully meet 6 of 9 leading practices we have previously identified for managing public sector maintenance backlogs. Specifically, of the nine leading practices, the Coast Guard met three, partially met three, and did not meet three, as shown in Table 6. We, as well as others, have identified that deferring maintenance and repair backlogs can lead to higher costs in the long term and pose risks to safety and agencies' missions.
Coast Guard Met 3 of 9 Leading Practices for Managing Public Maintenance Backlogs The Coast Guard met 3 of 9 leading practices for managing public maintenance backlogs by identifying the types of risks posed by not making timely investments in its shore facilities; by identifying the types of assets, such as buildings, that are mission-critical; and by establishing guidance that identifies the primary methods to be used for delivering maintenance and repair activities, among other things. We have previously found that these three practices are an important step toward increased transparency and more effective management of maintenance backlogs. Identify the Types of Risks Posed by Lack of Timely Investment According to leading practices, agencies should identify the types of risks posed by not investing in deteriorating facilities, systems, and components because this is important for providing more transparency in the decision-making process and for communicating with staff at all organizational levels. The Coast Guard has a process to identify, document, and report risks in its annual shore infrastructure reports for fiscal years 2015 through 2017. These reports identified the types of risks the Coast Guard faces in not investing in its facilities, including financial risk, capability risk, and operational readiness risk, but did not specifically measure these risks. The Coast Guard met this leading practice because the leading practice requires agencies to identify risk in general terms—for example, in terms of increased lifecycle costs or risk to operations. The leading practice does not require the agency to quantify or measure this risk by, for example, calculating the probability that a building or structure will fail and impair the Coast Guard's operations. Identify Types of Facilities or Specific Buildings that Are Mission-Critical and Mission-Supportive Leading practices state that agencies should identify buildings as mission-critical and mission-supportive to help establish where maintenance and repair investments should be targeted, to ensure that funds are being used effectively. Since at least 2012, the Coast Guard has documented its process to classify all of its real property under a tier system and established minimum investment targets by tier as part of its central DLM planning boards. These tiers—mission-critical versus mission-supportive—were incorporated into the guidance that Coast Guard decision-makers are to follow in their deliberations about project funding and to help them determine how to target funding more effectively. For example, the Coast Guard's PC&I planning board guidance for fiscal years 2019 through 2023 prioritized expenditures on shore infrastructure supporting front-line operations, such as piers or runways, over shore infrastructure providing indirect support to front-line operations, such as administrative buildings. Identify the Primary Methods to Be Used for Delivering Maintenance and Repair Activities Identification of the primary methods of delivery for maintenance and repair activities is intended to help agencies determine the level of resources that should be allocated to each type of maintenance activity and to repair projects, according to leading practices. The Coast Guard's Civil Engineering Manual and other guidance documents detail how the maintenance and repair program is structured and how budget accounts are to be utilized.
For example, the manual defined how projects should be classified and funded—e.g., as DLM or OLM—which has helped to determine the Coast Guard units responsible for carrying out these maintenance or repair activities. Coast Guard Partially Met 3 of 9 Leading Practices for Managing Maintenance Backlogs The Coast Guard partially met 3 of 9 leading practices for managing public sector maintenance backlogs, including conducting condition assessments, establishing performance goals and measures, and aligning property portfolios with mission needs and disposing of unnecessary assets. Conduct Condition Assessments as a Basis for Establishing Appropriate Levels of Funding Required to Reduce, If Not Eliminate, Any Deferred Maintenance and Repair Backlog Conducting periodic condition assessments is an effective approach for facility management because identifying condition deficiencies can inform budgeting decisions, according to leading practices. Under the Coast Guard's process, facility condition assessments are to be used to evaluate the condition of infrastructure and identify deficiencies. These assessments are to lead to the creation of the maintenance and recapitalization projects that then compose the Coast Guard's deferred maintenance backlogs. However, the Coast Guard partially met this leading practice because it has not issued specific guidance on how these assessments are to be conducted, nor do the six CEUs follow a standardized or consistent process for conducting their assessments, according to Coast Guard field and headquarters officials. Further, Coast Guard officials at 5 of the 6 CEUs told us that some or all of the officials who conduct facility condition assessments serve on a rotational basis. As a result, the level of familiarity inspectors have with the facilities they inspect may vary, which could lead to differences in the assessments they produce. Moreover, while inspectors at 3 of the 6 CEUs are to use checklists when conducting their inspections, all of these checklists are different, and the other three CEUs do not currently use checklists. We found that these differences have contributed to inconsistencies in the information collected. For example, assessment results we analyzed used different scales for prioritizing maintenance projects, such as letter grades or red/amber/green scales. One assessment we reviewed listed both DLM and OLM projects and provided the unit commander with detailed instructions, accompanied by pictures, explaining how to address these issues, whereas other assessments identified only DLM projects or 'items of concern.' One senior official acknowledged that the Coast Guard did not have standardized assessments and that developing them had not been the highest priority among the numerous guidance documents the Coast Guard is trying to complete. Without standardized assessments, the Coast Guard's ability to systematically compare projects for prioritization is limited, and this could directly impact its ability to establish appropriate levels of funding for addressing the backlog, as identified in this leading practice. Coast Guard officials told us they intend to issue guidance to standardize facility condition assessments, but they could not provide a date for issuing it. Moreover, according to the Coast Guard, it began to modernize its shore infrastructure civil engineering management in 2006, and it has been working to develop its current asset management model, including updating guidance, since 2013.
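To make the effect of standardization concrete, the sketch below shows a minimal assessment record with one shared severity scale and a required cost field; the field names and scale are hypothetical, not the Coast Guard's actual schema:

    from dataclasses import dataclass
    from enum import Enum

    class MaintenanceLevel(Enum):
        OLM = "organizational-level"  # routine recurring maintenance
        DLM = "depot-level"           # major, non-recurring maintenance

    @dataclass
    class Deficiency:
        """One deficiency recorded during a facility condition assessment.
        A common scale and required fields would let CEUs compare and
        prioritize projects consistently across regions.
        """
        asset_id: str
        description: str
        level: MaintenanceLevel
        severity: int          # one shared scale, e.g., 1 (cosmetic) to 5 (mission-impairing)
        estimated_cost: float  # dollars; required so backlogs can be totaled

    # Example: two deficiencies from the same hypothetical assessment.
    findings = [
        Deficiency("pier-07", "Corroded fender piles", MaintenanceLevel.DLM, 4, 1.2e6),
        Deficiency("bldg-12", "Clogged rooftop drain", MaintenanceLevel.OLM, 1, 2.0e3),
    ]
    dlm_total = sum(d.estimated_cost for d in findings if d.level is MaintenanceLevel.DLM)
    print(f"DLM backlog from this assessment: ${dlm_total:,.0f}")

With every CEU producing records of the same shape, headquarters could total and rank deficiencies directly rather than reconciling letter grades against red/amber/green scales.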
By executing plans for a standardized facility condition assessment process and developing a plan with milestones and time frames for standardizing the process, the Coast Guard will be better positioned with more consistent data to prioritize and plan its shore infrastructure projects. Establish Performance Goals, Baselines for Performance Outcomes, and Performance Measures According to leading practices, establishing performance goals, baselines for performance outcomes, and performance measures allows agencies to track the effectiveness of maintenance and repair investments, provide feedback on progress, and indicate where investment objectives, outcomes, or procedures require adjustment. According to Coast Guard guidance, the Chief of the Office of Civil Engineering and the Shore Infrastructure Logistics Center are to identify and promulgate performance metrics annually. The Coast Guard partially met this leading practice by documenting and tracking facility condition information using a letter grade system and reporting this information in its annual reports from 2015 through 2017. However, the Coast Guard has not set performance goals for improving an asset's grade, or established baselines to indicate where investments require adjustment, because it continues to revise the formula it uses to calculate the letter grades. Consequently, the letter grades from fiscal years 2015 through 2017 are not comparable year to year to measure performance. Definitions of Performance Management Common Terms Performance goal - a target level of performance expressed as a tangible, measurable objective against which actual achievement can be compared, including a goal expressed as a quantitative standard, value, or rate. A performance goal comprises a measure, a time frame, and a target. Performance measure - a tabulation, calculation, recording of activity or effort, or assessment of results compared to intended purpose, that can be expressed quantitatively or in another way that indicates a level or degree of performance. Performance target - a quantifiable or otherwise measurable characteristic typically expressed as a number that tells how well or at what level an agency or one of its components aspires to perform. Baselines for performance outcomes - a quantifiable point at which an effort began and from which a change in outcomes can be measured and documented. In 2017, the Coast Guard reported a new performance measure for its maintenance efforts, called the Average Condition Index, which reflects the average condition of the assets weighted by their replacement value. The Coast Guard set targets for this measure, but it did not establish what actions it would take to meet these targets. Limitations with the Coast Guard's performance measures for its shore infrastructure are not a new issue, as they were also identified in 2015 by an external study commissioned by the Coast Guard. Specifically, the study reported that the Coast Guard's condition index, which was more than 15 years old at the time, was not defensible because it lacked trend data and analysis capabilities. This study recommended that the Coast Guard develop key performance measures, among other things, for managing its shore infrastructure. Coast Guard officials told us that the agency has collected data and drafted some performance measures, but it has not yet implemented the recommendations from the 2015 study or set a time frame for doing so because it had not identified this as a priority.
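As reported, the Average Condition Index weights each asset's condition by its replacement value. A minimal sketch with hypothetical data (the report does not publish the Coast Guard's actual formula or inputs) shows why the weighting matters:

    # Replacement-value-weighted average condition; asset data hypothetical.
    assets = [
        # (condition index 0-100, plant replacement value in dollars)
        (90, 5.0e6),   # newer building in good condition
        (40, 60.0e6),  # aging pier with a high replacement value
    ]

    aci = sum(ci * prv for ci, prv in assets) / sum(prv for _, prv in assets)
    print(f"Average Condition Index: {aci:.1f}")
    # Prints about 43.8: the high-value pier dominates, even though an
    # unweighted mean of the two condition indexes would be 65.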
Establishing goals, measures, and baselines would better position the Coast Guard to assess the effectiveness of its maintenance and repair investments and take appropriate actions to improve the condition of its shore infrastructure. Align Real Property Portfolios with Mission Needs and Dispose of Unneeded Assets Leading practices state that agencies should efficiently employ available resources, limit construction of new facilities, adapt existing buildings to new uses, and transfer ownership of unneeded buildings to other public or private organizations to align real property with mission needs. In addition, under this leading practice, facilities that are functionally obsolete, not needed to support an agency's mission, not historically significant, or not suitable for transfer or adaptive reuse should be demolished whenever it is cost-effective to do so. We have previously reported that the eventual need to address deferred maintenance and repair could significantly affect an agency's future budget resources. The Coast Guard has made limited progress and partially met this leading practice by disposing of some unneeded assets, but it has not consistently or extensively aligned its property and mission needs. For example, in 2017, the Coast Guard's Civil Engineering Units and facility engineers reviewed all projects on the agency's $1.77 billion PC&I project backlog and removed 132 projects from it because, according to officials, the projects were no longer valid as a result of mission changes, a non-PC&I alternative or solution was found to be more beneficial, or the need was met through another project. This validation effort was a positive step toward aligning property and mission needs, but it raises questions about whether and to what extent the PC&I backlog is routinely and consistently managed to ensure that projects reflect mission needs. The Coast Guard made some progress aligning property and mission needs through the sale of some assets. For example, in 2017, it sold 189 of its 2,961 housing assets through an initiative to divest some of its housing assets—an effort that garnered $26.8 million in total sales proceeds over the life of the program. However, the Coast Guard's ability to dispose of unneeded assets has been limited in some instances. For example, in 2013, the Coast Guard identified 18 multimission stations with duplicative coverage that could be permanently closed, using a process based on criteria that reflected mission needs. In October 2017, we reported that closing these stations could potentially generate $290 million in cost savings over 20 years; however, as of September 2018, the Coast Guard had taken no action to close these stations or establish time frames for their closure, although the Coast Guard agreed with our recommendation to do so. Moreover, our analysis of Coast Guard planning documents found that 5 of the 18 multimission stations recommended for closure in 2013 have projects on the Coast Guard's current PC&I backlog. For example, Station Shark River, in New Jersey, was recommended for recapitalization in fiscal year 2017, despite Coast Guard recommendations to close the station in 1988, 1996, 2007, and 2013. Notably, the Coast Guard has made multiple attempts in previous years to close stations that it deemed suitable for closure but was unable to close them due to congressional intervention and subsequent legislation prohibiting closures.
Given the Coast Guard’s competing acquisition, operational, and maintenance needs, and PC&I backlog that will cost at least $1.77 billion to address, difficult trade- off decisions to align real property needs by disposing of unneeded assets may help to mitigate some resource challenges. Coast Guard Did Not Meet 3 of 9 Leading Practices for Managing Shore Infrastructure Backlogs The Coast Guard did not meet 3 of 9 leading practices for managing shore infrastructure backlogs, including establishing clear maintenance and repair investment objectives, employing models for predicting the outcomes of investments and analyzing trade-offs, and structuring budgets and related information to address maintenance backlogs. Establish Clear Maintenance and Repair Investment Objectives and Set Priorities among Outcomes to Be Achieved Agencies with maintenance and repair responsibilities should determine what outcomes are most important to achieve and set priorities among them, according to leading practices. Coast Guard provided guidance for central DLM planning boards, which calls for stakeholders to identify which projects will be reviewed by the planning boards, for board members to consider project trade-offs and to make recommendations on which projects to fund, and for stakeholders to then review the results. However, Coast Guard headquarters did not provide documented guidance to the six CEUs responsible for administering regional DLM planning boards—a process intended to establish clear objectives or priorities among outcomes to be achieved for approximately 70 percent of the Coast Guard’s DLM funds. Coast Guard headquarters officials told us that they instead rely on each CEU to hold their respective regional planning boards in accordance with locally established practices. However, only 1 of the 6 CEUs has developed and implemented written guidance for its DLM planning board process, and it is not clear how these boards set objectives or priorities among outcomes to be achieved. The Coast Guard provided some documentation detailing how regional DLM planning board inputs and subsequent decisions were linked to decision-making criteria for one regional DLM planning board meeting hosted by one of its nine Districts. Table 7, among other things, shows the limited extent of documentation to substantiate Coast Guard decisions. However, the Coast Guard did not meet this leading practice because it could not demonstrate, with documentation, how decisions were linked to criteria for its PC&I planning board meetings, central DLM planning board meetings, or any other regional DLM planning board meeting. Without the full range of information on which planning board decisions were made, neither we, nor the Coast Guard, could substantiate the extent to which the Coast Guard followed its processes or evaluate whether its processes for managing shore infrastructure projects were sound. OMB guidance calls for agencies to use information to support decision- making, such as whether an asset is continuing to meet business needs and contribute to goals, and whether there are smarter or more cost effective ways to deliver the function. This guidance is comparable to the leading practice discussed above, which calls for agencies to establish clear maintenance and repair investment objectives and set priorities among outcomes to be achieved. 
Additionally, according to OMB, agencies are to have a plan for periodic, results-oriented evaluations of program effectiveness, and agencies should discuss the results of these evaluations when proposing reauthorizations. Establishing guidance for planning boards to document project prioritization decision-making, as well as the impact of trade-off decisions, would allow agency decision makers and Congress to better understand Coast Guard priorities and how shore infrastructure project priorities might potentially affect other priorities. The Coast Guard was unable to provide documentation showing how it prioritized projects for a number of reasons, including that it did not have written guidance, documentation verifying the use of standardized meeting inputs such as presentations, or meeting minutes. Furthermore, officials could not explain why certain documentation was not maintained to demonstrate how the Coast Guard had made and prioritized funding decisions. Such documentation may allow the Coast Guard to show, for example, why repairing a station it previously sought to close is a higher priority than fixing a station it appears to need in order to perform maintenance on certain assets (see fig. 3). To ensure that investment decisions are aligned with agency missions and goals, agencies should employ models to predict the future condition and performance of their facilities as a portfolio, according to leading practices. Performance-prediction models predict the deterioration of building components over time and are important because certain facility components are particularly prone to deterioration or failure, thus requiring more frequent maintenance or repairs. A 2015 review of the Coast Guard's asset management framework identified the benefit of analyzing trade-offs between reactive and preventative maintenance and described how preventative maintenance efforts could translate into cost savings. Coast Guard officials provided one example of the agency's efforts to model outcomes, but the Coast Guard did not meet this leading practice because it has not used the results of this model to optimize competing investments for that or any other asset line, nor has it provided documentary evidence verifying that it properly applied the model. In December 2017, a Coast Guard Aviation Pavement Study employed a model that found that the Coast Guard could more efficiently prioritize investment in aviation pavement. It also identified strategies to achieve a long-term sustainable pavement condition. A fiscal year 2018 to 2020 Coast Guard aviation pavement maintenance and recapitalization plan proposed using the study results and recommended actions that it said could save the Coast Guard $13.8 million by accelerating investment in aviation pavement rather than deferring such maintenance and recapitalization. According to Coast Guard officials, the analytical approach outlined in the 2017 study could be applied to all 13 of the Coast Guard's shore infrastructure asset lines. However, the Coast Guard has not implemented a maintenance and recapitalization strategy based on the results of its aviation pavement plan, nor has it applied the analytical approach from this plan to other asset lines. Coast Guard officials told us they have not fully acted on the aviation pavement plan nor developed models for other asset lines. Specifically, a Coast Guard official described the actions the agency is taking as piecemeal; only 1 of the 5 PC&I projects identified by the plan has been prioritized and funded.
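To illustrate the kind of performance-prediction model this leading practice envisions, the sketch below projects an asset's condition with and without preventive maintenance and compares lifecycle costs; the linear deterioration rates, repair threshold, and costs are all hypothetical:

    # Minimal performance-prediction sketch; rates and costs hypothetical.
    def condition_after(years, ci=100.0, decay=3.0, preventive=False):
        """Project a condition index forward; preventive care slows decay."""
        rate = decay * (0.5 if preventive else 1.0)
        return max(ci - rate * years, 0.0)

    # Compare two strategies for the same asset over 10 years.
    deferred = condition_after(10)                      # no preventive maintenance
    maintained = condition_after(10, preventive=True)   # routine preventive care
    print(deferred, maintained)  # 70.0 vs 85.0

    # Lifecycle-cost comparison: small recurring outlays versus a large
    # repair triggered when condition falls below a threshold.
    preventive_cost = 10 * 20e3  # $20,000 per year for 10 years
    repair_cost = 500e3 if deferred < 75 else 0.0
    print(f"preventive ${preventive_cost:,.0f} vs. deferred repair ${repair_cost:,.0f}")

Run across an entire asset line, this kind of projection is what lets an agency weigh deferral against preventive investment and optimize among competing projects, as the aviation pavement study did.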
According to Coast Guard officials, the other pavement projects continue to be a priority for the asset line, but funding decisions have been deferred due to resource constraints and other competing priorities. As a result of not fully implementing its plan, it is unclear whether the Coast Guard will achieve the cost savings it projected. By not employing similar models across its asset lines for predicting the outcome of investments, analyzing trade-offs, and optimizing decisions among competing investments, the Coast Guard is missing opportunities to identify and achieve cost savings across other asset lines. Structure Budgets to Identify Funding Allotted (1) for Routine Maintenance and Repair and (2) to Address Any Backlog of Deferred Maintenance and Repair Deficiencies Because Insufficient Levels of Such Funding Can Cause Agencies' Backlogs to Increase According to leading practices, agencies should structure maintenance and repair budgets to differentiate between funding allotted for routine maintenance and repairs and funding allotted to addressing maintenance and repair backlogs, to help ensure that underfunding does not affect the health and safety or reduce the productivity of employees, among other things. We found that Coast Guard budget requests did not provide Congress with accurate information about the agency's funding needs. Specifically, we found that the Coast Guard did not meet this leading practice because its budget requests (1) have not clearly identified funding allotted for routine shore infrastructure maintenance needs, and (2) have not generally addressed deferred maintenance and repair deficiencies, resulting in increases to its backlogs. In addition, the Coast Guard has not included information in its Unfunded Priorities Lists and other related reports that clearly articulated trade-offs or aligned with its requirements-based budget targets for shore infrastructure. Coast Guard officials were not able to tell us why they have not requested maintenance and repair funding to adequately address the agency's shore infrastructure backlog of deferred maintenance and repair deficiencies. First, we found that Coast Guard budget requests did not clearly identify funding allotted for routine shore infrastructure maintenance needs to address backlogs. Specifically, we found that budget requests related to shore infrastructure for fiscal years 2012 through 2019 did not provide Congress with the required and complete information, as previously noted, necessary to inform decision-makers of the risks posed by untimely investments in maintenance and repair backlogs. While major maintenance and repair funding can be tracked within the Coast Guard's budget, funding for routine recurring maintenance for shore infrastructure is embedded in a budget account that is used for both maintenance and operational expenses. As a result, the Coast Guard could not disaggregate expenditures from this account or determine how much funding goes toward routine maintenance. Second, we found that Coast Guard budget requests did not generally identify funding to address any backlogs of deferred maintenance or recapitalization, except for one fiscal year—2012—when the Coast Guard requested $93 million to recapitalize deteriorated/obsolete facilities and address the highest priority Shore Facilities Requirements List backlog items. The 2012 budget request also noted that the health and maintenance of the Coast Guard's shore facilities are foundational for the safe and effective execution of Coast Guard missions.
However, the Coast Guard reported on some challenges to completing maintenance projects. For example, Coast Guard officials we interviewed stated that the annual congressional budget cycle has contributed to infrastructure management challenges because the agency is prohibited from signing contracts for maintenance projects during continuing resolutions. For example, since the fiscal year 2018 budget was not passed until March 2018, officials had to rush during the summer, their busiest time of year, to establish contracts and work orders to ensure projects were funded before the end of the fiscal year on September 30. Third, we found that the Coast Guard's annual Unfunded Priorities Lists and other reports, including its 5-year CIP, did not clearly describe trade-offs. In July 2018, we reported that by continuing to manage its operational asset acquisitions through its annual budget process and 5-year CIP, the Coast Guard creates constant churn as program baselines must continually realign with budget realities, instead of budgets being formulated to support program baselines. Coast Guard officials said that prioritization and trade-off decisions are made as part of the annual budget cycle, and that the shore infrastructure projects on its Unfunded Priorities List reflect the highest priorities for the department within the given top-level funding. However, the annual Unfunded Priorities List does not clearly articulate prioritization decisions, including information about trade-offs among competing project alternatives, as well as the impacts on missions conducted from shore facilities in disrepair that had not been prioritized in previous years. According to Coast Guard officials, and as we previously reported, such information is not included in the 5-year CIP or Unfunded Priorities List because it is not statutorily required. These information shortcomings are consistent with previous findings and recommendations that the DHS Office of Inspector General has made. Finally, we found that Coast Guard budget requests have not been aligned with the agency's requirements-based budget targets for shore infrastructure. For example, we found that Coast Guard budget requests have not identified appropriations sufficient to meet its DLM maintenance and repair targets, which call for annual expenditures equal to two percent of plant replacement value. According to the Coast Guard, meeting its target for DLM would require allocating about $260 to $392 million annually for these repairs. Coast Guard officials told us that the agency has made difficult decisions to postpone necessary facility maintenance and construction projects in order to address other competing priorities related to mission execution, such as maintaining, operating, and recapitalizing its aging surface and air fleets. Between fiscal years 2012 and 2017, the Coast Guard reported that it expended an average of $208 million per year on DLM, and officials stated that the Coast Guard never met its target during this time period. Similarly, Coast Guard budget requests have not been in alignment with its PC&I targets for recapitalization. For example, Coast Guard recapitalization targets show a far greater need for funding than the allotments from the appropriations it requested between fiscal years 2012 and 2019. Specifically, Coast Guard targets for recapitalization of shore assets indicate that $290 to $392 million in PC&I funding is needed annually.
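The two-percent-of-plant-replacement-value target translates directly into these dollar figures. A rough sketch (the implied plant replacement values are back-calculated from the targets above, not separately reported):

    # DLM target: 2 percent of plant replacement value (PRV) per year.
    # Back-solving from the $260-$392 million target range gives the
    # implied PRV range; this is our inference, not a reported figure.
    target_rate = 0.02
    for target in (260e6, 392e6):
        implied_prv = target / target_rate
        print(f"${target / 1e6:.0f}M target -> implied PRV ${implied_prv / 1e9:.1f}B")
    # Prints $13.0B and $19.6B. The reported average DLM spending of
    # $208 million per year is roughly 1.1 to 1.6 percent of that range,
    # below the 2 percent target.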
However, Coast Guard budget requests for fiscal years 2012 through 2018 ranged from about $5 million to about $99 million annually, as shown in Table 8. Notwithstanding the mismatch between Coast Guard budget requests and the agency's requirements-based budget targets, allotments for Coast Guard shore PC&I from its appropriations in fiscal years 2016 through 2018 exceeded the Coast Guard's requests. For example, in fiscal year 2016, the Coast Guard's allotment of $130 million was almost three times the nearly $47 million requested. In 2018, the almost $45 million allotted was more than four times the $10 million requested. Explanatory materials on the annual appropriations act for fiscal year 2018 indicated that the appropriated funding above requested amounts was to be used for modernization and recapitalization of facilities and for facility improvements, among other things. Without accurate and transparent information about the Coast Guard's budgetary requirements, Congress will lack critical information that could help it prioritize funding to address the Coast Guard's shore infrastructure backlogs. Conclusions The Coast Guard's inventory of shore infrastructure assets is vast, aging, and vulnerable to damage from extreme weather. Many of these assets are also critical to the Coast Guard's operational mission performance. The Coast Guard has taken some steps to manage this infrastructure by implementing 3 of 9 leading practices for managing public sector maintenance backlogs—including identifying assets that are mission-critical, identifying risks posed by untimely investments, and identifying the primary methods for delivering maintenance and repair activities. However, significant work remains if the Coast Guard is going to make headway on reducing its backlogs of at least $2.6 billion. Fully implementing the three leading practices that the Coast Guard now partially meets could help ensure that it benefits from establishing time frames for and enhancing its guidance; establishing performance metrics, baselines, and targets; and shedding unneeded assets. Additionally, fully implementing the leading practices that it does not meet—including implementing new approaches for documenting its project prioritization decisions, developing models that could help identify cost savings, and providing Congress with transparent and requirements-based budget requests that clearly identify alternatives and trade-offs—could help the Coast Guard more efficiently manage existing resources and better position the agency and Congress to address the shore infrastructure challenges. Recommendations for Executive Action We are recommending the following six actions to the Coast Guard: The Commandant of the Coast Guard should direct program managers to develop a plan with milestones and time frames for standardizing the Coast Guard's facility condition assessments. (Recommendation 1) The Commandant of the Coast Guard should direct program managers to establish shore infrastructure performance goals, measures, and baselines to track the effectiveness of maintenance and repair investments and provide feedback on progress made. (Recommendation 2) The Commandant of the Coast Guard should work with Congress to develop and implement a process to routinely align the Coast Guard's shore infrastructure portfolio with mission needs, including by disposing of all unneeded assets.
(Recommendation 3) The Commandant of the Coast Guard should establish guidance for planning boards to document inputs, deliberations, and project prioritization decisions for infrastructure maintenance projects. (Recommendation 4) The Commandant of the Coast Guard should employ models for its asset lines for predicting the outcome of investments, analyzing trade-offs, and optimizing decisions among competing investments. (Recommendation 5) The Commandant of the Coast Guard should include supporting details about competing project alternatives and report trade-offs in congressional budget requests and related reports. (Recommendation 6) Agency Comments and Our Evaluation We provided a draft of this report to DHS for review and comment. In its comments, reproduced in appendix III, DHS concurred with our recommendations. DHS, through the Coast Guard, also provided technical comments, which we incorporated as appropriate. DHS concurred with our first recommendation that the Commandant of the Coast Guard direct program managers to develop a plan with milestones and time frames for standardizing the Coast Guard's facility condition assessments. DHS stated that the Coast Guard plans to complete a standardized facility condition assessment by December 2019. However, to fully implement the recommendation, the Coast Guard needs to ensure that it standardizes the process for conducting facility assessments—action that goes beyond completing a single standardized facility assessment. DHS concurred with our second recommendation that the Commandant of the Coast Guard direct program managers to establish shore infrastructure performance goals, measures, and baselines to track the effectiveness of maintenance and repair investments and provide feedback on progress made. DHS stated that the Coast Guard plans to develop initial shore infrastructure measures with associated goals and baselines during its annual strategic planning process and expects to complete this process in March 2020. DHS concurred with our third recommendation that the Commandant of the Coast Guard work with Congress to develop and implement a process to routinely align the Coast Guard's shore infrastructure portfolio with mission needs, including by disposing of all unneeded assets. DHS stated that the Coast Guard plans to establish, by June 2020, a process to assess current and projected operational and mission support needs to identify and recommend disposal of unneeded land, buildings, and structures. The Coast Guard reported that in the interim it will continue to communicate with Congress about unneeded assets through its required annual Conveyance of Coast Guard Real Property Report. DHS concurred with our fourth recommendation that the Commandant of the Coast Guard establish guidance for planning boards to document inputs, deliberations, and project prioritization decisions for infrastructure maintenance projects. DHS stated that the Coast Guard plans to review existing guidance and issue updates as necessary and that promulgation of this guidance for its next planning boards will be completed by December 2019. To fully implement this recommendation, the Coast Guard needs to ensure that its guidance requires that inputs, deliberations, and project prioritization decisions for these boards are all fully documented.
DHS concurred with our fifth recommendation that the Commandant of the Coast Guard employ models for its asset lines for predicting the outcome of investments, analyzing trade-offs, and optimizing decisions among competing investments. DHS stated that the Coast Guard plans to assess the use of modeling tools used by the Department of Defense, as well as other alternatives, to enhance its real property asset management capability. DHS stated that the Coast Guard expects to complete its initial identification of alternatives in December 2019 and complete its examination of alternatives in December 2020. DHS concurred with our sixth recommendation that the Commandant of the Coast Guard include supporting details about competing project alternatives and report trade-offs in congressional budget requests and related reports. DHS stated that the Coast Guard plans to submit future budget proposals based on OMB guidance and will include additional information in its congressionally mandated future Unfunded Priorities Lists. To fully implement this recommendation, the Coast Guard needs to ensure that it includes supporting details about competing project alternatives and reports on trade-offs, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or AndersonN@gao.gov. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology The objectives of this report are to evaluate (1) what is known about the condition and costs of managing the Coast Guard's shore infrastructure, and (2) the extent to which the Coast Guard's process for managing its shore infrastructure meets leading practices for managing public maintenance backlogs. To identify what is known about the condition and costs of managing the Coast Guard's shore infrastructure, we reviewed three Coast Guard annual reports on shore infrastructure, issued for 2015 through 2017. We also reviewed Coast Guard documentation and data on its shore infrastructure inventory to describe the condition and costs of managing these assets. To measure the size of the Coast Guard's total backlog, we examined the Coast Guard's shore Acquisition, Construction, & Improvements (AC&I) backlog of projects the Coast Guard has identified as necessary to fulfill its missions (i.e., its Shore Facilities Requirements List) from fiscal years 2012 through 2018, as well as its depot-level maintenance backlog as of March 2018. We also reviewed planning and budget documents to determine how the backlog has changed over time. To identify the appropriation targets the Coast Guard identified as needed to address these backlogs, we reviewed guidance and budget data for the three appropriations related to shore infrastructure, reviewed planning and budget documents such as the Coast Guard's annual Unfunded Priorities Lists—which are lists of projects the Coast Guard would undertake if funding were available—and the Coast Guard's annual Congressional Budget Justifications for fiscal years 2012 through 2019, to demonstrate how the backlog has changed over time relative to budgeted funds.
We also interviewed Coast Guard officials at headquarters and in the field to obtain their perspectives on the appropriation targets and budget formulation process. To obtain additional information about the condition of the Coast Guard's infrastructure in different parts of the country, we interviewed officials from each of the Coast Guard's six geographically organized Civil Engineering Units (CEUs), which are responsible for implementing both District and Headquarters directives. We also interviewed officials from the Coast Guard's two geographically defined Area Commands—Pacific Area (PACAREA) and Atlantic Area (LANTAREA)—whose officials vote on the Procurement, Construction and Improvements (PC&I) and central DLM planning boards. To review the Coast Guard's longer-term planning process for its shore infrastructure, we reviewed the Coast Guard's 5-year Capital Investment Plan and interviewed agency officials. To assess the reliability of the Coast Guard's data discussed in this report, we interviewed knowledgeable agency officials, reviewed documentation, and electronically tested the data for obvious errors and anomalies; a simplified illustration of such checks appears below. Specifically, we interviewed Coast Guard officials and discussed the mechanisms they use to assess the quality of their data and the extent to which the Coast Guard employs quality control mechanisms, such as automated edit checks. Additionally, in August 2018, the Coast Guard informed us that its data on its shore infrastructure may not be complete if field inspectors did not identify problems at the facilities they inspected. Coast Guard officials also told us in July 2018 that not all projects on the Coast Guard's PC&I backlog have cost estimates. As a result, the amount of funding needed to address the Coast Guard's backlog of shore infrastructure projects could be understated because the Coast Guard has neither identified all deficiencies that exist at its facilities nor estimated the cost to fix all of the deficiencies it knows about. Despite these limitations, we determined that the Coast Guard's data are sufficiently reliable for the purposes of reporting on the Coast Guard's overall portfolio of shore infrastructure assets and the minimum amount of money the Coast Guard identified as needed to complete deferred repair and PC&I projects. To identify leading practices for managing backlogs of deferred maintenance projects, we reviewed our prior work and the literature on deferred maintenance and repair as it pertains to federal real property portfolios. In our prior work, we identified nine leading practices based on studies conducted by the National Research Council (NRC) of the National Academy of Sciences between 1998 and 2012. These studies were (1) Stewardship of Federal Facilities: A Proactive Strategy for Managing the Nation's Public Assets (1998); (2) Investments in Federal Facilities: Asset Management Strategies for the 21st Century (2004); and (3) Predicting Outcomes from Investments in Maintenance and Repair for Federal Facilities (2012). As we previously reported, the nine leading practices we employed were the ones we identified as being the most relevant and appropriate to federal agencies managing their deferred maintenance and repair backlogs; however, these practices do not represent all actions that federal agencies can employ to improve management of their real property, including their real property maintenance and repair backlogs.
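To make the automated edit checks described above concrete, the following is a minimal sketch of the kind of tests an analyst might run against backlog records. It is illustrative only: the field names, project identifiers, and dollar values are hypothetical and do not reflect the Coast Guard's actual data schema.

```python
# Minimal sketch of automated edit checks on backlog records.
# All fields and values below are hypothetical.
backlog = [
    {"id": "P-001", "facility": "Base pier",          "cost_estimate_m": 4.2},
    {"id": "P-002", "facility": "Air station hangar", "cost_estimate_m": None},
    {"id": "P-002", "facility": "Air station hangar", "cost_estimate_m": None},  # duplicate row
    {"id": "P-003", "facility": "Boat station roof",  "cost_estimate_m": -1.0},
]

def edit_checks(records):
    """Flag obvious errors and anomalies: duplicate project IDs,
    missing cost estimates, and non-positive cost values."""
    findings, seen = [], set()
    for rec in records:
        if rec["id"] in seen:
            findings.append((rec["id"], "duplicate project ID"))
        seen.add(rec["id"])
        if rec["cost_estimate_m"] is None:
            findings.append((rec["id"], "missing cost estimate"))
        elif rec["cost_estimate_m"] <= 0:
            findings.append((rec["id"], "non-positive cost estimate"))
    return findings

for project_id, issue in edit_checks(backlog):
    print(f"{project_id}: {issue}")
```

Checks like these surface exactly the limitations noted above; for example, projects with missing cost estimates are one reason a backlog total can be understated.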
To evaluate the extent to which the Coast Guard's process for managing its shore infrastructure met leading practices for managing public maintenance backlogs, we analyzed Coast Guard plans, policies, procedures, and related laws for managing, maintaining, and repairing shore infrastructure. We identified and analyzed Coast Guard guidance on its process for making maintenance and repair decisions, and assessed Coast Guard practices against our main criteria, the leading practices discussed above. We also compared Coast Guard practices with the Office of Management and Budget's (OMB) program evaluation and capital programming guidance. We used the following scale to evaluate the Coast Guard's management of its shore infrastructure deferred maintenance and repair: Met—The Coast Guard properly considered the leading practice and demonstrated with documentary evidence that it had fully applied it. Partially Met—The Coast Guard properly considered and demonstrated with some documentary evidence that it had applied the leading practice to some extent. Not Met—The Coast Guard did not properly consider or apply the leading practice and had no documentary evidence verifying that it had applied it. To further our understanding of the Coast Guard's process for prioritizing PC&I and deferred maintenance projects and the extent to which Coast Guard actions aligned with the aforementioned leading practices, we interviewed knowledgeable Coast Guard officials with a role in making or implementing decisions related to shore infrastructure to obtain their perspectives. Specifically, we interviewed officials from Coast Guard units to (1) obtain information about local conditions and maintenance practices, and/or (2) obtain information on the experiences these officials had pertaining to the PC&I planning board, central DLM planning board, and/or regional DLM planning board processes. We interviewed officials from all six of the Coast Guard's regional CEUs, which are responsible for assessing the condition of the Coast Guard's shore infrastructure, to obtain their perspectives on this topic and to determine the extent to which data from one CEU is comparable to data from another. We also interviewed officials from the Atlantic and Pacific Areas in order to obtain a high-level regional perspective on requirements, conditions, and planning efforts. To evaluate how Coast Guard leadership assesses the condition of its infrastructure and makes trade-offs between competing projects, we also interviewed officials from Coast Guard headquarters units that oversee the Coast Guard's shore infrastructure. These interviews included officials from the Office of Civil Engineering, the Shore Infrastructure Logistics Center, the Facilities Operations & Support Division, and the Office of the Assistant Commandant for Capability. To (1) identify examples of what is known about the condition and costs of managing the Coast Guard's shore infrastructure, and (2) obtain information about the Coast Guard's process for managing its shore infrastructure, we conducted a site visit to Coast Guard Base Alameda in Alameda, CA. The selection of Base Alameda for our site visit was based on the concentration there of regional Coast Guard leadership and Coast Guard facilities. Our findings from our Base Alameda site visit are not generalizable to other Coast Guard facilities.
Additionally, because the Coast Guard personnel we interviewed were not necessarily performing the same function or role, or even stationed in Alameda, for all years covered by our review (2012-2018), our findings from these interviews are not necessarily generalizable across time. Taken as a whole, however, our site visit provided us with insights into the condition of the Coast Guard's shore infrastructure and into the processes the Coast Guard uses to maintain, repair, and replace these assets. We conducted this performance audit from November 2017 to February 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Characteristics of Coast Guard's Shore Infrastructure Procurement, Construction, and Improvements Backlog This appendix provides summary statistics for the Coast Guard's Procurement, Construction, and Improvements (PC&I) backlog as of June of each year from 2012 through 2018. Table 9 provides details of individual shore infrastructure projects on the PC&I backlog, table 10 provides details of aids to navigation and projects that were grouped together by the Coast Guard for planning purposes, and table 11 sums the values in tables 9 and 10. Appendix III: Comments from the Department of Homeland Security Appendix IV: GAO Contact and Staff Acknowledgements GAO Contact Nathan J. Anderson, (202) 512-3841 or andersonn@gao.gov. Staff Acknowledgements In addition to the contact above, Dawn Hoff (Assistant Director), Andrew Curry (Analyst-in-Charge), Michael Armes, John Bauckman, Chuck Bausell, Rick Cederholm, Billy Commons, John Crawford, Michele Fejfar, Peter Haderlein, Eric Hauswirth, Landis Lindsey, Michael Pinkham, Maria Mercado, Jan Montgomery, Forrest Rule, Christine San, and Adam Vogt made key contributions to this report.
Why GAO Did This Study The Coast Guard, within the Department of Homeland Security (DHS), owns or leases more than 20,000 shore facilities, such as piers, docks, boat stations, air stations, and housing units, at more than 2,700 locations. In June 2017, the Coast Guard testified to Congress that it had a $1.6 billion recapitalization backlog for its shore infrastructure, which had a replacement value of about $20 billion. GAO was asked to review the Coast Guard's management of its shore infrastructure. This report examines (1) what is known about the condition and costs of managing the Coast Guard's shore infrastructure, and (2) the extent to which the Coast Guard's process for managing its shore infrastructure meets leading practices. To answer these questions, GAO reviewed relevant laws and Coast Guard annual reports on its shore infrastructure, analyzed Coast Guard data, and interviewed Coast Guard officials. GAO also compared Coast Guard policies and procedures, and actions taken during fiscal years 2012 through 2018 to manage its shore infrastructure, against the leading practices that GAO previously identified for managing public sector maintenance backlogs. What GAO Found About 45 percent of the Coast Guard's shore infrastructure is beyond its service life, and its current backlogs of maintenance and recapitalization projects, as of 2018, will cost at least $2.6 billion to address, according to Coast Guard information. The deferred maintenance backlog included more than 5,600 projects, with an estimated cost of $900 million. The recapitalization and new construction backlog had 125 projects, with an estimated cost of at least $1.77 billion as of 2018. GAO's analysis of Coast Guard data found that as of November 2018 there were hundreds of recapitalization projects without cost estimates—the majority of recapitalization projects. Coast Guard officials told GAO that these projects are in the preliminary stages of development. The Coast Guard's process for managing its shore infrastructure did not fully meet 6 of 9 leading practices that GAO previously identified. Of the nine leading practices, the Coast Guard met three, partially met three, and did not meet three. For example, the Coast Guard generally has not employed models for predicting the outcome of maintenance investments and optimizing among competing investments, as called for in leading practices. In one instance, the Coast Guard used a model to optimize maintenance for its aviation pavement and, according to Coast Guard officials, found that it could save nearly $14 million by accelerating investment in this area (e.g., paving runways) rather than deferring such maintenance. Coast Guard officials told GAO that such modeling could be applied within and across all of its shore infrastructure asset types, but the Coast Guard did not implement the results of this model and does not require the use of such models. Without requiring the use of such models, the Coast Guard could be missing opportunities to achieve cost savings and better manage its maintenance backlogs. What GAO Recommends GAO is making six recommendations, which DHS agreed to implement, including that the Coast Guard align its management of its shore infrastructure backlogs with leading practices by requiring the use of models for predicting the outcome of, and optimizing among, competing investments for maintenance projects.
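To illustrate the kind of analysis that such models perform, the following is a minimal sketch of optimizing among competing maintenance investments under a fixed budget. It is emphatically not the Coast Guard's aviation pavement model: the projects, costs, and avoided-cost figures are hypothetical, and a production model would rely on integer programming or simulation rather than brute-force enumeration.

```python
# Minimal sketch of optimizing among competing maintenance investments.
# Projects and dollar figures are hypothetical.
from itertools import combinations

# (project, upfront cost in $M, estimated future cost avoided in $M).
# Deferring maintenance typically raises the eventual repair bill,
# so the avoided cost exceeds the upfront cost.
projects = [
    ("Runway repaving",  12.0, 26.0),
    ("Pier pile repair",  8.0, 15.0),
    ("Roof replacement",  3.5,  6.0),
    ("HVAC overhaul",     5.0,  7.5),
]

BUDGET_M = 20.0  # funding available this cycle, in $M

def best_portfolio(projects, budget):
    """Brute-force 0/1 knapsack: pick the affordable set of projects
    that maximizes net avoided future cost."""
    best, best_value = (), 0.0
    for r in range(len(projects) + 1):
        for combo in combinations(projects, r):
            cost = sum(p[1] for p in combo)
            value = sum(p[2] - p[1] for p in combo)  # net avoided cost
            if cost <= budget and value > best_value:
                best, best_value = combo, value
    return best, best_value

chosen, net_avoided = best_portfolio(projects, BUDGET_M)
for name, cost, avoided in chosen:
    print(f"Fund now: {name} (${cost}M now avoids ${avoided}M later)")
print(f"Net future cost avoided: ${net_avoided}M")
```

The logic mirrors the aviation pavement finding: funding certain projects now, such as repaving, avoids larger repair bills later, and a model makes those trade-offs explicit across competing investments.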
Background Effective communications are vital to first responders' ability to respond to emergencies and ensure the safety of both their personnel and the public. In particular, first responders use communications systems to gather information, coordinate a response, and request additional resources and assistance from neighboring jurisdictions and the federal government. First responders use different communications systems, such as land mobile radio (LMR) and commercial wireless services. LMR: These systems are the primary means for first responders to gather and share information while conducting their daily operations and to coordinate their emergency response efforts. LMR systems are intended to provide secure, reliable voice communications in a variety of environments, scenarios, and emergencies. Across the nation, there are thousands of separate LMR systems. They operate by transmitting voice communications through radio waves at specific frequencies and channels within the radio frequency portion of the electromagnetic spectrum. Commercial wireless services: For data transmissions (such as location information, images, and video), public safety entities often pay for commercial wireless services. Some jurisdictions also use commercial wireless services for voice communications. These systems must work together, or be interoperable, to ensure effective communication. Emergency communications interoperability refers to the ability of first responders and public safety officials to use their radios and other equipment to communicate with each other across agencies and jurisdictions when needed and as authorized, as shown in our hypothetical example of a response to a fire in figure 1. First responders may use designated radio frequencies—known as interoperability channels—to help communicate among different jurisdictions. Certain interoperability channels have been designated for federal agencies to communicate with non-federal agencies, and others have been designated for use at the state and local levels. OEC, created within DHS in 2007, has taken a number of steps aimed at supporting and promoting the ability of public safety officials to communicate in emergencies and work toward operable and interoperable emergency communications nationwide. OEC develops policy and guidance supporting emergency communications across all levels of government and various types of technologies. OEC also provides technical assistance—including training, tools, and online and on-site assistance—for federal, state, local, and tribal first responders. Also, as required by the Post-Katrina Act, OEC developed the National Emergency Communications Plan in 2008 and worked with federal, state, local, and tribal jurisdictions to update it in 2014 to reflect an evolving communications environment. The long-term vision of the plan—which OEC views as the nation's current strategic plan for emergency communications—is to enable the nation's emergency response community to communicate and share information across levels of government, jurisdictions, disciplines, and organizations for all threats and hazards, as needed and when authorized. FEMA is responsible for coordinating government-wide disaster response efforts, including on-the-ground emergency communications support and some technical assistance. Additionally, FEMA provides a range of grant assistance to state, local, tribal, and territorial entities, including preparedness grants that can be used for emergency communications.
FEMA provides assistance to the RECCWGs, which report to their respective FEMA regional administrator. A chair and co-chair serve as the leaders for each RECCWG and provide direction in determining activities and priorities. These groups are composed of federal, state, and local officials, and coordinate with private sector stakeholders. For example, members include representatives from local fire departments, state and local police departments, tribal officials, telecommunications companies, and federal agencies. Figure 2 shows the member states and territories that compose each group. The Post-Katrina Act established the RECCWGs and requires each group to assess local emergency communications systems to meet goals of the National Emergency Communications Plan; to ensure a coordination process for multijurisdictional and multi-agency emergency communications networks through the expanded use of mutual aid agreements for emergency-management and public-safety communications; and to coordinate support services and networks designed to address immediate needs in responding to disasters, acts of terrorism, and other manmade disasters. According to FEMA officials, these groups are run by their members and determine their own activities. FEMA plays a role in facilitating the groups and provides some administrative support. Each group reports annually on the status of the region's operable and interoperable emergency-communications initiatives. In these reports, the groups describe how they fulfill their responsibilities and identify areas for improvement. FEMA compiles the reports into a RECCWG annual report with an executive summary and distributes it to the heads of OEC, the Federal Communications Commission, and the National Telecommunications and Information Administration, as well as to the groups themselves, which may further distribute the final report as they see fit. Selected Stakeholders Cited Ongoing Interoperability, Funding, and Training Challenges We identified several prevalent challenges to emergency communications based on our analysis of RECCWG annual reports, case studies, and interviews with emergency communications stakeholders. These challenges included achieving the interoperability of communication systems, obtaining funding, ensuring ongoing training, and increasing the emphasis on communications during emergency response exercises. As discussed in more detail later, DHS technical assistance and grant programs as well as coordination efforts of the RECCWGs have focused on addressing these ongoing challenges. Interoperability Challenges We identified ongoing technical and non-technical challenges in achieving interoperability of emergency communications systems. In the 2016 RECCWG annual report, most of these groups (7 of 10) cited interoperability as a challenge to emergency communications in their regions. We have reported over the years that interoperability issues can affect mission operation and put first responders and the public at risk when responding officials cannot communicate with one another. Technical Challenges Interoperability challenges can exist due to technical issues such as incompatible equipment. As mentioned previously, first responders primarily rely on LMR to communicate and coordinate during emergencies.
Although LMR systems have similar components, such as handheld portable radios and mobile radios mounted in vehicles, systems that operate on different radio frequency bands are not always interoperable, making it difficult for different jurisdictions to communicate with each other without technical solutions such as multi-band radios and interoperable gateways. Within Los Angeles County, local stakeholders told us that many jurisdictions use LMR systems that operate on different radio frequency bands across the area’s 88 cities and 56 law enforcement agencies. When an emergency involves first responders from a variety of jurisdictions, communication among them can be challenging. For example, one stakeholder told us about an incident in September 2015 where a carjacking turned into a car chase through multiple jurisdictions before the suspect barricaded himself with hostages in a restaurant. The restaurant was surrounded by multiple law enforcement entities and none of them could immediately communicate with each other since their LMR systems operated on different radio frequency bands. According to this stakeholder, this interoperability challenge was dangerous because the officers could not share information such as a description of the suspect. Interoperability challenges can also exist because of a reliance on commercial wireless providers for voice and data emergency communications. In such cases, if the commercial network is overloaded or damaged, first responders could be unable to communicate within their own agency. This situation could also result in interoperability challenges when an agency’s first responders cannot communicate with other jurisdictions. According to a 2017 OEC report, reliance on commercial providers for first responders’ voice and data access can be problematic for a variety of reasons—including that they must share these networks with the public. According to the report, recent events around the country have demonstrated that regional and city commercial networks are sometimes overwhelmed and compromised by both routine events and large gatherings of people. For instance, the report stated that during the 2017 Mardi Gras celebrations in New Orleans, first responders’ wireless voice and data connections were impaired while responding to an accident along the parade route, possibly because of the spike in cellular usage by the public. Additionally, two stakeholders from the same region told us that a state in their region does not have a statewide LMR system and relies on commercial wireless service for emergency communications; such reliance could cause interoperability challenges in the event of an emergency. The First Responder Network Authority (FirstNet) is working to establish a nationwide dedicated network for public safety use that is intended to foster greater interoperability, support important data transmissions, and meet public safety officials’ reliability needs. FirstNet is working with five jurisdictions designated as “early builder projects” of the public-safety broadband network that are deploying local and regional public-safety broadband networks similar to what FirstNet must do on a national scale. Non-Technical Challenges Interoperability challenges can also result from non-technical or human factors such as a lack of coordination or not properly using interoperability channels. 
Additionally, as we reported in 2016, 23 states' responses to our survey indicated that they have experienced interoperability difficulties when communicating or attempting to communicate with federal partners during disasters. For example, following Hurricane Harvey, stakeholders with the City of Houston and Harris County reported interoperability challenges when they were unable to communicate with members of FEMA's Urban Search & Rescue teams deployed to the area. According to stakeholders we interviewed, local responders were initially unaware that these teams were operating in the area because the teams did not share information—including the LMR channels on which they were operating—with local first responders. According to a stakeholder from the State of Texas, this was a communications coordination challenge. Stakeholders from the City of Houston, Harris County, and the State of Texas told us that having this information would have been useful to help coordinate emergency response. FEMA officials told us that they were aware of this issue, which they noted was an isolated incident, and have emphasized to these teams the importance of sharing this information in the future. We also found that at least one stakeholder in each of our case study locations identified challenges due to first responders not using interoperable LMR channels properly. Additionally, a report about the response to the Boston Marathon bombings stated that first responders underutilized dedicated channels or had difficulty accessing them, a situation that limited coordination. Two stakeholders in Boston told us that officials in the city develop a comprehensive communications plan for major events to help all levels of government communicate better, but one of these stakeholders said there is a continued need for training on using interoperability channels. As discussed later, DHS offers technical assistance and grants to improve interoperability. Challenges with Training and Exercises Based on RECCWG annual reports, our case studies, and interviews with stakeholders, we identified two challenges to emergency communications: (1) an ongoing need for training and (2) the lack of a communications component in emergency response exercises. Stakeholders in each of our three case study locations told us there is an ongoing need for training and practice in using emergency communications equipment. Additionally, this issue was raised in a recent RECCWG annual report and a report about the response to the 2013 Boston Marathon bombing. Stakeholders in two of our case study locations, Los Angeles and Boston, told us that first responders continue to need training after investments are made in new interoperable communications equipment. In addition, stakeholders from all three of our case study locations told us that first responders need training on the proper use of interoperability channels. For example, this gap in training was the case during the response to the Boston Marathon bombing when responders used their everyday channels rather than interoperable channels. If all responders are not operating on the same channels, there is the possibility of missing critical information. Additionally, with staff turnover and position changes, four stakeholders told us there is a constant need to educate first responders and other personnel.
For example, officials from one department told us that emergency communications training is always a challenge with their approximately 10,000 personnel. Other stakeholders also told us that public safety officials must know how to properly use new technologies and that evolving technology requires additional training. OEC officials said that their training and technical assistance has evolved to address new and emerging technologies such as broadband. For example, OEC’s current technical assistance catalog contains new or revised offerings on topics related to Next Generation 911 such as the technical and procedural challenges associated with integrating digital communications into these 911 systems. OEC officials told us they work with various emergency communications stakeholders, such as state and local agencies, to stay informed about training needs. Exercises—which can be planned and carried out at the federal, state, or local level—are important in preparing for emergencies because they can expose challenges, which can then be addressed before an actual emergency, according to stakeholders we interviewed. According to OEC officials, these exercises are intended to simulate large-scale disasters or emergencies and bring participants (including first responders, state and federal officials, hospital personnel, etc.) together to test equipment and actual response procedures. According to DHS’s Interoperability Continuum, implementing effective exercise programs to practice communications interoperability is essential for ensuring that the technology works and that first responders are able to effectively communicate. One stakeholder in Houston told us that planned events prior to Hurricane Harvey revealed that many first responders in the area were not comfortable using interoperability channels because they did not typically operate on these channels or did not need to use radios for their daily work. After planned events (such as the 2017 Super Bowl), they gained experience and familiarity, and were able to use these interoperability channels without incident during the response to Hurricane Harvey, according to this stakeholder. According to RECCWG annual reports in 2015 and 2016, major emergency-response exercises often do not include a large communications component, which can limit the preparedness of state and local public safety officials. Additionally, the 2016 RECCWG annual report states that in a large-scale disaster, compromised or insufficient communications can have dramatic effects on response efforts. All 10 RECCWGs agreed on the need to test communications during emergency response exercises, and two of these groups cited this need as a specific priority for the upcoming year. FEMA officials told us they are working to build scenarios into exercises that will also help to test communications. Three stakeholders told us that during large-scale events, there is still too often an assumption that emergency communications systems will remain operational in the event of an emergency. The stakeholders said exercises are more beneficial and realistic when communications personnel are included in exercise planning and the exercises include a communications component. OEC officials told us that communications are frequently either omitted from or only notionally included in exercises and assessments, and because of this situation, OEC offers training on planning exercises. 
As discussed later, DHS offers technical assistance to help address the above challenges related to training and exercises. Funding Challenges Based on RECCWG annual reports and interviews with emergency communications stakeholders, we identified challenges in obtaining funding for acquiring and maintaining interoperable equipment and systems, as well as for travel and training. For example, a recent RECCWG annual report noted that determining funding sources to address interoperability needs was a challenge. This report raised concerns that two federal grant programs that jurisdictions previously used to address interoperability needs are no longer funded. Stakeholders told us that DHS grant programs have been important for emergency communications projects in their regions. They also noted that within a jurisdiction many projects compete for a limited amount of funding. For example, one stakeholder explained that even after his jurisdiction used a DHS grant to purchase a new LMR system, the jurisdiction must continue to seek funding to upgrade and maintain the system. Further, one recent RECCWG annual report identified funding limitations as causing many states and agencies to make trade-offs among capabilities essential for operable and interoperable communications—such as deciding whether to upgrade equipment or systems. As existing communications systems and equipment continue to age or become obsolete, these trade-offs put the agencies at an increasing risk of not being able to effectively exchange communications during an event response, according to this recent RECCWG annual report. Additionally, leaders from all 10 RECCWGs told us funding was currently a challenge to emergency communications in their region. For example, half (10 of 20) of these group leaders cited limited funding to upgrade or replace equipment as a challenge in their region. According to a leader in one region that identified funding as a major challenge, many entities within the region need funding for this purpose. This leader noted that efforts to find alternative funding sources have not been successful and that as emergency communications technology evolves it will grow increasingly difficult for first responders to keep pace with the changes. Likewise, representatives from one public safety association told us that maintaining interoperable communications is a challenge due to the expense of new radios and software. As a result, they noted that jurisdictions, particularly those in less populated areas, might decide to purchase less costly equipment that is not interoperable. Such purchases can result in emergency communications challenges. The leader of one RECCWG told us that due to consistent budget shortfalls over the past several years, one state in the region has deferred maintenance of communications infrastructure. This deferral is expected to create more expensive problems in the future. Leaders from 5 of the 10 RECCWGs told us they have also experienced funding challenges related to travel or training. For example, one regional group leader told us that funding shortfalls prevent personnel from attending courses that would increase their knowledge of equipment and new technologies. Another regional group leader told us that travel money is very limited; given the large geographic area covered by this RECCWG, it is expensive for group members to travel to meetings, inhibiting participation and information sharing at RECCWG meetings.
Stakeholders Indicated DHS's Technical Assistance and Grants Have Enhanced Emergency Communications in Their Regions Technical Assistance Technical assistance, including guidance and training, is one of OEC's main responsibilities; FEMA also provides certain technical assistance, although it is not FEMA's primary responsibility. These OEC and FEMA efforts are intended to help address emergency communications challenges, including those discussed above. OEC offers various types of technical assistance, such as workshops and assessments to help participants strengthen their communications plans and governance structures, as well as a seminar to help participants incorporate communications into emergency response exercises. According to OEC officials, they have delivered more than 2,000 technical assistance training courses and workshops since OEC was created in 2007. In addition, OEC has developed other resources, such as a toolkit for managing emergency communications at planned events such as the Super Bowl. According to OEC officials, they have a technical assistance budget of approximately $9 million per year, and OEC delivers this assistance at no cost to the requesting state or territory. OEC also has 11 subject matter experts located across the country who help jurisdictions with their communications programs and resources. These individuals seek to build partnerships across different levels of government and the private sector and are involved with their respective RECCWGs. FEMA offers training related to emergency communications, such as various courses on emergency management topics. FEMA also has 10 regional emergency communications coordinators who are responsible for providing assistance on an as-needed basis to their respective regions and coordinating FEMA's tactical communications support during a disaster or emergency. These coordinators also support the RECCWGs. OEC and FEMA jointly provide training to first responders and other public safety officials to prepare them to act as communications unit leaders. OEC also provides training for other specialized communications support roles. The communications unit is part of a standardized organizational emergency response structure called the Incident Command System. When a disaster or emergency occurs, the communications unit is responsible for managing the operational and technical aspects of communications. For example, one of the unit leader's tasks is developing a plan to coordinate the radio frequencies used by first responders, to help ensure interoperability; a simplified sketch of this kind of channel check appears below. The unit may also include a communications technician who provides the technical skills to implement the required equipment and systems. OEC trained more than 8,000 individuals between 2007 and August 2017 to serve in communications unit positions, according to OEC information. While stakeholders continue to face a range of emergency communications challenges, they are generally satisfied with DHS's technical assistance to help address them. Specifically, nearly all the stakeholders we contacted (36 of 41) were generally satisfied with technical assistance from OEC, FEMA, or both. In addition, in 2016 we reported that all states had received OEC technical assistance and that almost all were satisfied with the support they received from OEC.
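As a rough illustration of the frequency-coordination task just described, the following sketch checks whether every agency assigned to an incident function can actually reach that function's designated channel. The agencies, channel capabilities, and assignments are hypothetical (the channel names echo common national interoperability designators), and real communications unit leaders document such plans on standardized incident forms rather than in code.

```python
# Minimal sketch of checking an incident radio plan: can every agency
# assigned to a function reach that function's channel? All data here
# are hypothetical.
agency_channels = {
    "City Fire":    {"VTAC11", "VFIRE21", "CityOps1"},
    "County EMS":   {"VTAC11", "VMED28"},
    "State Police": {"VTAC11", "VLAW31"},
}

# function: (designated channel, agencies assigned to that function)
incident_plan = {
    "Command":       ("VTAC11",  ["City Fire", "County EMS", "State Police"]),
    "Fire tactical": ("VFIRE21", ["City Fire"]),
    "Medical":       ("VMED28",  ["County EMS", "City Fire"]),
}

def check_plan(plan, capabilities):
    """Print a gap for each agency that cannot reach its assigned channel."""
    for function, (channel, agencies) in plan.items():
        for agency in agencies:
            if channel not in capabilities.get(agency, set()):
                print(f"GAP: {agency} cannot reach {channel} ({function})")

check_plan(incident_plan, agency_channels)
# Prints: GAP: City Fire cannot reach VMED28 (Medical)
```

A gap flagged this way before an event, such as a fire agency whose radios are not programmed with the medical channel, is exactly the kind of issue that the training and exercises discussed in this report are meant to surface.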
When asked about the general topic of DHS technical assistance, more than half (25 of 41) of stakeholders we interviewed said that training for communications unit positions was useful in advancing emergency communications capabilities in their jurisdictions. OEC and FEMA also employ a "train-the-trainer" approach for the communications unit leader course. Houston-area stakeholders told us that over 1,000 local personnel across the state had received communications unit training and that the area now has a large number of local trainers. Five stakeholders we interviewed for our Houston case study praised this training and said it was critical in preparing communications personnel to respond to Hurricane Harvey. Specifically, one stakeholder who served as a communications unit leader during Hurricane Harvey told us that this training prepared him to develop an effective interoperable radio communications plan for the storm. This individual also said that first responders who came to assist from outside the region often brought their own communications unit leaders with them, and because this training is consistent nationwide, the outside groups knew how the response effort would be organized and whom to call about which radio frequencies to use. However, a stakeholder from the Los Angeles area told us that while the communications unit training itself was useful, it was insufficient without opportunities to practice the skills in real-life situations, a challenge that other stakeholders also noted in a recent RECCWG annual report. Based on feedback from state and local personnel, OEC is assisting states with establishing policies and procedures for their communications unit resources, including a process to demonstrate skills required for these specialized positions. While stakeholders are generally satisfied with technical assistance, many (19 of 41) stakeholders said their jurisdictions would still benefit from additional technical assistance, aligning with the challenge we identified earlier regarding the need for training. Four stakeholders told us that OEC adapted technical assistance offerings to the needs of their jurisdictions. OEC officials told us that they customize technical assistance as needed—for example, when providing communications-planning support to a local jurisdiction, OEC will collect local agencies' policies and facilitate a discussion with stakeholders to determine the best overall approach. A stakeholder in Texas said that OEC's technical assistance—including communications-focused exercises and support with developing a statewide interoperability plan—had helped to advance capabilities in the state. Another stakeholder told us that FEMA's training has been critical in helping tribal nations build emergency-management programs, including providing an introduction to emergency communications. When asked about their experiences with technical assistance, six stakeholders specifically told us they had benefited from OEC's support with communications planning or coordination for special events, such as the Super Bowl. Each state or territory can request up to five offerings per year from OEC's technical assistance catalog, and OEC officials told us that, given their available resources, they can generally fulfill about 60–70 percent of requests each year. Grant Funding DHS administers several grant programs to help address emergency communications challenges.
Three programs provided the majority of DHS's grant funding aimed at improving emergency communications from fiscal year 2011 to 2016, based on our analysis of data from FEMA's Grants Reporting Tool. FEMA administers these three grant programs, which are intended to support a wide range of emergency response capabilities, one of which is operational communications. Urban Area Security Initiative: Assists high-threat, high-density urban areas in efforts to build and sustain the capabilities necessary to prevent, protect against, mitigate, respond to, and recover from acts of terrorism. This assistance can include building, sustaining, and enhancing emergency preparedness activities, including emergency communications interoperability. State Homeland Security Program: Assists state, local, tribal, and territorial preparedness activities that address high-priority preparedness gaps across all emergency preparedness capabilities—including communications—to prevent, protect against, respond to, and recover from acts of terrorism and other catastrophic events. Emergency Management Performance Grant: Assists state, local, tribal, and territorial emergency-management agencies in preparing for "all hazards," and can be used to support all capabilities, including communications. Each state and territory and the District of Columbia receive a base amount of funding, and the program requires recipients to commit matching funds. According to FEMA's data, which are reported by recipients, between fiscal years 2011 and 2016 more than $700 million in grants were provided to support emergency communications, as described in table 1. According to FEMA officials, these funding amounts are approximate totals because the recipient-reported data have certain limitations. For example, the information may be incomplete if the recipient does not submit required biannual reports. In addition, FEMA officials told us that recipients identify which core capability the funding was used to support, but the data might not capture all aspects of a project because only one core capability may be selected at a time. FEMA officials told us that FEMA tracks funds obligated and disbursed at the overall grant level and uses the recipient-reported data to have a general understanding of how funding supports emergency communications and other capabilities. According to FEMA officials, recipient-reported data are sufficient for that general purpose. We have a substantial body of work related to DHS's grant program management and in 2013 recommended that FEMA make improvements in collecting and validating performance data for certain grant programs. FEMA implemented these improvements in 2017. FEMA officials told us they have also initiated a multi-year effort to improve the oversight and monitoring of grants and support data analytics for improved efficiencies—called the Grant Management Modernization program—which is scheduled to be operational in 2020. Given these ongoing actions, we did not assess FEMA's grants management efforts as part of this review. Some state and local stakeholders told us that DHS grants (outlined in table 1 above) have allowed them to build and enhance communications capabilities that their jurisdictions would otherwise lack funding to address. These grants have been used to, among other things, build interoperable communications networks and purchase equipment, for example: Urban Area Security Initiative grant funds were used to enhance a regional radio system in the Houston area.
According to stakeholders, the system helped the region respond to Hurricane Harvey because it enhanced interoperability in the Houston area, so that first responders from multiple counties and agencies were all using the same system to communicate. Urban Area Security Initiative grant funds have also been used to help build the LMR component of an interoperable communications network in Los Angeles County. Urban Area Security Initiative and State Homeland Security Program grant funds were used to build a large radio cache in Massachusetts, with over 400 multi-band radios that can be quickly deployed into the field to support both emergency and planned events across multiple jurisdictions. One stakeholder told us that these radios are requested on a regular, often weekly, basis. Emergency Management Performance grants have been used to establish and enhance state and local emergency operations centers across the country. These centers are activated during disasters and emergencies and provide a single location for leaders to coordinate the response effort, including the coordination of communications. RECCWGs Have Enhanced Capabilities in Several Ways, but Collaboration across Regions Is Limited As part of the Post-Katrina Act, Congress established the RECCWGs to help address emergency communications issues, such as a lack of equipment interoperability. We found that the RECCWGs have enhanced emergency communications capabilities through relationship building and information sharing—with demonstrated benefits. Although these groups have had successes, they still face challenges, such as ensuring continuous and broad participation and increasing the national visibility of the groups. Further, collaboration across these groups is limited. Without ways to collaborate across the regions, RECCWG members may be missing opportunities to share best practices and leverage the experience of their counterparts nationwide. RECCWGs Facilitate Relationship Building and Information Sharing, with Demonstrated Benefits Relationship Building The RECCWGs bring together communications stakeholders from different levels of government and the private sector, and all of these groups have identified relationship building as a major benefit, according to our analysis of RECCWG annual reports and interviews with these groups' leaders. Members expand their professional networks and build relationships within their regions when they gather for in-person meetings and participate in regular conference calls. For example, a leader of one RECCWG told us that through these interactions, members learn about each other's areas of expertise and also make connections in the region. A leader of another RECCWG told us that his members were more willing to call on each other for assistance because of the strong working relationships they had developed in the group. The relationships established in these groups have facilitated cooperation and resulted in more effective emergency response efforts, as described below. Information Sharing All of the RECCWGs share best practices and lessons learned, according to the groups' annual reports and the leaders of these groups. Information sharing takes a variety of forms, including discussing lessons learned after disasters or other major events, sharing experiences with new technologies, and presenting information from federal and private industry partners. For example, the Region X group reported in 2016 that members shared lessons learned after declared disasters in several states.
Further, according to the 2016 RECCWG annual report, in Region VII, members from Nebraska shared their experiences with expanding their statewide LMR system. This expansion helped members in Iowa construct their own system in a more timely and cost-effective way. RECCWG members share information about communications resources within their regions; these resources can be deployed when a disaster or emergency occurs. For example, nearly all of these groups (9 of 10) have compiled or are working to compile information about communications assets, such as equipment and personnel. Information sharing about communications resources has been used to facilitate response efforts, as described below. The groups have helped promote awareness of developments in federal programs, such as the public safety broadband network, according to the 2016 RECCWG annual report. The groups also provide a forum for FEMA to understand the regions' capabilities, needs, and vulnerabilities. According to FEMA officials, they use this information to develop regional plans that help FEMA assist the regions more effectively during a disaster. Demonstrated Benefits In several instances, RECCWG members have reported assisting each other during disasters and emergencies, drawing on the relationships and information sharing fostered by the groups. For example, a member of the Region I group, which includes New England, told us that prior to his group's formation, emergency communications stakeholders from different levels of government in that region did not meet. However, because of the relationships that regional group helped to build, these stakeholders now meet regularly to develop communications plans for large planned events and have collaborated to provide communications support in responding to the Boston Marathon bombing in 2013, Hurricane Sandy in 2012, and other events both within and outside of the region. According to a leader of the Region X group, relationships developed in the group were also helpful in responding to wildfires in Washington State in 2014 and 2015. In addition, after Hurricane Matthew and a major flood in 2016, Region IV group members drew on relationships developed in the RECCWG to coordinate support from other states in the region to assist South Carolina, according to a leader of that group. As discussed earlier, nearly all of these groups (9 of 10) have shared or are working to share information about resources that can be deployed during a disaster. At least three regions have consulted these resource compilations during recent disasters. For example, according to the 2016 RECCWG annual report, this information was used during Hurricanes Hermine and Matthew in 2016, severe storms and flooding in Minnesota and Wisconsin in 2016, and severe winter storms in New England in 2015. Several RECCWGs have developed or are working to develop technical solutions to enhance interoperability within or bordering their regions, according to the groups' annual reports, the leaders of these groups, and FEMA officials. For example, the group in Region V connected disparate statewide radio systems in Illinois, Indiana, Ohio, and Michigan, so that responders would be able to communicate in the event of a regional disaster or emergency. The Region VIII group, which includes the border states of Montana and North Dakota, is working to develop solutions to enhance interoperability among states in the region and with Canada.
After the Deepwater Horizon oil spill in 2010, the Region IV group, which includes the southeastern states along the Gulf of Mexico, developed a communications network that is still in place and could be used for other events affecting the Gulf Coast. In 2011 this network was modified to connect to Arkansas and Louisiana’s statewide communications networks, and was successfully tested during a multi-state hurricane evacuation exercise. The Region IV group is also working to identify technology to directly connect emergency operations centers in the southeastern states to coordinate assistance and evacuations when other communications methods fail, according to the 2016 RECCWG annual report. RECCWGs have addressed or are working to address several policy concerns based on joint positions developed within their groups, according to the groups’ annual reports, interviews with RECCWG leaders, and FEMA officials. For example, RECCWG efforts led to changes in the National Telecommunications and Information Administration manual allowing for state and local use of federal interoperability channels, according to FEMA officials. In addition, the Region I group raised concerns regarding an interoperability challenge with Department of Defense (DOD) first responders, resulting in a nationwide rule change for DOD’s land mobile radios used for domestic response activities. After a corporate jet crashed at Hanscom Air Force Base in Massachusetts in 2014, local first responders could not communicate with the Hanscom Fire Department because the base’s radio programming policies did not permit the use of interoperable radio channels. The RECCWG subsequently collaborated with DOD and other federal agencies on an initiative to program DOD radios with national interoperability channels. In addition, during a Region VI group meeting, members learned that multiple states were experiencing a common problem with the use of national interoperability channels. They found that in multiple areas, local entities were using these channels for day-to-day operations, meaning they could not be reliably used during disaster and emergency situations because first responders experienced interference on these interoperability channels. In February 2017, the Region VI group raised its concerns to the Federal Communications Commission, which had licensed these channels to local entities for use on a secondary basis, and the group continues to work on addressing this issue. FEMA officials told us that the participation and involvement of federal agencies in the RECCWGs has been critical in addressing policy changes. RECCWGs Face Other Ongoing Challenges Although the RECCWGs have cited several achievements, they have ongoing challenges, such as ensuring broad, continuous participation and establishing national visibility for the groups, according to their annual reports and interviews with group leaders and other selected group members. Various factors can make participation in these groups difficult. Participation is on a volunteer basis, in addition to members’ regular work responsibilities, and some groups cover large geographic areas. Leaders or members from four RECCWGs told us their groups have had turnover in membership, such as when individuals move to other positions or retire. FEMA officials told us that this turnover is a challenge shared across the RECCWGs. In the 2016 RECCWG annual report, many of these groups reported progress in broadening and diversifying their membership. 
For example, 7 of 10 groups added state and local 911 representatives to their membership, and nearly all saw an increase in participation from cellular providers. However, four of the groups identified challenges with tribal participation in 2016, and all 10 groups reported that they have continued outreach to tribal nations in their respective regions. A representative from a tribal emergency-management organization told us that time and resource demands can affect the level of engagement from tribal members, because emergency response personnel for tribal nations often have many other primary responsibilities. The activity level and achievements also vary across the 10 RECCWGs, according to our analysis of the groups’ reports, as well as interviews with group leaders, selected group members, FEMA officials, and other stakeholders. As noted earlier, each group determines its own activities. Stakeholders we interviewed told us that some regions have very active groups with many achievements, while other RECCWGs meet less frequently and have had fewer achievements. For example, stakeholders from Region I told us that they meet on a monthly basis and collaborate frequently outside of formal meetings. On the other hand, a leader from another region said that his group has not been very active in recent years. According to the 2016 RECCWG annual report, that group did not have any formal meetings in 2016, and instead stakeholders worked together through other coordination groups in the members’ states and territories. We also found that the emergency communications stakeholders’ awareness of the activities of the RECCWGs can vary. For example, two stakeholders told us they are interested in regional collaboration but were not aware that these groups existed. In addition, four other stakeholders we interviewed knew about the groups in their respective regions, but they told us the groups’ activities were limited or they were not aware of what the group had done. The RECCWGs have identified other issue areas they are working to address. For example, almost all of these groups (9 of 10) are working to improve the information that states, private sector partners, and others share about communications resources that can be deployed during disasters or emergencies, according to the 2016 RECCWG annual report. In addition, a member of one RECCWG told us it can be challenging to address policy concerns when federal agencies they contact are not aware of the groups or their purpose. This stakeholder said that it was important to increase the national visibility of the groups in order to improve their effectiveness. Increasing national collaboration, as discussed below, could be one way to address this concern. Collaboration across RECCWGs Has Been Limited OEC’s National Emergency Communications Plan—which OEC views as the nation’s strategic plan for this area—established a vision of enabling the nation’s emergency response community to communicate and share information across all levels of government, disciplines, and jurisdictions. This plan has prioritized enhancing coordination among stakeholders, processes, and planning activities across the emergency response community. In addition, our previous work has found that collaboration can be used to address a range of purposes, including information sharing and communication. In this work, we identified key considerations for implementing interagency collaborative mechanisms, such as ensuring that all relevant participants have been included. 
Federal internal control standards also speak broadly to the importance of communicating to achieve an entity's objectives. FEMA has taken some steps to encourage collaboration among RECCWG leaders, but broader collaboration across regions remains limited. RECCWGs have periodically shared information with their counterparts in other regions, but according to our analysis of the groups' annual reports and interviews with group leaders, these exchanges primarily involve one region working with another on an ad hoc basis. For example, according to one group member in Region VI, members of other RECCWGs reached out to him to learn more about communications successes and challenges during Hurricane Harvey. FEMA has taken some steps to encourage information sharing and collaboration among the RECCWGs. Specifically, FEMA encouraged the establishment of a monthly conference call for RECCWG co-chairs in 2015, and its Disaster Emergency Communications division distributes a biweekly newsletter to RECCWG members, according to FEMA officials. However, there is not an ongoing mechanism for communication across all of the regions so that the full membership can effectively share information with each other and collaborate. While the co-chair conference calls are intended to enhance collaboration across the regions, the meetings do not involve the broader membership of the groups. Most RECCWG leaders (15 of 20), as well as 9 other stakeholders, told us that more collaboration across the groups was needed. For example, four stakeholders explained to us that if a RECCWG in another part of the country has identified best practices, it would be useful to share the information more broadly. Three other stakeholders who said their groups were less active told us it would still be helpful to receive information about what other groups are doing to enhance emergency communications. Stakeholders suggested several possible methods, such as an in-person conference or a national-level working group that functions using virtual or other means. FEMA officials have considered ways to enhance collaboration, but they face certain limitations. Specifically, FEMA officials told us they had considered an in-person national conference, but FEMA's budget for the groups was limited and a national conference would be too resource-intensive. FEMA officials also explained that they facilitate the groups, but the groups are run by their members. According to FEMA officials, they have tried some ways to enhance collaboration across the RECCWGs, such as by encouraging the groups to extend meeting invitations to other regions and use online portals for collaboration. Developing and implementing an appropriate ongoing mechanism for collaboration may be a worthwhile investment because it could further enhance the RECCWGs' efforts to improve emergency communications. Reaching a consensus with RECCWG members may help FEMA determine options that are both useful for the membership and feasible, given FEMA's resource constraints. In its role as facilitator for the RECCWGs, FEMA is well positioned to lead this effort. Without ways for all members of the RECCWGs, not just the groups' leaders, to collaborate across regions, members may be missing opportunities to share best practices and leverage the knowledge and experience of their counterparts throughout the nation.
For example, lessons learned from Hurricane Harvey and other natural disasters in 2017—such as how first responders used interoperability channels effectively—may not be shared across all of the regions without additional methods for collaboration. Further, several of these groups are working to address similar challenges and priorities, as discussed above. For example, nearly all of the groups want to improve the way information about emergency communications resources is shared in their regions, so that these resources can be better leveraged during disasters and emergencies. Some of the RECCWGs have explored ways to better leverage these resources, but in the absence of methods to exchange information more broadly, RECCWGs may not be able to easily share what has been successful for their regions. Conclusions When disasters strike or emergencies arise, they can span multiple jurisdictions, making coordination and collaboration critically important for effective emergency response. The RECCWGs established by the Post-Katrina Act have enhanced emergency communications within their regions. While the relationship building and information sharing within these groups have contributed to benefits at the regional level, nationwide collaboration among the groups has been more limited. Such collaboration could help the groups address common challenges by providing a way to improve the sharing of best practices and lessons learned and to allow members to leverage the knowledge and experience of their counterparts to improve emergency communications capabilities in their regions and nationwide. Therefore, it could benefit FEMA to work with these groups to reach consensus on and to implement a mechanism for accomplishing cross-regional collaboration. A concerted effort focusing on these groups' collaboration needs, while also considering FEMA's resource constraints, could help FEMA and regional stakeholders determine an appropriate mechanism for collaboration moving forward. Recommendation for Executive Action The Administrator of FEMA should work with RECCWG members to reach consensus on and implement an ongoing mechanism to encourage nationwide collaboration across these groups, considering the costs of one or more suitable methods, such as a national-level working group that uses virtual or other means of coordination, as appropriate. (Recommendation 1) Agency Comments We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reprinted in appendix I. In its written comments, DHS concurred with our recommendation and provided an attachment describing the actions it would take to implement the recommendation. DHS noted that FEMA is committed to increased collaboration among RECCWGs to coordinate multi-state efforts and measure progress on improving the survivability, sustainability, and interoperability of communications at the regional level and nationwide. Separately, FEMA provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix II. Appendix I: Comments from the Department of Homeland Security Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, David Sausville (Assistant Director); Aaron Kaminsky (Analyst in Charge); Melissa Bodeau; Josh Ormond; Kate Perl; Cheryl Peterson; and Kelly Rubin made key contributions to this report.
Why GAO Did This Study During emergencies, reliable communications are critical. Disasters, such as 2017's hurricanes, continue to test the nation's emergency communications capabilities. As disasters can cross jurisdictional boundaries, collaboration within and across regions is very important. GAO was asked to review implementation of the Post-Katrina Act's provisions related to disaster preparedness, response, and recovery. This report examines: (1) challenges related to emergency communications that selected stakeholders have experienced; (2) their views on DHS's emergency communications assistance; and (3) the regional working groups established by the Post-Katrina Act and their effect on emergency communications capabilities. GAO reviewed DHS's reports and grant data for fiscal years 2011–2016 and conducted case studies of three cities—Houston, Los Angeles, and Boston—selected based on the number of declared disasters, DHS grant funding, and geographic diversity. GAO interviewed DHS officials; leaders of all 10 regional working groups and other stakeholders, including public safety officials in the case study cities; and others chosen for their expertise. What GAO Found Selected first responders and public safety officials identified various challenges related to emergency communications. These challenges include attaining the interoperability of communication systems, obtaining funding, ensuring ongoing training, and increasing the emphasis on communications during emergency response exercises. For example, some stakeholders told GAO about challenges related to equipment that is not interoperable, and others said first responders need training after investments are made in new interoperable communications equipment. To help address these challenges and as required by the Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act), the Department of Homeland Security (DHS) has provided technical assistance, such as training, and Federal Emergency Management Agency (FEMA) grants. It has also established regional emergency communications coordination working groups, which bring together stakeholders from different levels of government and the private sector within FEMA's 10 regions. While emergency communications challenges persist, stakeholders told GAO that DHS's technical assistance generally meets their needs and that FEMA grants have helped them enhance emergency communications capabilities. In particular, stakeholders found training for specific communications positions was useful. Houston-area officials said this training was critical in preparing first responders for Hurricane Harvey. Some stakeholders told GAO that FEMA grants helped them address needs that would otherwise go unfunded, including interoperable communications networks and equipment. GAO found that the regional working groups have enhanced emergency communications capabilities through building relationships and sharing information. Within the respective regions, group members have: assisted each other during disasters and emergencies, developed technical solutions to enhance interoperability, and addressed policy concerns, such as the use of interoperable radio channels during emergencies. However, most regional group leaders told GAO that more collaboration across the groups was needed. GAO's prior work has also found that including all relevant participants can enhance collaborative efforts. 
Further, DHS's strategic plan for emergency communications established a vision of collaboration among stakeholders across the nation. While FEMA has encouraged collaboration among regional working-group leaders, cross-regional efforts have been limited and do not involve all group members. Developing and implementing an appropriate ongoing mechanism for collaboration could enhance emergency communications capabilities, such as by helping group members address common challenges. Without ways for all members of these groups to collaborate across regions, members may be missing opportunities to share information and leverage the knowledge and experiences of their counterparts throughout the nation. What GAO Recommends FEMA should work with regional working-group members to reach consensus on and implement an ongoing mechanism, such as a national-level working group, to encourage nationwide collaboration across regions. DHS concurred with this recommendation.
Background Restitution Roles and Responsibilities DOJ and its components, as well as the judiciary, play important roles in requesting and collecting restitution. DOJ and select components: Prosecutors in DOJ's Criminal Division and the Criminal Divisions of the 94 USAOs are responsible for overseeing criminal matters, including identifying and notifying victims, determining their losses as part of a case investigation, prosecuting cases, and negotiating the terms of plea agreements, of which restitution may be a part. Within DOJ's Criminal Division, the Money Laundering and Asset Recovery Section manages DOJ's Asset Forfeiture Program. As previously stated, FLUs within each USAO undertake activities to collect restitution from offenders in their district. Additionally, all USAOs have asset forfeiture staff responsible for forfeiting property seized by law enforcement agencies because the property was used in criminal activities or purchased with the proceeds of criminal activities. According to EOUSA guidance, coordination between the FLU and Asset Forfeiture units is highly encouraged to use forfeited assets as a means to collect on unpaid restitution debts. DOJ requires each USAO to have its own policies and procedures related to debt collection efforts but allows them discretion in developing these policies and procedures to ensure that they are appropriate for local conditions. DOJ also requires USAOs to have policies and procedures to make early, effective, and coordinated asset investigations and recovery a routine part of every case involving victims but allows USAOs to specify these policies and procedures. DOJ's EOUSA provides USAOs with management assistance, guidance, training, and administrative support. Among other activities, EOUSA provides management assistance to USAOs by administering internal evaluations for each USAO, which are intended to provide on-site management support for that office. Further, EOUSA provides guidance to enhance offices' efforts to request and collect restitution. Judiciary: Within the judiciary, the 94 federal district courts order restitution, receive restitution payments, and disburse restitution to victims. Within the federal district where the offender was convicted, a probation officer prepares the presentence investigation report (PSR) for the court, which includes information on the victim's losses and an offender's financial information. Probation officers may obtain this information from DOJ, which has the statutory responsibility for the enforcement and collection of criminal debt. The court uses the PSR, among other things, to determine whether to order restitution. If an offender is released to the community by the court and placed on supervision, probation officers are responsible for ensuring the offender abides by the terms of release, including paying any restitution owed to victims. The Clerk of each District Court is responsible for the receipt of restitution from offenders and for disbursing payments to victims. The Judicial Conference is the national policy-making body for the federal courts. The Conference operates through a network of committees created to address and advise courts on a wide variety of subjects, such as information technology, personnel, probation and pretrial services, space and facilities, security, judicial salaries and benefits, budget, defender services, court administration, and rules of practice and procedure.
The Judicial Conference has taken policy positions on restitution-related issues and has supported legislative proposals to improve the restitution process. AOUSC is the agency within the judiciary that provides a broad range of legislative, legal, financial, technology, management, administrative, and program support services to federal courts. AOUSC is responsible for carrying out Judicial Conference policies, and a primary responsibility of AOUSC is to provide staff support and counsel to the Judicial Conference and its committees. USSC is an independent agency within the judiciary that, among other activities, establishes and promulgates detailed sentencing guidelines that judges are to consider in sentencing offenders convicted of federal crimes, including guidelines on when and how to order restitution. Additionally, each district court is required to submit to USSC a report of each offender's sentence that includes, among other information, details on the offenses for which the offender was convicted; the sentence imposed on the offender; and if the judge departed from the sentencing guidelines, information on reasons why. USSC maintains a database containing sentencing data on federal offenders convicted of felonies or serious misdemeanors, analyzes these data, and publishes them on an annual basis. USSC is also statutorily required to annually report to Congress its analysis of sentencing-related documents, including an accounting of districts USSC believes have not submitted appropriate information to the commission, among other things. Restitution Overview During the course of a federal criminal investigation, federal prosecutors identify and notify victims, as well as determine their losses in conjunction with the federal agents investigating the case. If a defendant pleads guilty or is found guilty at trial, the prosecutor has the burden of proving the victims' losses in court. To facilitate this, a Victim-Witness coordinator within the USAO responsible for the case provides victims the opportunity to explain their losses in detail, usually through a Victim Impact Statement. This information is then to be provided to a federal probation officer who uses it to begin a PSR. To develop the PSR, probation officers use information provided by the USAO and may contact victims and verify the loss amounts. Additionally, probation officers will investigate an offender's economic circumstances—such as whether the offender has a job, any assets, or any dependents. If a judge determines that restitution is to be ordered, the judge must order restitution for the full amount of a victim's losses, without consideration of the economic circumstances of the defendant. Judges may decline to order restitution in certain instances, for example, where restitution is discretionary, or in certain cases where the number of identifiable victims makes restitution impracticable or the complexity of calculating restitution would unduly prolong the sentencing process. If the court does not order restitution, or orders only partial restitution, the judge must provide the reason, and judges usually do so in a written Statement of Reasons document. Figure 1 provides an overview of the federal restitution process. Upon imposition of a restitution debt by the court, FLU staff use two mechanisms to determine the collectability of the debt and what collection actions to take.
First, FLU staff classify the debt into one of four categories to determine the extent to which the FLU will pursue enforcement actions to collect upon the debt. FLUs classify debts from a Priority Code 1 debt (indicating that FLUs will make collection of this debt the highest priority) to a Priority Code 4 debt (indicating that FLUs will make collection of this debt the lowest priority). Second, FLUs may suspend collection action on criminal debts, regardless of their categorization, under certain circumstances if they determine the debts are uncollectible. FLU staff may also determine that debts are permanently uncollectible and categorize them as Priority Code 4 debts. If a debtor does not provide payment, FLU staff then use various enforcement actions to collect the restitution debt. These can include, among other actions, filing liens against an offender's property, coordinating with asset forfeiture staff to use forfeited assets to pay the restitution debt, and garnishing wages an offender may earn. Victims can be compensated for losses with the proceeds of forfeited assets through DOJ's Asset Forfeiture Program and in accordance with law and regulation. Federal regulations provide that the proceeds from forfeited assets are first used to cover program costs associated with forfeiture-related activities and next to pay valid owners, lien-holders, and federal financial regulatory agencies. Forfeited assets can then be distributed to other victims of crime as compensation for their losses if their loss is a direct result of the commission of the offense underlying forfeiture or a related offense. Any remaining funds from the forfeited asset may be placed into official use or distributed to foreign governments or to state or local law enforcement agencies as part of the equitable sharing program to enhance cooperation with federal investigations. When victims are eligible for compensation using forfeited assets, DOJ employs two processes: restoration and remission. The restoration process involves USAO staff requesting funds on behalf of a victim when there is both an order of forfeiture and an order of restitution. Under the restoration process, USAO staff request that DOJ's Money Laundering and Asset Recovery Section use the forfeited asset to pay a restitution debt. If DOJ approves the request for restoration, the funds from the forfeited property are then transferred to the Clerk of the Court, who disburses this money to the victim. The remission process requires a victim of a crime to directly petition DOJ to receive funds from the forfeited property. According to officials in DOJ's Criminal Division, the courts may not order restitution on behalf of victims who suffered a specific actual loss as a direct result of a crime for a variety of reasons, and therefore the remission process serves as a complement to the restoration process to ensure victims are made whole. For example, these officials stated that, among other reasons, the courts may not order restitution if a defendant dies prior to sentencing or if the case is one in which a court is not required to, and does not, order restitution, but the victim has suffered eligible losses.
Select USAOs Reported Documenting Requests for Restitution, but the Judiciary Did Not Always Document Reasons It Was Not Ordered EOUSA and Officials in Six USAOs Told Us Their Offices Document Requests for Restitution in Case Files EOUSA and USAO officials in all six of the offices with whom we spoke told us that prosecutors document requests for restitution in their case files and that their offices employ other internal controls, such as the use of templates and forms, throughout the prosecution process to ensure that prosecutors request restitution as appropriate. EOUSA officials told us that although the agency does not track this information, they believed all USAOs generally document requests for restitution in their offices' case files. Further, USAO officials in all six offices told us that prosecutors document requests for the court to order restitution in their case files by including this information in a written memorandum. To support prosecutors in documenting this information, all six offices we selected provide prosecutors with a prosecution memorandum template. Of the six templates we reviewed, four explicitly include a section for prosecutors to indicate whether victims have been identified and the extent of any victim losses. In addition to these templates, four of six USAOs we selected had forms that prosecutors could use to identify whether cases have victims and their need for restitution when drafting criminal charging documents. Moreover, officials from two of the six USAOs told us their offices use this form as an internal control to ensure prosecutors have identified all victims and considered their need for restitution, if applicable. All six offices we selected also provided prosecutors with templates for drafting plea agreements, and templates we reviewed from all six USAOs included language requesting the offender pay restitution, if applicable. However, prosecutors are not required to use plea agreement templates, nor are they required to request restitution as part of a plea agreement. USAO officials from one office stated that including this language in the plea agreement template served to remind prosecutors of their requirement to consider requesting restitution as stated in the U.S. Attorney's Manual. Select USAO officials also described various forms of management oversight to ensure prosecutors request restitution as appropriate. Specifically, four USAOs we selected require supervisory review of the form that prosecutors fill out when drafting criminal charging documents, which includes information on victims. Additionally, officials in all six USAOs told us that they require supervisory review of plea agreements for every case. For example, officials from two USAOs told us their office requires the Criminal Chief, the supervisor of all criminal cases, to approve documents in the plea agreement, which may include requests for restitution. USSC Has Information on Restitution Orders for 95 Percent of All Offenders Sentenced From Fiscal Years 2014 through 2016 Federal courts sent information on sentencing decisions to USSC, and USSC had information on restitution decisions for 95 percent of all offenders sentenced from fiscal years 2014 through 2016. According to our analysis of USSC data, 214,578 federal offenders were sentenced from fiscal years 2014 through 2016, and restitution was ordered for 33,158 of those offenders, or 15 percent. Collectively, courts ordered these offenders to pay $33.9 billion in restitution during this period.
Courts did not order restitution for the remaining 181,420 offenders, or 85 percent. Table 1 shows the number of federal offenders sentenced and ordered to pay restitution for fiscal years 2014 through 2016, as well as the total amount of restitution ordered by the courts. The majority of federal offenders were sentenced for immigration or drug-related offenses, and USAO officials in all six offices we selected told us that these types of offenses do not typically have victims with actual losses. For example, from fiscal years 2014 through 2016, USSC data showed that 131,088 offenders, 61 percent of offenders sentenced, were sentenced for immigration or drug-related offenses, and courts ordered 999 (or less than 1 percent) of these offenders to pay restitution. USSC data show that courts ordered restitution more often for offenders sentenced for other offenses, such as fraud. For example, courts sentenced 21,551 offenders for fraud offenses from fiscal years 2014 through 2016, and courts ordered restitution for 15,902 of these offenders, or 74 percent. Table 2 shows the number of offenders sentenced and the number ordered to pay restitution by offenses for which restitution was most often and least often ordered by courts from fiscal years 2014 through 2016. The percentage of federal offenders ordered to pay restitution varied across federal court districts, from 2 percent of offenders in one district to 42 percent in another district. USAO officials we interviewed stated that some of this variation may be due to the types of offenses prosecuted within different districts. For example, officials from one USAO stated that their office, which had a high volume of immigration-related offenders, had few cases in which restitution was applicable. Our analysis of USSC data showed that from fiscal year 2014 through fiscal year 2016 and across all districts, districts with a higher than average rate of immigration-related offenders had lower than average rates of restitution ordered. Conversely, districts with above-average rates of offenders convicted of financial offenses such as fraud, embezzlement, money laundering, tax offenses, counterfeiting, or bribery had higher than average rates of restitution ordered, as shown in table 3. Judges indicated on documents sent to USSC that restitution was not applicable and thus did not order it for most offenders sentenced from fiscal years 2014 through 2016—167,230 offenders—or 78 percent of all offenders sentenced during this time period. Our analysis of sentencing information for the remaining offenders found that courts ordered restitution at a higher rate as compared to all offenders. Specifically, after excluding offenders for whom restitution was not applicable and who were not ordered to pay it, we found that courts ordered restitution for 70 percent of the remaining 47,348 offenders. EOUSA and USAO officials told us that in cases where there are identifiable victims, restitution may not be ordered for other reasons. EOUSA officials told us that restitution may not be ordered for several reasons, such as when victims provide no proof of their losses or when victims recover compensation through other means, such as through civil proceedings. Further, officials from one USAO told us that victims must provide documentation of their losses for restitution and, if victims are not able to provide this documentation, courts may decline to order restitution.
Also, in certain cases, courts are not required to order restitution—such as when there is no identifiable victim or, on the other hand, when the number of identifiable victims is so large as to make restitution impractical, among other reasons. Additionally, the court might not order, or order only partial restitution for other reasons, such as when the value of property the defendant returned to the victim was deducted from the restitution award or because the victim received compensation from insurance. Data on Five Percent of Restitution Orders Were Incomplete If a court does not order restitution, or orders partial restitution, it is required to provide the reason for its decision and to provide that reason to USSC, but our analysis showed USSC did not always have these data. Specifically, from fiscal years 2014 through 2016, we found that restitution was not ordered—and no reason was documented in USSC data for that decision—for 9,848 offenders (5 percent of the 214,578 offenders sentenced during this time period). Information on offenders’ sentences, including restitution, assists USSC in its continuous reexamination of its guidelines and policy statements and ensures that various sentencing practices are achieving their stated purposes. Further, Standards for Internal Control in the Federal Government state that management should evaluate issues identified through monitoring activities or reported by personnel to determine whether any of the issues rise to the level of an internal control deficiency. In response to our questions about the missing information on reasons why restitution was not ordered, AOUSC and USSC officials stated that they were unaware of the missing information or why it was missing. Judiciary officials stated that because various entities within the judiciary participate in the process of collecting and recording information on reasons restitution was not ordered, they did not know which entities could take action to improve USSC data. However, as previously discussed, if the court does not order restitution, or orders only partial restitution, the judge must provide the reason, and judges usually do so in a written Statement of Reasons form. The Judicial Conference, along with USSC, has developed guidance to help judges fill out the Statement of Reasons form and AOUSC supports the Judicial Conference in carrying out its policies. Further, courts must provide USSC the written Statement of Reasons form for sentences imposed. USSC is also responsible for collecting, analyzing, and distributing information on federal sentences provided by each district court, including information related to orders for restitution. However, judicial officials, including from the entities listed above, agreed that further studying the missing data may inform the judiciary of the cause of the missing data, as well as any efforts needed to improve USSC information. Courts are required to provide reasons for not ordering restitution and to provide this information to USSC so that the agency can analyze and report on sentencing data. Determining why USSC data are incomplete could help inform the judiciary whether the issue rises to the level of an internal control deficiency and whether additional action can be taken to improve the transparency of sentencing decisions. 
Doing so could help the judiciary ensure reasons for not ordering restitution are provided consistently in all cases and potentially improve data provided to USSC, in turn supporting its mission to promote transparency in sentencing decisions. DOJ Collected $2.95 Billion in Restitution Debt from Fiscal Years 2014 through 2016, but Most Debt Remains Outstanding Due to Offenders' Inability to Pay DOJ Generally Collected More on Newer Debts, Though Not All Collected Restitution Is Disbursed Our analysis of DOJ data showed that DOJ collected $2.95 billion in restitution debt from fiscal years 2014 through 2016, half of which was collected on debts imposed during this period. The extent of collections across the 94 USAOs ranged from a high of $848 million in one USAO to a low of $1.2 million in another USAO. The median amount collected for USAOs was $10.7 million. DOJ was more successful at collecting restitution on newer debts—debts imposed from fiscal years 2014 through 2016. Of the $2.95 billion in restitution debt collected, about half was collected from new debts imposed by courts during this time period. Specifically, DOJ collected $1.5 billion (4 percent) of the $34 billion ordered from fiscal years 2014 through 2016. The remaining half of the debt collected during this time frame was collected from debts imposed between fiscal year 1988 and fiscal year 2014. New debts—imposed in fiscal years 2014 through 2016—were also more likely to be fully paid during this time period compared to all debts. Specifically, from fiscal years 2014 through 2016, DOJ collected the full amount of restitution on 4,003 of the 24,950 debts imposed during this time (16 percent). However, across all debts, including debts imposed prior to fiscal year 2014, DOJ collected the full amount of restitution ordered on only 5 percent of debts. Across all restitution debts, DOJ collected at least some of the debt for one-third of debts and did not collect any restitution on the remaining two-thirds. More than 60 percent of the restitution DOJ collected in fiscal years 2014 through 2016 was owed to non-federal victims ($1.8 billion), including individuals, corporations, and state and local governments. An additional 37 percent of restitution was collected on behalf of federal agencies that were victims of crimes. One percent of restitution collected was community restitution, which is restitution collected for drug offenses that otherwise have no victims and which is disbursed to state victim assistance agencies and state agencies dedicated to the reduction of substance abuse, as shown in table 4. AOUSC officials noted that some collected restitution is not disbursed to non-federal victims due to a lack of accurate contact information for these victims. Specifically, according to AOUSC, as of June 2017, courts held more than $132 million in restitution owed to 113,260 victims that could not be disbursed because of a lack of accurate contact information for these victims. DOJ is required to provide courts with victim contact information, and victims are to notify DOJ if their contact information changes. However, AOUSC and USAO officials told us that this notification by victims may not always occur. For example, officials in one USAO told us that due to the length of court proceedings, victims may move without notifying the court prior to the disbursement of restitution and, as a result, the court is unable to disburse restitution to those victims.
Of $110 Billion in Outstanding Debt, 91 Percent Is Uncollectible Because Offenders Have Little Ability to Pay According to our analysis of DOJ data, at the end of fiscal year 2016, $110 billion in restitution was outstanding, and USAOs had identified $100 billion of that debt as uncollectible, as shown in figure 2. USAOs may identify debts as uncollectible and suspend collection actions on a debt for a variety of reasons, including that the offender has no, or only a nominal, ability to pay the debt. Probation officials, EOUSA officials, and officials from five of six USAOs we interviewed stated that most outstanding restitution debt is identified as uncollectible and collection action is suspended because many offenders have little ability to pay the debt—a conclusion supported by USSC data. For example, according to USSC data, 95 percent of offenders ordered to pay restitution from fiscal years 2014 through 2016 received a waiver from paying a court-ordered fine, indicating their inability to pay. While courts are allowed to take an offender's economic circumstances into consideration when issuing fines, they generally may not do so when ordering restitution. As a result, EOUSA and federal probation officials with whom we spoke stated that offenders ordered to pay restitution often do not have the ability to do so, and therefore a large portion of restitution debt is uncollectible. Select USAO Officials View DOJ's Recommended Practices for Requesting, Ordering, and Collecting Restitution as Generally Effective Through various guidance documents, DOJ has identified and recommended numerous practices for DOJ prosecutors and FLU staff to use throughout the restitution process to help ensure full and timely restitution for victims. USAO officials in all six offices with whom we spoke stated that, based on their experience, these practices were generally effective. Specifically, DOJ and EOUSA officials identified practices for prosecutors and FLU staff to use when requesting restitution, facilitating court orders for restitution, and collecting restitution, and they documented these practices in several guidance manuals. Officials we interviewed from all six USAOs stated they were generally satisfied with the guidance from EOUSA and that they thought most of DOJ's recommended practices were effective when requesting restitution, facilitating court orders for restitution, and collecting restitution. Requesting restitution. Officials we interviewed from three USAOs identified coordination between prosecutors and case investigators prior to sentencing to identify victims and their losses as an important practice for requesting restitution. USAO officials from three of the six offices stated that gathering detailed information on an offender's financial resources, which include assets that could be forfeited and used to pay a restitution debt, was a very effective practice related to requesting restitution. Facilitating court orders of restitution. Although the courts, and not prosecutors, are responsible for ordering restitution, DOJ guidance identifies several practices that prosecutors can use to facilitate orders of restitution that may increase the likelihood of full and timely restitution for victims. Officials from three of six USAOs stated that the most effective practice related to ordering restitution was ensuring courts ordered restitution as due and payable immediately.
Specifically, when offenders cannot pay restitution in an immediate lump-sum payment, the courts must specify a payment schedule through which the offender will pay restitution based on the offender's ability to pay. In these cases, USAO officials stated that it is effective for prosecutors to ensure the restitution order specifies that restitution is due and payable immediately. According to an EOUSA official, this permits the agency to immediately pursue all collection remedies allowed by law whenever the debtor has or subsequently obtains the ability to pay. Collecting restitution. Officials from all six USAOs stated that using the Treasury Offset Program (TOP), a program that allows for the reduction or withholding of a debtor's federal benefits, such as a tax refund, was one of the most effective practices for collecting restitution. Specifically, officials in one USAO told us that TOP requires minimal effort for FLU staff and can result in a high amount of collections. As an example, officials from two USAOs told us their respective offices each recovered more than $500,000 in restitution debt in fiscal year 2016 through TOP. Officials from three of the six offices also identified using wage garnishment as an effective practice for collecting restitution. Across all parts of the restitution process, USAO officials we spoke with also consistently identified DOJ-recommended practices related to internal and external communication and collaboration as effective for improving the restitution process. Specifically, the officials identified collaboration between various units in the USAO as an effective practice for ensuring restitution for victims. For example, USAO officials in two of the six offices highlighted coordination between Victim-Witness coordinators and prosecutors to help identify victims and quantify their losses as effective in assisting with requests for restitution. Additionally, USAO officials in all six offices stated that strong coordination between FLU personnel and criminal prosecutors to identify an offender's financial resources and available assets was an effective practice to help ensure FLU staff could collect restitution using those resources or assets. USAO officials from five of six offices identified external communication between FLU and the federal probation office as an effective practice. Specifically, officials from these USAOs stated that FLUs coordinating with probation officers during the offender's supervision period to enforce restitution terms was an effective practice for collecting restitution. Additionally, according to EOUSA guidance, FLU staff can use outreach and training with other partners such as the probation office to facilitate information sharing on restitution collection issues, and officials from five of six USAOs told us that FLUs conducting training and outreach is a very effective practice. In addition, probation officials we interviewed in each of the six federal judicial districts we selected stated that ongoing communication between USAO staff and probation officers is effective in ensuring victims are identified and receive full and timely restitution. Probation officials from one court district emphasized the importance of a good working relationship with the USAO, stating that the probation office and USAO are better able to ensure victims and their losses are accurately identified and defendants' ability to pay is adequately addressed when working collaboratively.
A probation official from another office said that probation officers regularly coordinated with the USAO's FLU, and this coordination was particularly important on cases involving complex financial crimes, where the offender has a complicated financial portfolio. Further, probation officials from five of six probation offices also stated that attending training conducted by the FLU is a very effective practice. EOUSA and selected USAO officials told us that while these practices may be useful in some circumstances, they may not be effective or applicable in all cases or in all districts. Specifically, practices DOJ recommends may be effective when offenders have the ability to pay restitution but are simply unwilling to do so; however, USAO officials in five of six offices stated that these practices cannot mitigate the fact that many offenders lack the ability to pay restitution because they lack assets and income. Additionally, while EOUSA guidance recommends that FLU staff contact co-defendants or victims for information on the whereabouts or assets of offenders who owe restitution, officials from three USAOs told us this was not effective. According to one official, although co-defendants are sometimes eager to share information, the information is usually unreliable. USAO officials also identified some recommended practices as not applicable to their district. For example, EOUSA recommends that FLU units request Asset Investigation assistance from EOUSA for complex cases involving large amounts of valuable assets. However, USAO officials in a small, rural district with whom we spoke stated that the types of cases their office prosecutes tend not to be the type of financial cases that warrant use of this resource. DOJ Could Improve Oversight of the Collection of Federal Restitution, Including the Use of Forfeited Assets to Pay Restitution Debt DOJ Does Not Have Measures or Goals to Assess Performance in the Collection of Restitution DOJ has identified improving debt collection—including court-ordered restitution—as a major management initiative in its 2014-2018 Strategic Plan. However, it does not have any measures or goals in place to assess its performance in meeting this initiative, nor has it met the statutory requirement to evaluate its performance in seeking and recovering restitution. In 2001, we recommended that DOJ adequately measure its criminal debt collection performance against established goals to help improve collections and stem the growth in uncollected criminal debt. DOJ concurred with this recommendation, and as of fiscal year 2003, annually assessed each district based on established collection goals for that district. However, as of September 2017, DOJ no longer evaluates each district based on established goals. EOUSA officials stated that DOJ no longer uses these performance goals and that the agency did not maintain records of when or why it stopped. EOUSA officials stated that while the agency does not have any measures or goals to assess USAOs' performance in improving debt collection, including the collection of federal restitution, they are working with DOJ's Justice Management Division to develop a suite of analytical tools to monitor the collection of debt across all offices. According to DOJ, some of these analytical tools have been implemented and additional tools will be implemented by March 2018.
EOUSA officials stated that these tools will help the agency determine which cases are most likely to result in significant collections and the types and timing of enforcement actions that generate maximum debt recovery results. EOUSA officials further stated the analytical tools will allow the agency to compare districts' efforts based on a variety of factors (e.g., caseload, staff size, and enforcement actions). These analytical tools may provide EOUSA with valuable insight into the present condition of the collection of restitution across USAOs, but they will not provide DOJ with a baseline performance standard that could be used to indicate if USAOs' efforts to collect restitution debts are having a measurable impact in meeting DOJ's objective of improving debt collection. Additionally, EOUSA conducts evaluations of each USAO every 4 years, which include a review of FLU operations, but EOUSA officials stated that these reviews do not include oversight of the collection of restitution. Among other aspects of USAO operations, these internal evaluations review the extent to which each FLU is complying with statutory and DOJ requirements related to debt collection, has sufficient program resources, and adequately manages its caseload. However, DOJ and EOUSA officials told us that the department did not plan to use these internal evaluations to meet the Justice for All Reauthorization Act of 2016 requirement to evaluate each USAO on its performance in seeking and recovering restitution for victims. Specifically, the officials stated that these internal evaluations are not an appropriate mechanism to meet the law's requirements because the internal evaluations do not specifically review the seeking and recovery of restitution for victims. According to DOJ officials responsible for the internal evaluation program, these evaluations are largely intended to provide onsite management assistance and analysis of how the USAO allocates its administrative and legal personnel resources rather than to assess the office's efficacy in collecting restitution. Consistent with requirements outlined in the Government Performance and Results Act Modernization Act of 2010 (GPRAMA), performance measurement is the ongoing monitoring and reporting of program accomplishments—particularly towards pre-established, objective, and quantifiable goals—and agencies are to establish performance measures to assess progress towards those goals. While GPRAMA is applicable at the department or agency level, performance measures and goals are important management tools at all levels of an agency, including the program, project, or activity level. Agencies can use performance measurement to make various types of management decisions to improve programs and results, such as developing strategies and allocating resources, including identifying problems and taking corrective action when appropriate. Further, the Justice for All Reauthorization Act of 2016 requires DOJ to evaluate each USAO in its performance in recovering restitution for victims. DOJ and EOUSA officials told us that DOJ does not require USAOs to establish performance measures or goals to assess their progress in improving the collection of restitution. DOJ and EOUSA officials also told us that each USAO could develop performance goals but that they were unaware of the extent to which USAOs did so, and further, they do not track the extent to which USAOs met performance goals.
Additionally, these officials stated that because each USAO faces different constraints in its ability to collect restitution, establishing a uniform and consistent performance measure and goal would be challenging. EOUSA officials noted that some USAOs may have more resources, such as more FLU staff or specialized asset investigators, available to pursue collections compared to other offices, and therefore offices with fewer resources could have difficulty meeting a performance goal. Further, EOUSA and USAO officials stated that the extent to which DOJ can collect on a debt is heavily influenced by factors outside of the agency's control, such as an offender's ability to pay. USAOs could use information provided by performance measures and goals—such as an office's ability to meet a performance goal—to make managerial decisions to help address these constraints, such as by increasing the allocation of staff resources. Further, to avoid comparing USAOs to a nationally set performance goal that does not account for specific constraints faced by each office, DOJ could—as it did in fiscal year 2003—require each USAO to establish its own objective, quantitative collection goals based on historical, district-specific collection statistics. Finally, as previously discussed, each USAO already accounts for external factors that affect the collectability of a debt, such as an offender's ability to pay, by suspending collection action on debts it identifies as uncollectible. Therefore, any performance measures and goals developed could be based solely on debts that the USAO already has determined to be collectible. Stakeholders we interviewed—including officials from one USAO, probation officials in two districts, and officials with DOJ's Office for Victims of Crime—noted that receiving restitution is both emotionally and financially important to victims. Specifically, officials from one USAO and one probation office noted that while many victims may never receive the full amount of restitution ordered, receiving even a minimal amount of restitution is a symbolic victory and that it is important for victims to know the government is making efforts to collect restitution on their behalf. The legislative history of the Mandatory Victims Restitution Act (MVRA) echoes these sentiments, providing that even nominal restitution payments have benefits for the victim of crime, and that orders of restitution are largely worthless without enforcement. Yet, according to our analysis, $10 billion of restitution debt DOJ identified as collectible remained outstanding at the end of fiscal year 2016. Further, the extent to which USAOs collected restitution varied widely—from a high of one USAO district collecting nearly 350 percent of all collectible debt in fiscal years 2014 through 2016 to a low of one district collecting less than one percent of collectible debt in the same period. Without performance measures, including the establishment of goals, DOJ cannot assess if this variation is due to factors outside the control of USAOs or due to management deficiencies that require corrective action. Developing performance measures and goals for each USAO related to the collection of restitution would allow DOJ to assess its progress in achieving its major management initiative in improving debt collection—including debts owed to victims as court-ordered restitution.
Doing so would also better position DOJ to meet the requirements of the Justice for All Reauthorization Act of 2016 to evaluate offices in their performance in recovering restitution on behalf of victims and to use performance information to improve the practices of offices as needed. DOJ Could Improve Its Information on Forfeited Assets Available for Victims Although asset forfeiture and restitution are separate parts of a criminal sentence, DOJ guidance states that using forfeited assets to benefit victims is a way that DOJ can help ensure eligible victims of crime are compensated for their losses. Further, DOJ regulations and policy require that eligible victims receive compensation from forfeited assets before certain other uses, such as official use or equitable sharing. However, while DOJ tracks the amount of compensation provided to victims through forfeited assets, it does not have assurances that forfeited assets are being used to compensate victims to the greatest extent possible. According to DOJ information, the agency made payments of about $595 million from the Assets Forfeiture Fund to eligible victims other than owners of the property from fiscal years 2014 through 2016, or 15 percent of $3.9 billion in paid expenditures during this period, as shown in table 5. As table 5 shows, DOJ can account for cases in which forfeited assets were used to compensate eligible victims who were not owners or lienholders. However, DOJ does not have information on the overall universe of victims who could have been eligible to receive compensation from forfeited assets. Further, it does not have insight into any reasons why funds from forfeited assets were not used for these victims. Specifically, DOJ officials stated that the department collects information on whether victims have been identified in cases associated with forfeited assets, and if restitution is anticipated in these cases, but it does not track the extent to which these victims were ultimately compensated using forfeited assets. Further, DOJ also does not collect information on reasons why victims were not compensated using funds from forfeited assets. While DOJ is required to use forfeited assets to compensate victims before using those assets for certain other purposes, the agency is unable to provide assurances that it is always doing so because it does not have information on the overall universe of victims or reasons why victims were not compensated using forfeited assets. As a result, DOJ does not have a basis to know whether the $595 million provided to victims from fiscal years 2014 through 2016 is the maximum amount of compensation the agency could have provided to victims using forfeited assets. Full use of forfeited assets for victim compensation has long been, and continues to be, a goal of DOJ. In 2005, an interagency task force—led by DOJ and including the Department of the Treasury, the Office of Management and Budget, and AOUSC—developed a strategic plan to improve the collection of criminal debt. Among the goals included in its strategic plan, the task force identified examining how asset seizure and forfeiture procedures can be used to maximize recoveries for victims. More recently, DOJ reported in its 2014-2018 Strategic Plan that it would make every effort to recover full and fair restitution for victims using the federal forfeiture statutes to preserve and recover criminal proceeds.
Specifically, DOJ stated that using federal forfeiture statutes to recover full and fair restitution for victims is one part of its strategy to protect the rights of the American people and enforce the rule of law. Finally, DOJ officials told us they considered providing compensation to victims as one goal of the Asset Forfeiture Program, and EOUSA stated in guidance that asset forfeiture is the most widely available and effective tool to seize assets for restitution purposes. Standards for Internal Control in the Federal Government call on federal managers to design control activities to achieve the agency's objectives. These controls can include using quality information to make informed decisions, evaluate the entity's performance in achieving key objectives, and address risks. DOJ officials told us that they do not track the extent to which victims were not compensated using forfeited assets because USAO staff are not required to request that these assets be used for victim compensation. DOJ officials explained that staff are required to indicate in the agency's forfeited asset database, the Consolidated Asset Tracking System, if victims exist in cases associated with forfeited assets and if restitution is anticipated in these cases. However, these officials stated that staff are not required to then compensate these victims using the forfeited assets or to indicate why these assets were not used for this purpose. DOJ officials told us that decisions to compensate victims using forfeited assets are best left to the judgment of the USAO staff familiar with the case, such as the prosecuting attorney or asset forfeiture staff. DOJ officials pointed to informal communication and coordination among prosecutors, the FLU, and the Asset Forfeiture unit in each USAO as a means to provide compensation to victims as appropriate. However, communication and coordination among these groups have been a challenge for USAOs, as the DOJ Inspector General found in a June 2015 review of DOJ's debt collection program. Similarly, during our current review, EOUSA and USAO officials we spoke with identified communication and coordination as an area for improvement. EOUSA officials told us that while they thought that FLU staff and Asset Forfeiture unit staff were collaborating more frequently to use forfeited assets to collect restitution debts since the issuance of the DOJ Inspector General's report, the extent of collaboration between these two units still varied across USAOs. Further, officials we talked to in two USAOs and one probation office noted that USAO staff could improve their use of forfeited assets for restitution payments. For example, officials in one probation office noted that it was their practice to identify forfeited assets that could be used for compensation in the PSR because they had observed that USAO staff were frequently not applying such assets to victim compensation. While DOJ may allow USAO staff to use discretion when requesting restoration or alerting victims to assets available for compensation, increasing the agency's understanding of the extent to which assets could have been—but were not—used for victim compensation, and the reasons for those decisions, does not affect that discretion. There are legitimate reasons why victims might not be compensated using forfeited assets; for example, the assets may have other owners or lienholders that must be compensated prior to victims, or offenders may have other means by which to pay victims restitution.
However, there are also instances where victims may not have received compensation through forfeited assets as a result of unintentional circumstances. For example, according to DOJ's Asset Forfeiture Manual, forfeiture actions can proceed faster than the parallel criminal case. Consequently, assets might be equitably shared, placed into official use, or remitted to victims who file petitions long before restitution is ordered, and therefore would not be available for other victims who wait for restitution to be ordered after an offender is sentenced. To avoid this outcome, DOJ recommends that USAOs coordinate to ensure the retention of property for victim compensation. However, although DOJ officials responsible for leading DOJ's asset forfeiture efforts highlighted the need for expedient coordination when USAO staff are considering using forfeited assets to compensate victims, they stated this may not always occur. As a result, otherwise eligible victims may not always be compensated through forfeited assets. By gathering information about the extent to which assets were used for victim compensation—including when they were not used and reasons why not—DOJ could have a better understanding of potential instances where victims could be, but are not, receiving compensation through forfeited funds and could take steps to address them accordingly. Options for gathering such information could include conducting a one-time retrospective study of forfeited assets with victims or anticipated restitution to determine the extent to which assets were used for victim compensation, or creating a tracking mechanism through its forfeited assets database or another system. Gathering information on the extent to which forfeited assets were used for victim compensation, including when they were not used and reasons why not, could position DOJ to take action to increase the use of these assets for victim compensation if warranted. These actions could include providing funds for increased asset forfeiture staff in USAOs, providing additional training, or changing policies or procedures for using forfeited assets to compensate victims. Fully and systematically understanding the extent to which issues, such as a lack of coordination within USAOs, result in victims not being compensated using forfeited assets would give DOJ a basis upon which to develop improvements to the Asset Forfeiture Program. Such information would also provide DOJ and staff at all USAOs with a means to evaluate the department's performance in achieving one of the goals of the Asset Forfeiture Program and to take action to meet the agency's goal of protecting the rights of the American people—including the right to full and fair restitution for victims.

Conclusions

Restitution serves the criminal justice goal of holding offenders accountable and, to the extent possible, restoring victims of federal crimes to the position they would have occupied had the crime not occurred. Many victims are unlikely to receive any meaningful portion of court-ordered restitution owed to them because of offenders' inability to pay these debts. However, the fact that restitution is difficult to collect does not negate the important responsibilities of the judiciary and DOJ to properly manage and oversee all aspects of the restitution process. By law, courts are to state why they did not order restitution and provide that information to USSC. While this information was collected and recorded in USSC data for most offenders, we found that it was missing for thousands of offenders.
It is important for the judiciary to ensure that this information is consistently collected and recorded to assist USSC in its continuous re-examination of its guidelines and policy statements and to ensure that various sentencing practices are achieving their stated purposes. The judiciary could support USSC in this endeavor by determining why this information is missing. Results from this study could help inform the judiciary whether this issue rises to the level of an internal control deficiency and whether additional action can be taken to improve the transparency of sentencing decisions. While DOJ has delegated collection activities for restitution to USAOs, it could provide better oversight to ensure it is making reasonable efforts to collect restitution and meeting its responsibility to victims. USAOs have identified a significant portion of outstanding restitution debt as uncollectible, but they have also identified $10 billion of outstanding restitution debt that could be collected. Developing and implementing performance measures and goals for each USAO would allow DOJ to gauge USAOs' success in collecting this restitution and, by extension, the department's success in achieving its major management initiative to increase the collection of debt. Further, DOJ could use performance information to improve the practices of offices in seeking and recovering restitution, consistent with a requirement in the Justice for All Reauthorization Act of 2016. Finally, DOJ could gain greater visibility into the use of forfeited assets to compensate victims by gathering information on cases in which victims have been identified and restitution is anticipated but forfeited assets are not used, and any reasons why. Doing so would better position DOJ to take action to increase the use of forfeited assets to compensate eligible victims if warranted and to provide assurance that it is maximizing the use of asset forfeiture, one of the agency's most effective mechanisms for satisfying restitution debts.

Recommendations for Executive Action

We are making three recommendations: one to the judiciary and two to DOJ. Specifically:

Judiciary officials, including AOUSC, USSC, and the Judicial Conference, should determine why USSC data on the reasons restitution was not ordered are incomplete. Additionally, if warranted based on this information, judiciary officials should take action to ensure USSC data records include all required information for orders of restitution. (Recommendation 1)

To improve oversight of the collection of restitution, we recommend that the Attorney General:

Develop and implement performance measures and goals for each USAO related to the collection of restitution, and measure progress toward meeting those goals. (Recommendation 2)

In cases where forfeited assets were not used to compensate victims, gather information on the reasons why. If warranted based on this information, take action to increase the use of forfeited assets to compensate eligible victims. (Recommendation 3)

Agency Comments

We provided a draft of this report for review and comment to DOJ, the Judicial Conference of the United States, AOUSC, USSC, and the Federal Judicial Center. DOJ concurred with our recommendations and provided technical comments, which we incorporated as appropriate. AOUSC provided written comments, which are reproduced in appendix III.
In its written comments, AOUSC noted that it would work with USSC to address our recommendation. We are sending copies of this report to the appropriate congressional committees, the Attorney General, the Judicial Conference of the United States, the Director of AOUSC, the Staff Director of USSC, the Federal Judicial Center, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or goodwing@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions are listed in appendix IV.

Appendix I: Number and Percentage of Federal Offenders Ordered to Pay Restitution, Fiscal Years 2014 through 2016

According to our analysis of data from the U.S. Sentencing Commission (USSC), 214,578 federal offenders were sentenced from fiscal years 2014 through 2016. Table 6 shows the number of offenders sentenced and the number and percentage of offenders ordered to pay restitution for each primary offense of conviction in fiscal years 2014 through 2016.

Appendix II: Views on DOJ-Recommended Restitution Practices from Officials in Selected U.S. Attorneys' Offices

The Department of Justice (DOJ) has identified and recommended numerous practices for federal prosecutors and Financial Litigation Unit (FLU) staff to use throughout the restitution process through various guidance documents. We conducted semi-structured interviews with officials from six U.S. Attorneys' Offices (USAO) to obtain their views on the restitution process and the extent to which they believed DOJ-recommended practices related to the restitution process were effective. In particular, we spoke with USAO officials from the District of Connecticut; the Southern District of California; the District of New Jersey; the Southern District of Ohio; the District of South Dakota; and the District of Wyoming. Tables 7 through 9 show the results of our semi-structured interviews. In particular, table 7 shows practices related to requesting restitution and the extent to which USAO officials found these practices effective. Table 8 shows practices related to facilitating orders of restitution and the extent to which USAO officials found these practices effective. Table 9 shows practices related to collecting restitution and the extent to which USAO officials found these practices effective. Each table also indicates practices that officials we interviewed considered most important or effective for helping ensure victims receive full and timely restitution.

Appendix III: Comments from the Administrative Office of the U.S. Courts

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Chris Ferencik (Assistant Director); Kathleen Donovan (Analyst-in-Charge); Enyinnaya David Aja; David Alexander; Lacinda Ayers; Carla Brown; Emily Hutz; Janet Temko-Blinder; and Adam Vogt made key contributions to this report.
Why GAO Did This Study

One of the goals of federal criminal restitution is to restore victims of federal crimes to the position they occupied before the crime was committed by providing compensation. Various entities within the federal government are involved in the process of requesting, ordering, and collecting restitution for crime victims, including DOJ and the judiciary. The Justice for All Reauthorization Act of 2016 includes a provision for GAO to review the federal criminal restitution process for fiscal years 2014 through 2016. This report addresses, among other things: (1) the extent to which information is available on restitution requested by DOJ and ordered by courts; (2) the amount of restitution debt DOJ collected and the amount that remains outstanding; and (3) the extent to which DOJ has conducted oversight of the collection of restitution. GAO analyzed laws, policies, and procedures, as well as USSC data on restitution orders and DOJ data on restitution collected from fiscal years 2014 through 2016. GAO also selected a non-generalizable sample of six federal judicial districts based on restitution collections and spoke with USAO officials and federal probation officers.

What GAO Found

Officials from selected U.S. Attorneys' Offices (USAO) stated that they document requests for restitution in case files and employ other internal controls, such as the use of templates and forms, throughout the prosecution process to ensure that prosecutors request restitution as appropriate. GAO's analysis of U.S. Sentencing Commission (USSC) data—an agency within the judiciary—showed that information on restitution orders was available for 95 percent of all offenders sentenced from fiscal years 2014 through 2016. Specifically, 214,578 federal offenders were sentenced during this time period, and restitution was ordered for 33,158, or 15 percent, of those offenders. Collectively, courts ordered these offenders to pay $33.9 billion in restitution. Most federal offenders sentenced during these years were sentenced for immigration or drug-related offenses. In interviews, USAO officials stated that these offenses do not typically have victims requiring restitution. GAO found that data on reasons why restitution was not ordered were incomplete for 5 percent of all offenders sentenced from fiscal years 2014 through 2016. Determining why data on restitution orders are incomplete may inform the judiciary of the cause of the incomplete data and any efforts needed to improve USSC data. GAO's analysis of Department of Justice (DOJ) data showed that USAOs collected $2.95 billion in restitution debt in fiscal years 2014 through 2016 (see figure below). However, at the end of fiscal year 2016, $110 billion in previously ordered restitution remained outstanding, and USAOs identified $100 billion of that outstanding debt as uncollectible due to offenders' inability to pay. DOJ identified improving debt collection—including restitution—as a major management initiative in its 2014-2018 Strategic Plan. While DOJ is developing analytical tools to monitor the collection of restitution, it has not established performance measures or goals. Performance measures and goals would allow DOJ to gauge USAOs' success in collecting restitution and, by extension, the department's success in achieving a major management initiative.

What GAO Recommends

GAO is making three recommendations. GAO is making one to the judiciary to determine why data on restitution orders are incomplete.
GAO is making two recommendations to DOJ, including one to implement performance measures and goals for the collection of restitution. The judiciary and DOJ concurred with the recommendations.
Background

NIH Institutes and Centers and Biomedical Research

NIH, which had total budgetary resources of $32 billion in fiscal year 2016, comprises the Office of the Director and 27 institutes and centers that focus on specific diseases, particular organs, or stages in life, such as childhood or old age. As the central office at NIH, the Office of the Director establishes agency policy and is responsible for overseeing the institutes and centers to ensure that they operate in accordance with NIH's policies. The institutes and centers accomplish their missions primarily through extramural research programs. Most extramural research funding is provided for investigator-initiated research projects for which researchers, through their institutions, submit applications in response to NIH announcements. In addition to these announcements, the institutes and centers may issue more narrowly scoped solicitations, through requests for proposals, for research targeting specific areas. All extramural research project applications are to follow NIH's process of peer review, which includes two sequential levels of review. The first level involves non-governmental experts assessing the scientific merit of the proposed applications and assigning them a priority score. The second level involves advisory councils at the institute or center associated with the grant application, which, in addition to scientific merit, consider the institute's or center's mission, strategic plan goals, and public health needs. Advisory councils review grant applications and their scores and, based on this review, make recommendations about which grant applications should be awarded funding. The director of each institute or center makes the final extramural funding decisions. NIH investigators also conduct research through NIH's intramural research program. These efforts accounted for approximately 10 percent of NIH's total budgetary resources of $32 billion in fiscal year 2016. NIH employs about 3,600 investigators working in its own laboratories and clinics. In addition, this research relies on another 6,000 investigators at various stages of research training who come to NIH for a few years to work as non-employee trainees, including about 2,500 who are postdoctoral fellows. According to NIH officials, intramural investigators are generally not allowed to apply for extramural or private grants, because their salaries are funded with the agency's appropriations.

Career Path of Independent Extramural Investigators

The career path to becoming an independent extramural investigator generally consists of completing graduate-level education (i.e., a research doctorate or clinical doctorate), postdoctoral research, or a medical residency. When postdoctoral research is completed, the researcher will generally seek opportunities to become an investigator at a medical research center or a faculty member at a university and begin the process of obtaining academic tenure—that is, a full-time, permanent faculty position. Once the postdoctoral researcher becomes a faculty member, he or she can generally begin applying for large NIH research project grants. Some researchers may become affiliated with other types of research institutions and also apply for grants. Investigators in medical research centers and university faculty are generally dependent on external funding to cover the cost of their research.
Although biomedical investigators may be funded by other federal agencies—such as the National Science Foundation—and nonfederal sources, studies have shown that NIH is the most likely source of government funding for biomedical research.

NIH Grants

NIH's research support for extramural investigators includes research project grants, fellowships, training grants, and career development grants. Some of the main funding mechanisms provided to institutions by NIH that fund investigators beginning their research careers include the following extramural grants:

Large grants. NIH awards large renewable research project grants: R01 and R01-equivalent (R01e) grants. According to NIH, in fiscal year 2016, the average size of large grants typically exceeded $460,000 in total. R01 and R01e grants are NIH's most common type of grant, according to NIH. They are generally the largest type of grant available to investigators beginning their careers and, for purposes of this report, are therefore referred to as "large" grants. Large grants provide 3 to 5 years of financial support for discrete, specified research projects. According to NIH, it is generally expected that within that period a project can be completed, results published, and sufficient time will remain for the investigator to prepare a subsequent application for a renewal or new award before funding ends.

Smaller grants. While some non-R01-equivalent (non-R01e) grants may match or exceed the amount of some R01e grants, they are generally of a lesser amount and, for purposes of this report, are therefore referred to as "smaller" grants. According to NIH, in fiscal year 2016, smaller grants averaged from about $61,000 to about $1.1 million in total. These grants provide limited funding for a relatively short period of time to support a variety of exploratory or developmental projects, including pilot or feasibility studies, collection of preliminary data, and secondary analysis of existing data.

Career development grants. Also known as K-series grants, these grants are intended to provide mentored research opportunities and career enhancement experiences to support investigators or postdoctoral fellows at various stages of their research careers. NIH's data show that in fiscal year 2016, career development grants averaged about $178,000 in total.

Extramural Investigator Career Status

NIH generally classifies the career status of an extramural investigator based on whether the investigator has received a large NIH research grant. NIH considers early career investigators to be those who meet the definition of early stage and intermediate stage investigators. NIH also recognizes established and "other" investigators among those who apply for research grants. Table 1 lists NIH extramural investigators' career stages and descriptions of these stages. According to NIH, it generally takes an early stage investigator up to 2 years to develop a successful application for a large grant and receive funding. Typically, investigators devote between 6 months and 1 year to writing their first large NIH grant application. Most of these grants, with a funding period of over 3 years, require significant preliminary data to support the proposed hypothesis contained in the application. In addition, the median time elapsed for applicants to learn whether they have been awarded a grant is 270 days, or 9 months.
According to NIH, because most investigators beginning their careers do not receive large NIH research grants on their first attempt, these investigators might apply for smaller grants. They may also apply for career development grants that are intended to provide mentored research or training opportunities.

Concerns Regarding the Stability and Diversity of the Biomedical Research Workforce

According to research by the National Academies of Sciences, Engineering, and Medicine, and others, the biomedical research workforce is growing older at a rate that is disproportionate to the general American labor force. Some stakeholders in the scientific community have voiced concerns that large NIH research grants that can launch early career investigators are often being awarded to established investigators rather than early stage and intermediate stage investigators. For example, a recent National Academies report pointed out that between 1998 and 2003, the NIH budget grew from $13 billion to $27 billion, but the percentage of grants awarded to investigators who were in the early stages of their careers steadily declined. Many in the field have reported on the need to support investigators who are researching varied biomedical issues in order to maximize the number of new discoveries. Further, stakeholders within the scientific research community have reported on the uncertain path that investigators may encounter early in their careers and the prospect that they will ultimately pursue other career options. Several reports have found that certain racial and ethnic groups are underrepresented in the biomedical research workforce and in science. These reports have also provided data on gender workforce disparities. For example, a 2011 publication by the National Academies of Sciences, Engineering, and Medicine showed that, in 2006, underrepresented minorities made up about 29 percent of the U.S. population but, in 2007, were awarded about 5 percent of science and engineering doctorates. Other studies have shown significant research funding disparities for investigators from underrepresented groups who apply to NIH for large research grants, such as R01 grants. In 2011, NIH funded a study that examined the association between grant recipients and the applicants' race and ethnicity. The study found that R01 applicants who self-identified as African American were 13 percentage points less likely than white applicants to receive these grants. After controlling for other variables—including educational background, training, previous research grants, and publication record—African American applicants were 10 percentage points less likely to be awarded such a grant than white applicants. Further, while women comprise about half of the postdoctoral graduates in the biological sciences in the United States, studies have shown a disparity in the number of female investigators in senior science research positions at universities. This disparity may result in a smaller number of female investigators among NIH grant applicants and may further contribute to their underrepresentation in certain facets of science. However, we previously reported that once female investigators apply for NIH grants, their likelihood of receiving NIH grants is the same as that of their male counterparts.
NIH Has Promoted Efforts to Support Early and Intermediate Stage Investigators, but Those Who Have Not Yet Received a Large NIH Research Grant Remain Less Competitive

NIH Has Promoted Programs and Policies to Support Early and Intermediate Stage Extramural Investigators, but It Is Too Early to Assess Its Most Recent Initiative

Over the last 10 years, NIH has introduced programs and policies to support extramural investigators competing for their first large NIH research grant that leads to research independence. NIH developed certain programs to fund extramural researchers with the goal of stabilizing the biomedical research workforce. These targeted programs were intended to promote support for extramural investigators who had not yet received a large NIH research grant. The various programs include both large and smaller research grants, career development grants, and student loan repayments. Of particular note are the NIH Director's New Innovator Award, which is intended to support investigators beginning their research careers with reviewer-determined highly novel research, and the Director's Early Independence Award, which is intended to support reviewer-determined exceptional investigators who wish to pursue independent research directly, forgoing the traditional postdoctoral training period. In addition, the Pathway to Independence Award provides investigators beginning their research careers with a mentored research experience, which may lead to independent research positions. Some institutes and centers have established their own programs to support investigators beginning their research careers. For example, a subset of the National Institute of General Medical Sciences' "Maximizing Investigators' Research Award program" targets funding for laboratories led by an early stage investigator. In addition, the National Institute of Arthritis and Musculoskeletal and Skin Diseases' "Supplements to Advance Research from Projects to Programs" supports intermediate stage investigators by providing supplemental funding to existing research projects to encourage broader innovation and exploration of high-risk ideas. In addition, NIH's Loan Repayment Program (LRP) is designed to help recruit and retain highly qualified individuals into biomedical research careers. This program provides student loan repayments in return for a commitment to engage in NIH mission-relevant and certain statutorily defined approved research. We examined the funding rates of early stage and intermediate stage extramural and intramural investigators who applied for both initial and renewal LRP payments.

LRP payments to extramural investigators: The LRP funding rate (awardees/applicants) for extramural investigators applying for total (both initial and renewal) payments from fiscal years 2013 through 2017 was about 50 percent. During this period, 8,186 extramural investigators applied for initial LRP payments and 3,206 received them; 5,131 extramural investigators applied for renewal payments and 3,426 received them. Therefore, the funding rates were 39 percent for initial applicants and 67 percent for renewal applicants. Early stage and intermediate stage investigators had similar funding rates in receiving LRP payments during the 5-year period, though there was some variation each year. Early stage and intermediate stage investigators seeking initial LRP payments had funding rates of about 40 percent and 35 percent, respectively. Both of these categories of investigators seeking renewal LRP payments had a funding rate of 67 percent.
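For reference, the extramural percentages above follow directly from the counts reported; the display below is an illustrative restatement of the funding-rate definition used throughout this report (the arithmetic and rounding are ours, not NIH's):

\[
\text{funding rate} = \frac{\text{awardees}}{\text{applicants}}, \qquad
\frac{3{,}206}{8{,}186} \approx 39\% \ \text{(initial)}, \qquad
\frac{3{,}426}{5{,}131} \approx 67\% \ \text{(renewal)},
\]
\[
\frac{3{,}206 + 3{,}426}{8{,}186 + 5{,}131} = \frac{6{,}632}{13{,}317} \approx 50\% \ \text{(total)}.
\]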
LRP payments to intramural investigators: The LRP funding rate (awardees/applicants) for intramural investigators applying for total (both initial and renewal) LRP payments from fiscal years 2013 through 2017 was about 87 percent; 397 intramural investigators applied for both initial and renewal LRP payments, and NIH funded 345 of the applicants. The funding rate for applicants seeking initial LRP payments during this 5-year period was about 83 percent, whereas the funding rate for those applying for renewal LRP payments was 90 percent.

NIH also implemented policies to improve opportunities for early and intermediate stage extramural investigators. For example, to address the concerns about established investigators receiving a disproportionate share of research funds, NIH established its Early Stage Investigator Priority Policy in 2008. The policy specified that early stage investigator status would be considered a factor when applications were being selected for award. Studies have shown that under the Early Stage Investigator Priority Policy, the number of grants awarded to early stage investigators stopped declining and remained flat for several years. The studies also showed that the field of biomedical research continued to be very competitive for early stage investigators. However, some have expressed concern that these accomplishments are not sufficient. For example, according to a recent report by the National Academies of Sciences, Engineering, and Medicine, a variety of steps have been taken over the years to address the challenges facing early and intermediate stage investigators, but these efforts have not resolved the underlying problems that make it difficult for them to establish their careers. More recently, the Cures Act required that NIH implement the Next Generation Researchers Initiative (NGRI), which the agency established in August 2017. NIH's Office of the Director, which oversees the initiative and its implementation, directed the NIH institutes and centers to reprioritize large NIH research grant support for early stage and intermediate stage investigators. The policy's stated goal for fiscal year 2017 was to increase the number of large NIH research grants provided to both early stage investigators and intermediate stage investigators by 200 grants each compared to the number awarded in fiscal year 2016. These 400 grants would redirect approximately $210 million from NIH's base budget to support additional early career investigators in the first year of NGRI's implementation. However, with only one month to implement the policy, NIH did not meet this goal. From fiscal year 2016 to fiscal year 2017, the number of large NIH research grants awarded increased by 57 for early stage investigators and decreased by 2 for intermediate stage investigators. Similarly, the goal to increase funding for the additional 400 grants was not met; funding increased by about $107 million during this period. Given that this initiative is in its early stages and its goals were set late in fiscal year 2017, it is too early to fully assess the impact of this effort. According to NIH officials, the agency is in the process of reevaluating which investigators should be the focus of the NGRI and may revise the program to include investigators whose careers are more advanced. NIH officials stated that the NGRI policy's intention to direct more research funding to early stage investigators will remain in place.
However, NIH’s NGRI Working Group no longer designates intermediate stage investigators—or what it calls “early established investigators”—as a distinct group. NIH’s current definition—that of being within 10 years of receiving a first large NIH research grant as an early stage investigator— includes investigators who could have completed their graduate level education (i.e., research doctorate or clinical doctorate), postdoctoral research, or medical residency between 15 and 20 years ago. According to NIH officials, NIH’s working group is considering broadening this definition even further. It is concerned that intermediate stage investigators, facing increasing pressure to secure additional sources of research funding to prevent the closure of their laboratories if their first large NIH research grant is not renewed, could lose all NIH support and become likely to leave the biomedical research workforce. Therefore, the working group is considering a different approach for all established investigators, with a focus on all meritorious investigators (regardless of career stage) who are doing high quality research, yet are still at risk for losing all NIH funding. Specifically, NIH officials said the working group plans to reevaluate ways that it can provide additional, prioritized support to these investigators in order to further their career trajectories. The working group may recommend to NIH that the NGRI be expanded to also target support for certain investigators whose careers are in more advanced stages, rather than just those in the early stages of their careers. In addition, NIH has not yet implemented the expansion of its LRP as directed by the Cures Act. The Cures Act amended the LRP by increasing the eligible annual loan repayment amount from a maximum of $35,000 to a maximum of $50,000. The act also gave the NIH Director the discretion to amend the research categories that are eligible for intramural or extramural loan repayment based on emerging scientific priorities or workforce needs. The agency has established a working group to provide recommendations to the NIH Director regarding any suggested structural changes and associated timelines for implementation. NIH officials told us that they are awaiting recommendations from this working group on how to use the agency’s new authorities. They said that they expect to implement program changes to the LRP, as permitted by the Cures Act, by fiscal year 2020. Investigators Who Had Received at Least One Large NIH Grant Had Higher Funding Rates for All Grant Types Compared to Those Who Had Not Our analysis shows that intermediate stage investigators are more successful at competing for grants than early stage investigators. Our examination of the trends of NIH grant data showed that the applicant funding rates (awardees/applicants) for investigators who had previously received an initial large NIH research grant was greater than the applicant funding rates for investigators who had never received such a grant. We analyzed 5 years of grant data to determine an overall perspective of funding rates from fiscal years 2013 through 2017. We found that intermediate stage and established investigators—groups comprised of investigators who had already received their first large grant award—had greater applicant funding rates for all three grant types compared to early stage and other investigators. 
For example, we found that in fiscal year 2017, the most recent year for which data were available, intermediate stage investigators had funding rates that were comparable to those of established investigators. Investigators who had not yet been awarded their first large NIH research grant—early stage investigators and other investigators—were not as successful when competing for large NIH research grants, small grants, or career development grants. (See table 2.) We also found that over time—from fiscal years 2013 through 2017—intermediate stage investigators and established investigators had greater applicant funding rates for all three grant types compared to early stage and other investigators. Of the investigators who had not yet been awarded their first large NIH research grant, early stage investigators were more successful in competing for NIH grants than the other investigators who were outside of the 10-year period of having completed their graduate-level education (i.e., research doctorate or clinical doctorate), postdoctoral research, or medical residency. For instance, we found that early stage investigator funding rates ranged from about 5 to 11 percentage points lower than those of intermediate stage or established investigators for each of the five fiscal years examined. Similarly, other investigator funding rates ranged from about 12 to 14 percentage points lower than those of intermediate stage or established investigators for each of the five fiscal years examined. (See fig. 1.) Finally, we found that during this 5-year period, two of the four extramural investigator groups were more likely to receive large, small, and career development grants than the other two groups. Specifically, investigators beginning their research careers—the early stage and intermediate stage investigators—were more likely to receive these grants. Although early stage investigators were more likely than intermediate stage investigators to apply for smaller research grants (about 4,500 applicants compared to about 2,000 applicants, respectively) and career development grants (about 2,000 applicants compared to about 50 applicants, respectively), intermediate stage investigators were still more successful in competing for these grants, as well as for the large NIH research grants. For more information on the trends in the number of grants awarded to early stage and intermediate stage investigators, by award type, for fiscal years 2013 through 2017, see appendix I.

NIH Has Taken Steps to Support a Diverse Scientific Workforce, but Disparities Persist and Its Diversity Efforts Have Not Been Fully Evaluated

NIH Established Working Groups and Programs to Support Investigators from Underrepresented Groups

Over the last 7 years, NIH has established advisory groups and other programs to determine how best to support extramural and intramural investigators from underrepresented groups. NIH's Working Group on Diversity in the Biomedical Research Workforce was established in response to the 2011 NIH study that examined the association between R01 grant recipients and the applicants' race and ethnicity. NIH directed the group to provide recommendations to improve retention of underrepresented minorities, the disabled, and scientists from disadvantaged backgrounds. In June 2012, the working group issued 13 recommendations, which, we found, NIH uses as the foundation of some NIH-wide efforts to diversify the extramural and intramural biomedical research workforce.
Other advisory groups that have examined or are currently examining related topics include the following:

NIH Working Group on Women in Biomedical Careers, established in 2007 in response to a report from the National Academies of Sciences, Engineering, and Medicine on the barriers women in biomedical science experience in advancing their careers. It held a workshop and issued a report in 2008 on best practices for sustaining the careers of women in biomedical research.

Addressing Gender Inequality in the NIH Intramural Research Program Action Task Force, established in 2016 in response to data showing women are underrepresented in top NIH research positions. It produced recommendations in 2017 aimed at ensuring that female and male investigators have equal opportunities in the intramural research program at NIH, among other things.

African-American/Black R01 Funding Disparities Working Group, established in response to the 2011 NIH study that found a funding disparity between blacks and whites applying for R01 grants. This group analyzed data on the funding rates of applicants who self-identify as African American or black compared to other racial groups.

NIH has acted on some of the advisory groups' recommendations. For example, in response to recommendations made by the Diversity in the Biomedical Research Workforce advisory group, the agency hired a Chief Officer of Scientific Workforce Diversity in 2014; implemented the three-tiered Diversity Program Consortium, which includes the Building Infrastructure Leading to Diversity program, the National Research Mentoring Network, and the Coordination and Evaluation Center; and established a permanent advisory group on diversity. NIH also developed a "toolkit" that includes training modules to educate intramural investigator search committee members on biases that can lead to a less diverse workforce, among other things. In fiscal year 2017, NIH created an Equity Committee to address recommendations made by the Addressing Gender Inequality in the NIH Intramural Research Program Action Task Force to further examine concerns about parity between male and female intramural investigators and other diversity issues. Other NIH-wide policies and programs may also help to attract, retain, and develop investigators from underrepresented groups. The 24 NIH institutes and centers that fund research, as well as the Office of the Director, provide funds for their investigators, called research supplements, to recruit graduate students, postdoctoral fellows, and others from underrepresented racial and ethnic groups, as well as those with disabilities and from economically disadvantaged backgrounds. These funds provide graduate students, postdoctoral fellows, and others an opportunity to conduct research and be mentored by an investigator supported by the specific NIH institute, center, or office. Some stakeholders we interviewed said that the agency's LRP may also help to retain investigators from underrepresented groups, noting that the student loan debt of African American or black graduate students is higher than that of white graduate students. Physicians from a professional organization we interviewed said that the LRP helps to attract physician scientists from underrepresented groups into research careers; they stressed the importance of the LRP in attracting physician scientists because these scientists often have significant medical school debt.
Our analysis of extramural LRP data showed that, in 2017, African American or black, non-Hispanic applicants had a funding rate of about 34 percent for receiving an LRP payment, while white, non-Hispanic applicants had a funding rate of about 52 percent. More recently, the National Academies of Sciences, Engineering, and Medicine recommended that NIH make the LRP available to all individuals pursuing biomedical physician-scientist researcher careers, regardless of their research area or clinical specialty. They also suggested NIH increase the monetary value of loan repayment to reflect the debt burden of current medical trainees. Some stakeholders said that NIH's family-friendly policies, such as reimbursement for child care expenses and parental leave, may also help address work-life balance issues for female investigators who may otherwise forgo some research duties to care for young children. Additionally, many—at least 17 of 27—of NIH's institutes and centers have established their own policies and programs to attract, retain, and develop investigators from underrepresented groups. For example, the National Cancer Institute initiated the Continuing Umbrella of Research Experiences program to provide training and career development opportunities to enhance and increase diversity in the cancer research workforce. This program offers research opportunities and development to future and current scientists from underrepresented groups, from middle school students to investigators who have yet to achieve research independence.

NIH Research Funding and Workforce Data Show that Disparities Persist for Underrepresented Groups

Although NIH has implemented numerous diversity-related efforts, our analysis of NIH research grant funding and intramural workforce data from fiscal years 2013 through 2017 shows that some disparities persist for investigators from underrepresented racial and ethnic groups and for female investigators.

NIH Research Grant Applicants

Our analysis of NIH data shows that investigators from underrepresented racial and ethnic groups comprise a small percentage of applicants. For example, in fiscal year 2017, applicants from underrepresented racial groups—that is, American Indian or Alaska Native, African American or black, and Native Hawaiian or Pacific Islander—were 0.2 percent, 1.8 percent, and 0.1 percent, respectively, of all applicants for large NIH research grants. Applicants from underrepresented ethnic groups—Hispanics or Latinos—comprised 4.3 percent of the applicants for large NIH research grants. (See table 3.) In contrast, white applicants were about 64 percent of all applicants for large NIH grants in fiscal year 2017. Investigators from underrepresented racial and ethnic groups also comprise a smaller number of applicants than other groups for smaller NIH grants and career development grants. Among grant applicants from underrepresented racial groups, African American or black applicants were consistently the largest group represented. For example, in 2017, among underrepresented racial groups, African American or black applicants were named as investigators on about 88 percent of applications for large NIH research grants, about 89 percent of applications for smaller NIH grants, and about 92 percent of career development grant applications. Hispanics and Latinos were about 5 percent of applicants for smaller NIH grants and about 6 percent of applicants for career development grants in 2017.
According to data published by the National Science Foundation in 2017, women represent slightly more than half of all doctorates in the biological sciences. However, from 2013 through 2017, women represented less than one-quarter of all tenured NIH intramural investigators. For example, in 2017, 191 (23 percent) of NIH's 822 tenured intramural investigators were women. In addition, in 2017, 79 (37 percent) of NIH's 211 tenure-track intramural investigators were women. Further, in fiscal years 2013 through 2017, nearly one-third of all extramural investigators who applied for large grants were women. (See table 4.) Nearly one-third of all applicants for smaller research grants, and close to half of all applicants for NIH career development grants, were women. (See app. II for information on the number of smaller and career development grant applicants by racial and ethnic groups and gender.) Stakeholders from 8 of the 12 entities we interviewed suggested potential reasons why the numbers of NIH research grant applicants from underrepresented racial and ethnic groups and from women may be limited. Attrition of biomedical science doctoral students and early career investigators from these groups is one explanation. Some stakeholders said that, while in graduate school, students from these groups may be discouraged from pursuing a biomedical research career as a result of implicit bias that they encountered with their mentors. Some stakeholders said lower numbers of women investigators are the result of decisions by some to start a family in the early stages of their careers, and further noted the difficulty of re-entering the biomedical research workforce. In addition, some stakeholders said that students from underrepresented groups may lack exposure to a sufficiently rigorous education in mathematics or the sciences prior to entering college, resulting in low numbers of biomedical researchers from these groups. Others said the low numbers of investigators from these groups make studying this issue difficult due to small sample sizes. Additional administrative demands placed on individuals who pursue careers as investigators also affect the number of applicants. For example, some stakeholders said that once investigators from an underrepresented group attain faculty positions—particularly if there are few faculty members from such groups—they are frequently tasked with additional administrative duties. We were told that, often, they are selected because they may be one of a handful of members of underrepresented groups at some institutions. Their additional duties include participation on institutional committees as well as mentoring, particularly of undergraduate or graduate students from underrepresented groups. In addition, representatives of one stakeholder group said that some research faculty from underrepresented groups feel additional pressure to participate in such activities because their absence would be more apparent, and they worry that this may adversely affect them. Stakeholders also told us that additional duties are time-consuming and leave less time to devote to applying for grant funding. They said that some biomedical graduate students from underrepresented groups decide to pursue other fields because of the competing demands associated with being an academic, such as grant writing and teaching responsibilities.
NIH Research Grant Applicant Funding Rates

Our analysis of NIH data from fiscal years 2013 through 2017 also shows that the funding rate for applicants from underrepresented racial groups applying for large and small NIH grants lags behind that of white applicants. For example, in fiscal year 2017, the applicant funding rate for large grants was about 17 percent for underrepresented racial groups and about 24 percent for Hispanics and Latinos. The funding rate for white applicants was about 27 percent. (See fig. 2.) Among underrepresented racial groups, African American or black applicants consistently had a lower funding rate for large and smaller grants than well-represented groups during this period (see table 5). The applicant funding rate for career development grants for underrepresented racial groups increased from about 22 percent to about 32 percent from fiscal years 2013 to 2017, and, for Hispanic and Latino applicants, from about 30 percent to about 36 percent during the same period. The applicant funding rate was about 34 percent for white applicants throughout this period. The large grant funding rate for female investigators was slightly lower than that for male investigators. (See fig. 3.) When looking exclusively at R01 grants, as opposed to all large grants, research has shown that women are less likely to have their initial R01 grant renewed. Our analysis of R01 grant renewal funding showed that, in fiscal year 2017, the R01 grant renewal funding rate for female applicants was about 31 percent, compared to about 38 percent for male applicants. (See fig. 4.) According to research by NIH, some applicants who are unsuccessful in obtaining an initial R01 grant may have greater success if they reapply; however, some stakeholders we interviewed said women, and some underrepresented racial groups, are less likely to reapply for an initial R01 grant if they are unsuccessful on their first attempt. (See app. III for information on the applicant funding rates for smaller grants and career development grants by gender.) Many stakeholders attributed the underrepresented groups' lower funding rates to two factors. First, many stakeholders cited a perceived implicit bias within the peer review process, which they said may affect the funding rates for investigators from underrepresented racial and ethnic groups. They stressed that, many times, peer reviewers approve grants for investigators from top-tier institutions that they are familiar with and are reluctant to provide high scores to grant applications from other institutions. Some stakeholders advocated anonymizing grant applications to some extent to address this issue. NIH's Center for Scientific Review—the center responsible for organizing peer reviews for grants—is conducting a study that anonymizes certain large grant applications, and a training module on implicit bias is currently being offered to NIH peer reviewers. In addition, NIH's African American/Black R01 Funding Disparities Working Group has conducted an analysis of the R01 funding disparities for African American or black applicants from fiscal years 2010 through 2015 and is currently pursuing several efforts to address its findings. Lower grant application priority scores and lower application resubmission rates among African American or black applicants were among its findings. The working group is also pursuing a randomized controlled trial to assess the effect of mentoring and coaching on R01 resubmissions and award rates.
Second, some stakeholders told us that only a very small percentage of biomedical science professors at top-tier research schools are from underrepresented racial or ethnic groups. Some stakeholders suggested that many investigators from underrepresented groups seeking grants are affiliated with institutions outside of the top tier that may lack the infrastructure, grant writing support, and mentoring opportunities that could help ensure their success. As a consequence, many investigators from underrepresented groups are at a disadvantage compared to their peers at top-tier institutions, according to the stakeholders we interviewed.

The Effect of NIH's Efforts to Strengthen Diversity Is Unclear; Assessments of Some Targeted Efforts Are Incomplete, and Strategic Goals Lack Quantitative Metrics and Time Frames

Although NIH has taken steps to address concerns about the diversity of the biomedical research workforce, its accomplishments have not been fully evaluated.

Stakeholders Reported Mixed Views on NIH's Efforts to Strengthen Diversity

Positive comments from some stakeholders we interviewed included praise for the steps NIH has taken to diversify the biomedical research workforce, the value of the National Research Mentoring Network, and the research supplements and other training grants offered by NIH's centers and institutes, which provide opportunities for students and postdoctoral fellows from underrepresented groups to work with established investigators. NIH's support of conferences and programs, such as the Annual Biomedical Research Conference for Minority Students and the Institutional Research and Academic Career Development Award, was also well regarded by stakeholders. They also noted NIH's commitment to diversity and willingness to investigate diversity issues through advisory groups, and commended the agency on working to address recommendations from the Working Group on Diversity in the Biomedical Research Workforce, including hiring a Chief Officer of Scientific Workforce Diversity. Some stakeholders were actively engaged in working with NIH on diversity issues. For example, some physicians from an organization we interviewed said they are working with the National Institute on Minority Health and Health Disparities on issues related to research workforce diversity. Stakeholders, though, also offered less favorable views and characterized NIH's efforts as stagnant, ineffective, or in need of better coordination. For example, some stakeholders:

suggested that, for NIH's National Research Mentoring Network, the matching of mentees to mentors could be improved, or mentioned uncertainty about the program;

questioned how often research supplements are utilized, or noted that better mentoring and follow-up after the postdoctoral fellow's work is completed is warranted;

reported that while their organizations initially collaborated with the scientific workforce diversity office, that office is not very active or communication eventually dissipated;

expressed concern about NIH's outreach to minority-serving institutions and organizations, such as historically black colleges and universities, when it began creating programs like the Building Infrastructure Leading to Diversity program and the National Research Mentoring Network and for other efforts; and

stressed that NIH should collaborate more with organizations that represent underrepresented groups, which have already implemented programs shown to be effective in engaging these communities in biomedical research.
Multiple Assessments of Targeted Diversity Efforts Are Ongoing

According to NIH officials, evaluations of various NIH efforts are ongoing and have not been completed. Some examples include the following:

Data collection and analysis by the Diversity Program Consortium's Coordination and Evaluation Center began in 2017 and is ongoing.

In 2017, NIH's Center for Scientific Review began conducting a study to anonymize R01 grant applications from African American or black and white applicants to detect potential reviewer bias during peer review. The results of this study are expected in 2019.

An evaluation of the National Cancer Institute's Continuing Umbrella of Research Experiences program, which provides training and career development opportunities to enhance diversity in the cancer research workforce, was submitted for publication in a scientific journal and is currently pending review.

Some NIH institutes and centers have conducted evaluations of their specific diversity efforts. For example, in 2015, the National Institute of General Medical Sciences analyzed the research supplements provided to graduate students and postdoctoral fellows from underrepresented racial and ethnic groups between 1989 and 2006. The study found that about 65 percent of graduate students and postdoctoral fellows supported by the program entered research careers in academia, industry, and government research. About 41 percent of doctoral graduates and 45 percent of postdoctoral fellows supported by this program entered careers in academic research or teaching, compared to about 43 percent of the U.S. doctoral degree workforce. In 2011, the National Institute on Aging evaluated its research supplement program and found that the NIH research grant applicant success rate of former participants from 2002 to 2010 was about 21 percent. The average research grant success rate for National Institute on Aging grants was about 18 percent during this same period.

NIH Has Developed a Scientific Workforce Diversity Strategic Plan, but It Does Not Include Quantitative Metrics or Time Frames to Assess the Progress of Its Strategic Goals

In 2016, NIH's Chief Officer of Scientific Workforce Diversity established a 5-year strategic plan that describes the agency's five workforce diversity goals and supporting objectives. The strategic plan includes goals and objectives that apply to both extramural and intramural investigators. During the course of our audit work, NIH updated this plan to describe progress made on each of its diversity goals, which are to:

expand scientific workforce diversity as a field of inquiry,

build and implement evidence related to diversity outcomes,

understand the role of sociocultural factors in biomedical recruitment,

sustain nationwide workforce diversity with seamless career transitions, and

promote the value of scientific workforce diversity.

NIH officials provided us with performance measures that its scientific workforce diversity office will use to gauge the agency's progress in achieving each of its five strategic plan goals. However, these items outline the particular areas that NIH plans to evaluate, rather than provide quantitative metrics, evaluation details, or time frames associated with any of the areas by which to evaluate progress in fulfilling the goals of the strategic plan.
For example, for the first scientific workforce diversity goal, "expand scientific workforce diversity as a field of inquiry," one of the performance measures is "number of publications stored in the scientific workforce diversity office's online database." Neither the strategic plan nor the additional documentation that NIH provided specifies a quantitative metric for the number of publications to be stored in its database or the time frame for doing so. Similarly, for the second scientific workforce diversity goal, to "build and implement evidence related to diversity outcomes," one of the performance measures identified by NIH is to compare the large grants awarded to African American or black scientists to those received by scientists who are white or from other racial and ethnic groups. However, there is no description in either the strategic plan or the additional documentation provided by NIH that indicates how and when these comparisons will be made, how the results of these comparisons will be assessed, and what will be considered as fulfilling this goal. None of the other areas, or "performance measures," associated with the five goals includes such details or time frames. According to documentation provided by NIH, its strategic plan does not explicitly list "specific metrics" because they will be defined within "the implementation phase of the plan." However, we are at the midpoint of the implementation of NIH's 5-year plan, which covers the period of 2016 through 2020. As of May 2018, these specific metrics were not yet available. Without quantitative metrics, evaluation details, or time frames for assessing the agency's performance against the five goals in its strategic plan, NIH will be unable to hold itself accountable for fulfilling its goals. This is inconsistent with best practices for strategic workforce planning, which call for agencies to monitor and evaluate their progress toward their human capital goals. These best practices also call for performance metrics to be specified at the outset to avoid a biased determination of what counts as "success" after the results are known. Further, this is inconsistent with federal internal control standards for monitoring, which require that an agency evaluate and document the results of ongoing monitoring to determine whether its management strategies are effectively supporting its objectives, or need corrective action. NIH's establishment of goals and associated areas of future evaluation are positive steps, but absent specific measures by which to hold itself accountable, the agency will not have a basis to judge its success.

Conclusions

NIH's ability to fulfill its mission of advancing scientific knowledge and innovation to enhance health, lengthen life, and reduce illness and disability is dependent on its success in sustaining a thriving and diverse workforce. For decades, concerns have been raised by the biomedical research community about NIH's ability to support investigators beginning their research careers. Similar concerns have been expressed regarding support for investigators from groups underrepresented in the sciences, including those from certain racial and ethnic groups, as well as women. While the agency has taken many steps during this time, disparities in its research grant funding persist. NIH has conducted some evaluations of individual programs and activities, but these have been relatively narrow in focus and the results of many efforts are not yet available.
More recently, NIH has taken positive steps, such as establishing the position of Chief Officer of Scientific Workforce Diversity, who, in turn, created a strategic workforce diversity plan and related goals and identified areas of future evaluation. However, NIH does not have quantitative metrics, evaluation details, and time frames to assess its progress in meeting its strategic workforce diversity goals. Without these elements, NIH's ability to assess how its diversity strategic plan goals are being achieved is hindered. Thus, NIH is missing an opportunity to better position itself to support underrepresented groups and address longstanding disparities.

Recommendation for Executive Action

The NIH Director should develop quantitative metrics, evaluation details, and specific time frames to assess its current efforts to support investigators from underrepresented groups against its scientific workforce diversity strategic goals, and use the results of its assessment to guide any further actions. (Recommendation 1)

Agency Comments

We provided a draft of this report to HHS for comment. In its written comments, which are reproduced in appendix IV, HHS concurred with our recommendation and outlined the steps NIH is taking to implement it. For example, HHS indicated that NIH is establishing time frames to assess its progress in meeting its workforce diversity goals. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Health and Human Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Trends in the Number of Grants Awarded to Early Career Extramural Investigators by Award Type, for Fiscal Years 2013 through 2017

Table 6 provides details on the number of grants awarded, number of awardees, and award type for early stage and intermediate stage investigators from fiscal year 2013 through fiscal year 2017.

Appendix II: Total Number of Applicants for Smaller Grants and Career Development Grants for Fiscal Years 2013 through 2017

Tables 7 through 10 provide details on the demographics of NIH grant applicants during fiscal years 2013 through 2017.

Appendix III: Applicant Funding Rates for Smaller Grants and Career Development Grants for Fiscal Years 2013 through 2017

Figures 5 and 6 provide details on the funding rates of NIH grant applicants during fiscal years 2013 through 2017.

Appendix IV: Comments from the Department of Health and Human Services

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, Geri Redican-Bigott (Assistant Director), Carolina Morgan (Analyst-in-Charge), Jackie Hamilton, Toni Harrison, and Drew Long made key contributions to this report. Muriel Brown, Giselle Hicks, and Hayden Huang also made contributions to this report.
Why GAO Did This Study

NIH's success depends on its ability to attract, retain, develop, and otherwise support biomedical investigators—including those employed in its intramural research program as well as those working in its extramural program at universities, academic health centers, and other research institutions. For decades, the agency has faced challenges in supporting early career investigators and those from underrepresented groups, including ethnic and racial minorities and women. The 21st Century Cures Act included provisions that NIH coordinate policies and programs to promote early research independence and enhance the diversity of the scientific workforce. The act also contained a provision that GAO examine NIH's efforts. GAO reviewed the actions NIH has taken to support (1) investigators beginning their biomedical careers; and (2) investigators from underrepresented groups and women. GAO analyzed NIH data from fiscal years 2013 through 2017 on grant funding for investigators by career phase and demographic status. GAO also reviewed relevant laws and NIH policies, programs, and initiatives, and interviewed NIH officials and stakeholders from the scientific research community.

What GAO Found

The National Institutes of Health (NIH), within the Department of Health and Human Services (HHS), plays a prominent role in the nation's biomedical research. While it employs investigators in its intramural research program, over 80 percent of its budget supports its extramural program, primarily through grant funding to investigators at other research institutions. Given this, NIH has a vested interest in supporting a robust national biomedical workforce, but the agency has acknowledged that the environment is highly competitive and many investigators find that it takes years to obtain the type and amount of funding that typically spurs research independence. GAO's analysis found that extramural investigators who had received at least one large NIH research grant during fiscal years 2013 through 2017 were more likely to receive such grants in subsequent application cycles than investigators who had not yet received such grants. In response to the 21st Century Cures Act, enacted in December 2016, NIH introduced an initiative to prioritize these grants for (1) early stage investigators, who are beginning their careers and have never received a large research grant, and (2) intermediate stage investigators, who are within 10 years of receiving their first large grant as an early stage investigator. However, it is too early to assess this new initiative, which was introduced in August 2017. NIH is currently considering revising the program to include investigators whose careers are more advanced. NIH implemented recommendations made by internal advisory bodies to support investigators from racial and ethnic groups considered by NIH to be underrepresented in biomedical research. GAO's analysis shows disparities for underrepresented racial and ethnic groups, and for female investigators, from 2013 through 2017. For example, in 2017, about 17 percent of investigators from underrepresented racial groups—African Americans, American Indians/Alaska Natives, and Native Hawaiian/Pacific Islanders combined—who applied for large grants received them. In contrast, about 24 percent of Hispanic or Latino applicants, an underrepresented ethnic group, received such grants. Asians and whites—well represented groups—were successful in receiving large grants about 24 and 27 percent of the time, respectively.
Though women represent about half of all doctorates in biological science, GAO found that women investigators employed by NIH in its intramural program comprised about one-quarter of tenured investigators. NIH has taken positive steps such as establishing the position of Chief Officer of Scientific Workforce Diversity, who in turn created a strategic workforce diversity plan, which applies to both extramural and intramural investigators. The plan includes five broad goals for expanding and supporting these investigators. However, NIH has not developed quantitative metrics, evaluation details, or specific time frames by which it could measure the agency's progress against these goals.

What GAO Recommends

The Director of NIH should develop quantitative metrics, evaluation details, and time frames to assess NIH's efforts to diversify its scientific workforce against its diversity strategic plan goals, and take action as needed. HHS agreed with GAO's recommendation.
Background

The 340B Program was created in 1992 following the enactment of the Medicaid Drug Rebate Program and gives 340B covered entities discounts on outpatient drugs comparable to those made available to state Medicaid agencies. HRSA is responsible for administering and overseeing the 340B Program.

340B Program Eligibility

Eligibility for the 340B Program, which is defined in the Public Health Service Act, has expanded over time. Covered entities generally become eligible for the 340B Program by qualifying as certain federal grantees or as one of six specified types of hospitals. Eligible federal grantees include federally qualified health centers (FQHCs), which provide comprehensive community-based primary and preventive care services to medically underserved populations, as well as certain other federal grantees, such as family planning clinics and Ryan White HIV/AIDS program grantees. Eligible hospitals include critical access hospitals—small, rural hospitals with no more than 25 inpatient beds; disproportionate share hospitals—general acute care hospitals that serve a disproportionate number of low-income patients; and four other types of hospitals (see fig. 1). Some covered entities, typically hospitals and FQHCs, have multiple sites: the main site, which HRSA refers to as the parent site, and one or more other associated sites referred to as child sites. Child sites can include satellite clinics, off-site outpatient facilities, hospital departments, and other facilities. According to HRSA officials, to participate in the 340B Program and be considered part of the covered entity, the associated sites must meet program requirements and be registered with HRSA as a child site.

Program Structure, Operation, and Key Requirements

The 340B price for a drug—often referred to as the 340B ceiling price—is based on a statutory formula and represents the highest price a participating drug manufacturer may charge covered entities. Covered entities must follow certain requirements as a condition of participating in the 340B Program. For example, covered entities are prohibited from:

subjecting manufacturers to "duplicate discounts," in which drugs prescribed to Medicaid beneficiaries are subject to both the 340B price and a rebate through the Medicaid Drug Rebate Program; and

diverting any drug purchased at the 340B price to an individual who is not a patient of the covered entity.

Under HRSA guidance defining this term, diversion generally occurs when 340B drugs are given to individuals who are not receiving health care services from covered entities or are receiving services that are not consistent with the type of services for which the covered entity qualified for 340B status. (See table 1 for more information on HRSA's definition of an eligible patient.) Covered entities are permitted to use drugs purchased at the 340B price for all individuals who meet the 340B Program definition of a patient regardless of their financial or insurance status.

Contract Pharmacies

Covered entities may choose to dispense 340B drugs they purchase through contract pharmacies. The adoption and use of contract pharmacies in the 340B Program is governed by HRSA guidance. HRSA's original guidance permitting the use of contract pharmacies limited their use to entities that did not have in-house pharmacies and allowed each entity to contract with only one outside pharmacy. However, March 2010 guidance lifted the restriction on the number of pharmacies with which a covered entity could contract.
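The statutory pricing formula mentioned above is not reproduced in this report. In broad terms, a drug's 340B ceiling price is its average manufacturer price (AMP) minus its Medicaid unit rebate amount (URA). A minimal sketch of that calculation, using purely hypothetical figures (the $0.01 floor reflects HRSA's "penny pricing" convention for drugs whose calculated price falls to zero or below):

def ceiling_price(amp: float, ura: float) -> float:
    # 340B ceiling price: average manufacturer price (AMP) minus the
    # Medicaid unit rebate amount (URA), floored at $0.01 per HRSA's
    # "penny pricing" convention. All figures here are illustrative;
    # actual prices are computed per drug and quarter under the statute.
    return max(amp - ura, 0.01)

# A drug with a hypothetical $100.00 AMP and $23.10 URA:
print(ceiling_price(100.00, 23.10))  # 76.9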
Since that 2010 policy change, the number of contract pharmacies has increased more than fifteen-fold, from about 1,300 to approximately 20,000. According to HRSA guidance, a covered entity is required to have a written contract in place with each pharmacy through which it intends to dispense 340B drugs, but is not generally required to submit its pharmacy contracts to HRSA. A covered entity that has more than one site at which it provides health care may enter into separate pharmacy contracts for the parent site and each child site, or one comprehensive pharmacy contract including all sites intending to use the pharmacy. It is up to the covered entity to determine which of its sites will be included in a contract with a pharmacy, and thus have what is referred to as a contract pharmacy arrangement with that pharmacy. Figure 2 provides an illustration of a covered entity that has four contract pharmacies but a total of six contract pharmacy arrangements, as not all of the entity's sites have contracts with each of the pharmacies. Covered entities that choose to have contract pharmacies are required to register with HRSA the names of each of the pharmacies with which they contract. Covered entities may register their contract pharmacies in one of two ways: 1) only in relation to the parent site (use by child sites would be allowed as long as the sites were included in a comprehensive contract between the entity and the contracted pharmacies); or 2) separately for each site (parent and child) involved in a contractual arrangement with the pharmacy. As part of this registration, HRSA guidance specifies that covered entities must certify that they have signed and have in effect an agreement with each contract pharmacy and have a plan to ensure compliance with the statutory prohibitions on 340B drug diversion and duplicate discounts at their contract pharmacies. Like other pharmacies, when contract pharmacies fill prescriptions, they collect payments from the patient; if the patient has health insurance, the pharmacy will bill the insurer for the drug. In addition, each covered entity must determine which prescriptions are for eligible patients of the entity, and thus can be filled with 340B drugs. One way that a covered entity could choose to do this is to employ a third-party administrator (TPA) to review all the prescriptions filled by a contract pharmacy to determine which, if any, prescriptions were issued by the covered entity to an eligible patient, and thus are eligible for the 340B discount. The covered entity then pays both the contract pharmacy and the TPA fees that they have negotiated for their roles in managing and distributing 340B drugs. These fees are typically deducted from the reimbursed amounts received from patients and their health insurers by the pharmacy and TPA, and then the balance is forwarded to the covered entity. (See fig. 3 for an example of how covered entities work with contract pharmacies and TPAs to dispense 340B drugs.)

HRSA's Oversight of Covered Entities

In fiscal year 2012, HRSA implemented a systematic approach to conducting audits of covered entities that is outlined on its website. HRSA has increased the number of covered entities audited since it began audits in fiscal year 2012, and now audits 200 entities per year. (See table 2.)
HRSA's audits include covered entities that are randomly selected based on risk-based criteria (approximately 90 percent of all audits conducted each year), and covered entities that are targeted based on information from stakeholders such as drug manufacturers (10 percent of the audits conducted). The criteria for risk-based audits include a covered entity's volume of 340B drug purchases, number of contract pharmacies, time in the 340B Program, complexity of its program, and history of violations or allegations of noncompliance associated with diversion and duplicate discounts. Among other things, HRSA's audits include reviews of each covered entity's policies and procedures, including those for overseeing contract pharmacies; an assessment of the entity's compliance with respect to 340B eligibility status, the prevention of duplicate discounts and diversion, and other program requirements; and reviews of a sample of prescriptions filled during a 6-month period, including prescriptions dispensed by contract pharmacies, to identify instances of non-compliance. As a result of the audits conducted, HRSA has identified instances of non-compliance with program requirements, including violations related to drug diversion and the potential for duplicate discounts. Based on the audits for which results were posted on HRSA's website as of February 8, 2018, 72 percent of the covered entities audited in fiscal years 2012 through 2017 had one or more findings of noncompliance. When an audit of a covered entity has a finding of noncompliance, the covered entity is required to submit a corrective action plan within 60 days of the audit being finalized for HRSA approval. HRSA closes out the audit once the entity attests that the corrective action plan has been fully implemented and any necessary repayments have been made to affected manufacturers.

About One-Third of Covered Entities Had One or More Contract Pharmacies, and Pharmacy Characteristics Varied

As of July 1, 2017, about one-third of the more than 12,000 covered entities in the 340B Program had contract pharmacies, but the extent to which covered entities had contract pharmacies varied by type of entity. Overall, a higher percentage of hospitals (69.3 percent) had at least one contract pharmacy compared to federal grantees (22.8 percent). Among the six types of hospitals, the percentage that had at least one contract pharmacy ranged from 39.2 percent of children's hospitals to 74.1 percent of critical access hospitals. Among the 10 types of federal grantees, the percentage with at least one contract pharmacy ranged from 3.9 percent of family planning clinics to 75.2 percent of FQHCs (see fig. 4). Among covered entities that had at least one contract pharmacy, the number of contract pharmacies ranged from 1 to 439, with an average of 12 contract pharmacies per entity. However, the number of contract pharmacies varied by covered entity type, with disproportionate share hospitals having the most on average (25 contract pharmacies) and critical access hospitals having the fewest (4 contract pharmacies). (See fig. 5 for the distribution of contract pharmacies by covered entity type.) However, we found that a covered entity that contracts with a pharmacy may not actually use the pharmacy to dispense 340B drugs. For example, three covered entities that received our questionnaire told us that although they had one or more contract pharmacies registered with HRSA, they did not use those pharmacies to dispense 340B drugs.
Moreover, officials from a covered entity we interviewed reported that while the entity maintained a contract with a specialty pharmacy, it had not dispensed 340B drugs through that pharmacy in several years. Officials explained that the covered entity maintained its contract and continued to register this pharmacy with HRSA because it would be financially beneficial should it have a patient fill a 340B-eligible specialty drug at this pharmacy in the future. The actual number of 340B contract pharmacy arrangements—the number of contractual arrangements between contract pharmacies and the sites of a covered entity—is unknown because HRSA does not require a covered entity to register pharmacies with each of its child sites. Rather, HRSA gives covered entities the option to register contract pharmacies only in relation to the parent site: child sites may use that pharmacy if included in the written contract between the entity and the pharmacy. Based on our analysis of HRSA data, 1,645 covered entities that had at least one child site registered their contract pharmacies only with their parent sites. These 1,645 covered entities had a total of 25,481 registered contract pharmacy arrangements. However, if the pharmacies were contracted to work with all of the covered entities’ sites—the parents and all the child sites—then these 1,645 entities could have as many as 866,388 contract pharmacy arrangements. Therefore, the number of contract pharmacy arrangements is likely higher than what is reported in HRSA’s database. Nearly 93 percent of the approximately 20,000 pharmacies that 340B covered entities contracted with as of July 1, 2017, were classified as community/retail pharmacies, less than 1 percent were classified as specialty pharmacies, and about 7 percent were other types of pharmacies including institutional and mail order pharmacies. Furthermore, the majority (75 percent) of 340B contract pharmacies were chain pharmacies, while 20 percent were independent pharmacies and 5 percent were other pharmacies. In contrast, slightly over half of all pharmacies nationwide are chain pharmacies and about one-third are independent. The five biggest pharmacy chains—CVS, Walgreens, Walmart, Rite-Aid, and Kroger—represented a combined 60 percent of 340B contract pharmacies, but only 35 percent of all pharmacies nationwide. Figure 6 shows how the types of pharmacies varied by type of covered entity. Critical access hospitals had a higher proportion of independent contract pharmacies (40 percent of their pharmacies) compared to other covered entity types (which ranged from 11 percent for disproportionate share hospitals to 21 percent for other federal grantees). Our analysis suggests that this is likely due, in part, to a larger proportion of critical access hospitals compared to other types of covered entities being located in rural areas; independent contract pharmacies are also more likely than other contract pharmacies to be located in rural areas. Across all covered entities, the distance between the entities and their contract pharmacies ranged from 0 miles (meaning that the contract pharmacy and entity were co-located) to more than 5,000 miles; the median distance was 4.2 miles. Table 3 shows the distribution of distances between covered entities and their pharmacies overall and by entity type. 
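The report does not describe how the distances in table 3 were derived. One plausible approach, assuming entity and pharmacy addresses have been geocoded to latitude and longitude (the coordinate records below are hypothetical), is a great-circle calculation such as the following sketch:

import math
from statistics import median

def miles_between(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance in miles between two points.
    r = 3958.8  # Earth's mean radius in miles
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geocoded pairs: (entity lat, entity lon, pharmacy lat, pharmacy lon)
pairs = [(41.88, -87.63, 41.85, -87.65), (34.05, -118.24, 36.17, -115.14)]
distances = [miles_between(*p) for p in pairs]
print(median(distances))                                 # median distance, in miles
print(sum(d <= 30 for d in distances) / len(distances))  # share within 30 miles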
While there was a range in distances between covered entities and each of their pharmacies, about half of the entities had all their contract pharmacies located within 30 miles, but this varied by entity type. Specifically, more than 60 percent of critical access hospitals and FQHCs had all of their contract pharmacies within 30 miles. In contrast, 45 percent of disproportionate share hospitals had at least one pharmacy that was more than 1,000 miles away, compared to 11 percent or less for grantees and critical access hospitals. (See fig. 7.)

Selected Covered Entities Used Various Methods to Pay Contract Pharmacies and TPAs

Contracts we reviewed between selected covered entities and contract pharmacies showed that entities generally agreed to pay their contract pharmacies a flat fee per 340B prescription, with some entities also paying additional fees based on a percentage of revenue. Selected covered entities and TPAs included in our review indicated two main methods entities use to pay for TPA services: 1) per prescription processed, or 2) per contract pharmacy.

Contracts Reviewed Showed Covered Entities Agreed to Pay Contract Pharmacies a Fee per 340B Prescription; Some Also Agreed to Additional Fees

Twenty-nine of the 30 contracts we reviewed between covered entities and contract pharmacies included provisions for the entities to pay flat fees for each eligible 340B prescription. For the remaining contract, the covered entity and the contract pharmacy were part of the same hospital system, and the contract provided that the entity would not pay fees for 340B prescriptions. In addition to payment of flat fees, 13 of the 29 contracts required the covered entity to pay the contract pharmacy a fee based on a percentage of revenue generated for each 340B prescription. Among the contracts we reviewed, more federal grantees than hospitals had contracts that included both flat fees and fees based on a percentage of revenue (see fig. 8). We found a wide range in the amount of flat fees covered entities agreed to pay pharmacies in the contracts we reviewed, though they generally ranged from $6 to $25 per 340B prescription. (See Appendix I for a description of fees listed in each of the contracts we reviewed.) The amount of the flat fees per 340B prescription varied by several factors according to our review, including covered entity type, type of drug, and patient insurance status:

Flat fees were generally higher for hospitals than federal grantees. In general, hospitals' flat fees were higher than those for grantees, with most flat fees ranging from $15 to $25 per 340B prescription for hospitals, compared to $6 to $13 for grantees.

Flat fees were sometimes higher for brand drugs. Three of the 29 contracts we reviewed specified different flat fees for brand and generic drugs. In 2 of these contracts flat fees were $5 or $7 higher for brand drugs. In the remaining contract, the fees for some brand drugs were substantially higher, ranging from $75 to $1,750 for brand drugs, compared to $0 for generic drugs. Additionally, some contracts we reviewed only specified a fee for brand drugs, and 4 of the contracts either excluded generic drugs from being purchased at the 340B price or limited the use of the 340B Program to brand drugs.

Flat fees were different or substantially higher for certain specialty drugs. For 2 of the 29 contracts we reviewed, flat fees were for drugs to treat hemophilia.
Given the different nature of hemophilia treatment drugs, fees for these drugs were different than those in the other contracts for other types of drugs, and provided for payments of $0.06 and $0.09 per unit of blood clotting factor. Additionally, 2 contracts contained substantially higher flat fees for specialty medications. In 1 contract, the flat fees were $125 per prescription for brand and generic human immunodeficiency virus drugs, and $1,750 for brand hepatitis C drugs. In another contract the flat fees were $65 for all specialty drugs, compared to $13 for other drugs.

Flat fees were sometimes higher for 340B prescriptions dispensed to patients with insurance. Seven of the 29 contracts we reviewed specified different flat fees for prescriptions provided to patients with health insurance than for patients paying with cash or through a drug discount card provided by the covered entity. The flat fees entities would pay under these contracts ranged from $1 to $16 higher per 340B prescription dispensed to insured patients compared to patients not using insurance.

As previously noted, in addition to requiring flat fees for dispensing prescriptions, 13 of the 29 contracts we reviewed included provisions for the covered entity to pay the pharmacy a fee based on the percentage of revenue generated by each prescription. These percentage fees only applied to prescriptions provided to patients with insurance, and ranged from 12 to 20 percent of the revenue generated by the prescriptions. Generally there were two methods for determining the amount of revenue generated. The first method used the reimbursement the pharmacy received for the prescription, while the second method used the net revenue after subtracting the 340B cost of the drug from the reimbursement received by the pharmacy.

Selected Covered Entities Use Two Main Methods to Pay TPAs

Officials from the two TPAs we interviewed and questionnaire respondents from the 39 covered entities that use TPAs described two main methods entities use to reimburse TPAs for 340B services: 1) a fee for each prescription processed by the TPA, and 2) a fee for each contract pharmacy for which the TPA processes 340B claims on behalf of the entity.

Example of Fees between a Covered Entity and Third-Party Administrator (TPA)

In the hypothetical example below, the TPA receives $85 from the contract pharmacy. This amount represents the total reimbursement for the 340B drug, less fees deducted by the contract pharmacy. Pursuant to an agreement with the covered entity, the TPA deducts a fee of $5, and forwards the remaining balance of $80 to the covered entity. This represents the total revenue the covered entity generated from the 340B drug.

Officials with the two TPAs we interviewed told us that their agreements with covered entities most frequently involve covered entities compensating them based on a fee for each prescription they process on behalf of the entity. Officials from one of these TPAs described three different fee-per-prescription options they offer to covered entities, with the amount of the fees varying based on the option selected:

A small fee, for example, 20 cents, for every prescription filled by the covered entity's contract pharmacy, and reviewed and processed by the TPA. This includes prescriptions that may not have originated from the covered entity, and may not be 340B eligible, as contract pharmacies can also fill prescriptions for individuals who are not patients of the entity.
A mid-sized fee, for example, $1.90, for each prescription filled by the covered entity's contract pharmacy that the TPA reviewed and determined originated from the covered entity. These prescriptions may or may not be 340B eligible.

A larger fee, for example, $5 to $7, for each prescription filled by the covered entity's contract pharmacy that the TPA determined originated from the entity and is 340B eligible.

The 39 covered entities that responded to our questionnaire and reported using a TPA most frequently reported paying their TPAs a fee per each prescription processed, but the exact method varied. For example, some covered entities said they paid their TPAs for each prescription regardless of whether it was determined to be 340B eligible, others limited the fees to prescriptions that were 340B eligible, and some reported paying TPAs only for 340B-eligible prescriptions dispensed to an insured patient. (See table 4.) Among the 10 covered entities we interviewed, officials from 8 of these entities said they used TPAs; 5 said they pay their TPAs a fee per prescription, 1 reported paying a fee per contract pharmacy, and 2 reported using both options. Among the covered entities that used fees per prescription and told us the amounts of the fees they pay, the fees ranged from $3.50 to $10.00 per 340B-eligible prescription, or were $3.95 per prescription regardless of whether the prescription was 340B eligible. For those that pay their TPA a fee per contract pharmacy, the fee was $25,000 a year per pharmacy.

About Half of the Covered Entities Reviewed Provided Low-Income, Uninsured Patients Discounts on 340B Drugs at Some or All of Their Contract Pharmacies

Of the 55 covered entities responding to our questionnaire, 30 reported providing low-income, uninsured patients discounts on 340B drugs dispensed at some or all of their contract pharmacies, and 25 said they did not offer discounts at their contract pharmacies. All 30 covered entities providing patients with discounts reported providing discounts on the drug price for some or all 340B drugs dispensed at contract pharmacies. Federal grantees were more likely than hospitals to provide such discounts and to provide them at all contract pharmacies (see fig. 9). Of the 30 covered entities that responded to our questionnaire that they provided discounts on the drug price, 23 reported providing patients the full 340B discount—the patients obtained drugs from contract pharmacies at the 340B price or less. In many cases, these covered entities indicated that patients received drugs at no cost. Some covered entities reported that patients would pay more than the 340B price, but less than the wholesale price of the drug or what a self-paying patient would pay, and others indicated they determined discounts for patients on a case-by-case basis. A larger number of federal grantees than hospitals (15 compared to 8) indicated their patients would pay the 340B price or less for their drugs at contract pharmacies where discounts were available. (See fig. 10.) In addition to providing discounts on the 340B drug price, some of the 30 covered entities also reported providing discounts on fees patients may pay to contract pharmacies for 340B drugs. Contract pharmacies may charge fees to dispense 340B drugs or cover administrative costs of participating in a covered entity's 340B program, including costs associated with tracking drug inventories and ordering new drugs.
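As a rough illustration of the payment mechanics described in the preceding section, the following sketch combines a contract pharmacy's flat fee, a percentage-of-revenue fee, and a per-prescription TPA fee to compute the balance forwarded to a covered entity, in the manner of the hypothetical $85/$5/$80 example earlier. All terms and amounts here are illustrative, not drawn from any reviewed contract:

def amount_forwarded_to_entity(reimbursement, drug_340b_cost, flat_fee,
                               pct_fee=0.0, tpa_fee=0.0, pct_on_net=True):
    # Balance forwarded to the covered entity for one insured 340B
    # prescription after the contract pharmacy deducts its flat fee and any
    # percentage-of-revenue fee, and the TPA deducts its per-prescription fee.
    # pct_on_net selects the second revenue method described above:
    # the reimbursement minus the 340B cost of the drug.
    revenue_base = reimbursement - drug_340b_cost if pct_on_net else reimbursement
    pharmacy_fees = flat_fee + pct_fee * revenue_base
    return reimbursement - pharmacy_fees - tpa_fee

# Hypothetical terms: $100 reimbursement, $40 340B drug cost, $10 flat fee,
# a 15 percent fee on net revenue, and a $5 TPA fee.
print(amount_forwarded_to_entity(100, 40, 10, 0.15, 5))  # 76.0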
In general, about two-thirds of the covered entities with patients who would be subject to dispensing or administrative fees at contract pharmacies reported providing discounts on the fees at some or all of their contract pharmacies. Hospitals were more likely than grantees to provide discounts on these fees when applicable. (See fig. 11.) The 30 covered entities providing 340B discounts to low-income, uninsured patients reported using a variety of methods to determine whether patients were eligible for these discounts. Fourteen of the covered entities said they determined eligibility for discounts based on whether a patient's income was below certain thresholds as a percentage of the federal poverty level, 11 reported providing discounts to all patients, and 5 said they determined eligibility for discounts on a case-by-case basis. For those 14 covered entities determining eligibility based on income as a percentage of the federal poverty level, the threshold used to determine who was eligible for discounts varied, but most reported that patients with incomes at or below 250 percent of the federal poverty level would be eligible for discounts. (See table 5.) Covered entities reported making patients aware of the availability of discounts at contract pharmacies primarily through oral communication by staff located at either the entity or the pharmacy. In addition, the covered entities reported using a variety of methods to inform contract pharmacies about which patients were eligible for discounts, including through notes in patient medical records sent to the pharmacy or by placing codes on the patient's prescriptions sent to or presented at the pharmacy. (See table 6.) Officials from one covered entity we interviewed said that it provides patients eligible for discounts with an identification card (which they referred to as a drug discount card) that patients present at the contract pharmacy; this card informs pharmacy staff of the specific discount amount. Officials from another covered entity said they place codes on electronic prescriptions, which inform the pharmacy about discounts. Some covered entities that did not provide discounts on 340B drugs at their contract pharmacies reported assisting patients with drug costs through other mechanisms. For example, 6 of the 10 covered entities we interviewed said that while they did not provide discounts on 340B drugs dispensed at their contract pharmacies, they provide charity care to low-income patients, including free or discounted prescriptions. Additionally, 4 of the 25 covered entities that reported on our questionnaire that they did not provide discounts at their contract pharmacies said they provided patients with discounts on 340B drugs at their in-house pharmacies.

Oversight Weaknesses Impede HRSA's Ability to Ensure Compliance at 340B Contract Pharmacies

HRSA does not have complete data on the total number of contract pharmacy arrangements in the 340B Program to inform its oversight efforts, including information that could be used to better target its audits. Additionally, weaknesses in HRSA's audit process compromise its oversight of covered entities. Finally, the lack of specificity in HRSA's guidance to covered entities potentially impedes covered entities' oversight of contract pharmacies.

HRSA Does Not Have Complete Data on Contract Pharmacy Arrangements to Use for Its Oversight

HRSA does not have complete data on all contract pharmacy arrangements in the 340B Program to inform its oversight efforts.
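To make the undercount concrete: when an entity registers a pharmacy only at the parent site, HRSA's database records a single arrangement even though the written contract may cover every site of the entity. A minimal sketch of the gap, using hypothetical entities (the 25,481 registered versus up to 866,388 potential arrangements reported earlier reflect the same arithmetic at program scale):

# Each hypothetical entity: (number of sites, including the parent,
# and number of pharmacies registered at the parent site only).
entities = [(5, 3), (2, 10), (12, 4)]

# Arrangements visible in the database under parent-only registration.
registered = sum(pharmacies for _, pharmacies in entities)

# Potential arrangements if each contract covers every site of the entity.
potential = sum(sites * pharmacies for sites, pharmacies in entities)

print(registered, potential)  # 17 83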
HRSA requires covered entities to register their contract pharmacies with the agency and recertify that registration annually. Contract pharmacies registered to each covered entity are recorded in a publicly available database, which, according to HRSA, is used by various stakeholders to validate the eligibility of entities and confirm shipping addresses for each contract pharmacy eligible to receive 340B drugs on an entity's behalf. However, because covered entities differ in the way they register their contract pharmacies, HRSA, and its publicly available database, does not have information on all of an entity's contract pharmacy arrangements. Specifically, because HRSA does not require covered entities to separately register contract pharmacies to each child site for which a contractual relationship exists, HRSA does not have complete information on which sites of an entity have contracted with a pharmacy to dispense 340B drugs. Our analysis of HRSA data showed that the registration of contract pharmacies for 57 percent of covered entities with child sites only specified relationships between contract pharmacies and the parent site; thus, HRSA may only have information on a portion of the actual number of 340B contract pharmacy arrangements. Additionally, manufacturers do not have complete information on which covered entity sites have contracts with a pharmacy to dispense 340B drugs, according to HRSA officials. Manufacturers could use such information to help ensure that 340B discounted drugs are only provided to pharmacies on behalf of a covered entity site with a valid 340B contract with that site. HRSA officials told us that the number of contract pharmacy arrangements recorded in HRSA's database increases a covered entity's chance of being randomly selected for a risk-based audit. However, since HRSA gives covered entities multiple contract pharmacy registration options, the likelihood of an entity being selected for an audit is dependent, at least in part, on how an entity registers its pharmacies, as opposed to the entity's actual number of pharmacy arrangements. Without more complete information on covered entities' contract pharmacy arrangements, HRSA cannot ensure that it is optimally targeting the limited number of risk-based audits done each year to entities with more contract pharmacy arrangements. Federal internal control standards related to information and communication state that management should use quality information to achieve the entity's objectives, such as by obtaining relevant data that are reasonably free from error and bias and represent what they purport to represent so that they can be used for effective monitoring. Without complete information on covered entities' use of contract pharmacies, HRSA does not have the information needed to effectively oversee the 340B Program, including information that could be used to better target its audits of covered entities.

Weaknesses in HRSA's Audit Process Impede Its Oversight of 340B Program Compliance at Contract Pharmacies

HRSA primarily relies on audits to assess covered entities' compliance with 340B Program requirements, including compliance at contract pharmacies, according to HRSA officials; however, weaknesses in its audit process impede the effectiveness of its oversight. As a result of its audits, HRSA has identified instances of diversion and the potential for duplicate discounts at contract pharmacies, among other findings of noncompliance.
Specifically, through the audits conducted since fiscal year 2012, HRSA identified at least 249 instances of diversion at contract pharmacies and 15 instances of the potential for duplicate discounts for drugs dispensed at contract pharmacies, as of February 2018. HRSA had also identified 33 covered entities with insufficient contract pharmacy oversight. (See table 7.) However, we identified two areas of weakness in HRSA's audit process that impede its oversight of covered entities' compliance with 340B Program requirements at contract pharmacies: 1) the process does not include an assessment of all potential duplicate discounts, and 2) the process for closing audits does not ensure all covered entities have fully addressed any noncompliance identified.

Medicaid Delivery Systems

States provide Medicaid services through either fee-for-service or managed care. Under fee-for-service, states reimburse providers directly for each service delivered. For example, a pharmacy would be paid by the state for each drug dispensed to a Medicaid beneficiary. Under a capitated managed care model, states typically contract with managed care organizations to provide a specific set of services to Medicaid beneficiaries (which could include drugs) and prospectively pay each organization a set amount per beneficiary per month to provide or arrange those services.

Not all potential duplicate discounts are assessed. HRSA's audits only assess the potential for duplicate discounts in Medicaid fee-for-service. They do not include a review of covered entities' processes to prevent duplicate discounts for drugs dispensed through Medicaid managed care. The potential for duplicate discounts related to Medicaid managed care has existed since 2010, when manufacturers became required to pay Medicaid rebates under managed care, and currently there are more Medicaid enrollees, prescriptions, and spending for drugs under managed care than fee-for-service. HRSA officials told us that they do not assess the potential for duplicate discounts in Medicaid managed care as part of their audits because they have yet to issue guidance as to how covered entities should prevent duplicate discounts in Medicaid managed care. They agreed that the lack of Medicaid managed care guidance for covered entities was problematic, and HRSA's December 2014 policy release stated, "HRSA recognizes the need to address covered entities' role in preventing duplicate discounts under Medicaid managed care, and is working with the Centers for Medicare & Medicaid Services (CMS) to develop policy in this regard." According to HRSA, in the absence of formal guidance, covered entities should work with their states to develop strategies to prevent duplicate discounts in Medicaid managed care. However, 8 of the 10 covered entities we spoke with described challenges working with their states and local Medicaid managed care organizations to ensure that duplicate discounts were not occurring or expressed the need for more guidance from HRSA on how to comply with 340B requirements related to duplicate discount prevention. As a result of these challenges, some covered entities acknowledged that they did not have assurance that duplicate discounts were not occurring with their Medicaid managed care claims, while other entities told us that they did not seek discounts for the drugs of managed care patients due to compliance challenges.
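To illustrate the compliance risk at issue: a duplicate discount occurs when a manufacturer both sells a drug at the 340B price and pays a Medicaid rebate on the same unit. A minimal sketch with hypothetical per-unit amounts:

# Hypothetical per-unit amounts for one drug.
list_price = 100.00
price_340b = 75.00       # discounted price paid by the covered entity
medicaid_rebate = 25.00  # rebate owed to the state on a Medicaid claim

# If a 340B-purchased drug is also reported for a Medicaid rebate, the
# manufacturer gives up both amounts on the same unit: a duplicate discount.
discount_340b = list_price - price_340b
exposure = discount_340b + medicaid_rebate
print(exposure)  # 50.0 per unit, rather than the 25.0 intended by either program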
Federal internal control standards related to control activities and monitoring state that agencies should 1) implement control activities through policies, such as by determining the necessary policies based on the objectives and related risks for the operational process; and 2) establish and operate monitoring activities to monitor the internal control system and evaluate results, such as by establishing and operating monitoring activities that are built into each entity’s operations, performed continually, and responsive to change. In addition, federal law directs the agency to develop detailed guidance describing methodologies and options for avoiding duplicate discounts. Until HRSA develops guidance and includes an assessment of the potential for duplicate discounts in Medicaid managed care as part of its audits, the agency does not have assurance that covered entities’ efforts are effectively preventing noncompliance. As a result, manufacturers are at risk of being required to erroneously provide duplicate discounts for Medicaid prescriptions. Audit closure process does not ensure all identified issues of noncompliance are addressed. Under HRSA’s audit procedures, covered entities with audit findings are required to 1) submit corrective action plans to HRSA that indicate that the entities will determine the full scope of any noncompliance (beyond the sample of prescriptions reviewed during an audit); 2) outline the steps they plan to take to correct findings of noncompliance, including any necessary repayments to manufacturers; and 3) specify the timelines for implementing the corrective action plans. HRSA closes the audit when a covered entity submits a letter attesting that its corrective action plan, including its assessment of the full scope of noncompliance, has been implemented and any necessary repayments to manufacturers have been completed. However, we identified two specific deficiencies in HRSA’s approach. First, although HRSA requires that covered entities determine the full scope of noncompliance found in audits, it does not provide guidance as to how entities should make this assessment. Specifically, HRSA does not specify how far back in time covered entities must look to see if any related noncompliance occurred and instead, relies on each entity to make this determination. For example, a document from a fiscal year 2017 audit revealed that a covered entity that had participated in the 340B Program for 3 years only reviewed 5 months of claims to determine whether any other instances of diversion had occurred, diminishing the likelihood that its efforts identified the full scope of noncompliance. Additionally, until April 2018, HRSA did not require covered entities that were audited to communicate the methodology used to assess the full scope of noncompliance, or the findings of their assessments, including how many or which manufacturers were due repayment. Beginning April 1, 2018, HRSA requires covered entities subject to targeted audits to document their methodology for assessing the full scope of noncompliance. However, as previously noted, only 10 percent of the 200 audits HRSA currently conducts each year are targeted audits. Consequently, the vast majority of covered entities audited are not required to provide HRSA with information on their methodology for assessing the full scope of noncompliance. Furthermore, HRSA officials told us that they believe determining the scope of noncompliance is a matter between the covered entities and manufacturers. 
Thus, HRSA relies on manufacturers to determine the adequacy of a covered entity's effort to assess the full scope of noncompliance. However, covered entities only contact the manufacturers that they determine were affected by the noncompliance based on the methodology they choose to apply; thus, it is unclear how manufacturers not contacted would be in a position to negotiate an acceptable assessment of the scope of noncompliance and any applicable repayment. Federal internal control standards related to control activities state that agencies should implement control activities through policies, such as by documenting policies in the appropriate level of detail to allow management to effectively monitor the control activity. As HRSA does not provide guidance on how covered entities are to assess the full scope of noncompliance and does not review most entities' methodology for making such assessments, the agency does not have reasonable assurance that entities have adequately identified all instances of noncompliance. Second, HRSA generally relies on each covered entity to self-attest that all audit findings have been addressed and that the entity is now in compliance with 340B Program requirements. Beginning April 1, 2018, HRSA requires the 10 percent of covered entities that are subject to targeted audits to provide documentation that they implemented their corrective action plans prior to HRSA closing the audits. However, it still relies on the remaining 90 percent of audited covered entities to self-attest to their compliance with program requirements. HRSA officials told us they believe that a covered entity providing a description of the corrective actions is sufficient, and that the self-attestation of corrective action plan implementation provides HRSA with the information necessary to close the audit. However, aside from the self-attestation, HRSA's only mechanism to ensure that the majority of audited covered entities have implemented their corrective action plans is to re-audit the entities—in other words, subject the entity to a targeted audit. To date, the agency told us that it has re-audited 21 covered entities, and based on those re-audits, determined that 1 entity did not fully implement its corrective action plan from the original audit. However, we found that, of the 19 re-audited covered entities for which results were available, 12 had findings of noncompliance in their second audits similar to those identified in their original audits (e.g., diversion findings in both audits); according to information HRSA provided to us, 3 of these repeat findings were caused by the same underlying issue. Federal internal control standards for monitoring specify that agencies should establish and operate monitoring activities to monitor the internal control system and evaluate the results, for example by using ongoing monitoring to obtain reasonable assurance of the operating effectiveness of the service organization's internal controls over the assigned process. By only reviewing evidence of corrective action plan implementation for the limited number of covered entities subject to targeted audits, HRSA does not have reasonable assurance that the majority of covered entities audited have corrected the issues identified in the audit, and are not continuing practices that could lead to noncompliance, thus increasing the risk of diversions, duplicate discounts, and other violations of 340B Program requirements.
HRSA's Guidance for Covered Entities' Oversight of Contract Pharmacies Lacks Specificity

HRSA guidance for covered entities on their oversight of contract pharmacies lacks specificity and thus provides entities with considerable discretion on the scope and frequency of their oversight practices. Specifically, HRSA's 2010 guidance on contract pharmacy services specifies that covered entities are responsible for overseeing their contract pharmacies to ensure that drugs the entity distributes through them comply with 340B Program requirements, but states that, "the exact method of ensuring compliance is left up to the covered entity." The guidance also states that, "annual audits performed by an independent, outside auditor with experience auditing pharmacies are expected," but HRSA officials told us that covered entities are not required to conduct independent audits and instead are expected to do some form of periodic oversight of their contract pharmacies. Thus, according to HRSA officials, if a covered entity indicates that it has performed oversight in the 12 months prior to a HRSA audit, then HRSA considers the entity to have met HRSA's standards for conducting contract pharmacy oversight regardless of what the oversight encompassed. Due, at least in part, to a lack of specific guidance, we found that some covered entities performed minimal contract pharmacy oversight.

Officials from a grantee reported auditing claims of 5 randomly selected patients quarterly, despite treating approximately 900 patients each month.

Officials from a critical access hospital that serves about 21,000 patients a year at its outpatient clinics reported that the annual independent audit of their hospital system reviewed five claims.

Officials from two entities reported that they did not contract for an independent audit of their 340B Program, despite HRSA's expectation to do so.

Additionally, of the 20 covered entities whose audits we reviewed, 6 had no documented processes for conducting contract pharmacy oversight. The identified noncompliance at contract pharmacies raises questions about the effectiveness of covered entities' current oversight practices. Specifically, 66 percent of the 380 diversion findings in HRSA audits involved drugs distributed at contract pharmacies, and 33 of the 813 audits for which results were available had findings for lack of contract pharmacy oversight. However, the number of contract pharmacy oversight findings may be limited by the fact that officials from HRSA's contractor said that its auditors rely on verbal responses from entity officials about any internal review or self-audits conducted by the entity. This is despite the fact that HRSA officials told us that the agency requires auditors to review documentation of covered entities' oversight activities. Federal internal control standards related to control activities state that agencies should implement control activities through policies, such as by documenting the responsibility for an operational process's objectives and related risks, and control activity design, implementation, and operating effectiveness. The standards also specify that management should periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving its objectives or addressing related risks.
As a result of the lack of specific guidance and its numerous audit findings of noncompliance, HRSA does not have assurance that covered entities' contract pharmacy oversight practices are sufficiently detecting 340B noncompliance.

Conclusions

The 340B Program provides covered entities with discounts on outpatient drugs and the ability to generate revenue on drugs purchased under the program. Use of contract pharmacies enables covered entities to increase the use of 340B drugs by expanding their distribution networks, thereby increasing the volume of 340B drugs dispensed and generating associated savings and revenue. The expansion of contract pharmacies presents an opportunity for entities to fill more prescriptions with discounted 340B drugs, but it also increases potential risks to the 340B Program, such as risks related to diversion and duplicate discounts. Although covered entities and HRSA have taken steps to ensure that 340B Program requirements are being met at contract pharmacies, HRSA's audits continue to identify instances of noncompliance. As currently structured, weaknesses in HRSA's oversight impede its ability to ensure compliance with 340B Program requirements at contract pharmacies. HRSA cannot ensure that its limited number of audits target covered entities with the most complex 340B programs, and thus the greatest risk of noncompliance, because the agency does not have complete data on entities' contract pharmacy arrangements. Additionally, HRSA's audit process does not adequately identify compliance issues, nor does it ensure that identified issues are corrected. HRSA's audits do not assess compliance with a key 340B Program requirement (the prohibition regarding duplicate discounts) as it relates to Medicaid managed care, and HRSA does not provide audited entities with guidance for determining the full scope of noncompliance, which reduces the effectiveness of HRSA's audits in identifying drug diversion and duplicate discounts. Moreover, where audits identify instances of noncompliance, HRSA's process does not confirm that all covered entities successfully correct the deficiencies and take steps to prevent future noncompliance. Although HRSA made improvements to its process for targeted audits during the course of our review, the agency does not require most covered entities subject to an audit to provide evidence of corrective actions taken. Moreover, the lack of specificity in HRSA's guidance to covered entities on the methods through which they should ensure compliance may impede the effectiveness of entities' oversight. For example, without guidance instructing covered entities how to prevent duplicate discounts in Medicaid managed care, entities are left to individually navigate the policies and practices of states and private insurers. Furthermore, by not clearly communicating expectations for covered entities' oversight of their contract pharmacies, HRSA faces the risk that instances of noncompliance, such as diversion, at contract pharmacies will not be identified and addressed. As the 340B Program continues to grow, it is essential that HRSA address these shortcomings.

Recommendations for Executive Action

We are making the following seven recommendations to HRSA:

The Administrator of HRSA should require covered entities to register contract pharmacies for each site of the entity for which a contract exists.
The Administrator of HRSA should require covered entities to register contract pharmacies for each site of the entity for which a contract exists. (Recommendation 1)

The Administrator of HRSA should issue guidance to covered entities on the prevention of duplicate discounts under Medicaid managed care, working with CMS as HRSA deems necessary to coordinate with guidance provided to state Medicaid programs. (Recommendation 2)

The Administrator of HRSA should incorporate an assessment of covered entities' compliance with the prohibition on duplicate discounts, as it relates to Medicaid managed care claims, into its audit process after guidance has been issued and ensure that identified violations are rectified by the entities. (Recommendation 3)

The Administrator of HRSA should issue guidance on the length of time covered entities must look back following an audit to identify the full scope of noncompliance identified during the audit. (Recommendation 4)

The Administrator of HRSA should require all covered entities to specify their methodology for identifying the full scope of noncompliance identified during the audit as part of their corrective action plans, and incorporate reviews of the methodology into its audit process to ensure that entities are adequately assessing the full scope of noncompliance. (Recommendation 5)

The Administrator of HRSA should require all covered entities to provide evidence that their corrective action plans have been successfully implemented prior to closing audits, including documentation of the results of the entities' assessments of the full scope of noncompliance identified during each audit. (Recommendation 6)

The Administrator of HRSA should provide more specific guidance to covered entities regarding contract pharmacy oversight, including the scope and frequency of such oversight. (Recommendation 7)

Agency Comments and Our Evaluation

HHS provided written comments on a draft of this report, which are reproduced in app. II, and technical comments, which we have incorporated as appropriate. In its written comments, HHS concurred with four of our seven recommendations, did not concur with three of our recommendations, and stated that it had concerns with some of the other information in our report. In concurring with four of our recommendations, HHS stated that HRSA is making changes to its audit process to strengthen oversight of the 340B Program.

Regarding our recommendation related to guidance on duplicate discounts, HHS concurred, but commented that the recommendation did not account for the critical role that CMS would play in its successful implementation. We agree that CMS would play an important role in ensuring compliance with the prohibition on duplicate discounts in Medicaid managed care, which is why we recommended that HRSA coordinate with CMS on the guidance. HHS indicated that HRSA and CMS are strategizing on effective ways to address this issue.

HHS also concurred with our recommendations to issue guidance related to identifying the full scope of noncompliance and covered entities' oversight of their contract pharmacies, although it noted that HRSA would face challenges in issuing guidance related to areas where it does not have explicit regulatory authority. While we recognize that HRSA's authority to issue regulations governing the 340B Program may be limited, our recommendations were focused on HRSA clarifying certain program requirements through whatever format the agency deems appropriate.
Since the establishment of the 340B Program, HRSA has used interpretive guidance and statements of policy to provide guidance to covered entities regarding compliance with program requirements. HRSA has also used certain of its audit procedures, such as the template provided to covered entities for the development of corrective action plans, to provide such clarifications. Our recommendations are intended to expand the availability of information HRSA provides to covered entities to help them improve compliance with existing program requirements. As such, we continue to believe that further clarification, whether provided as interpretive guidance, audit procedures, or another format, is necessary to help ensure compliance with program requirements.

Among the recommendations with which HHS did not concur was our recommendation to require covered entities to register contract pharmacies for each site of the entity for which a contract exists. HHS stated that its current registration process is responsive to our concerns for all covered entity types other than hospitals and health centers. However, as we note in the report, hospitals and FQHCs are typically the covered entity types that have multiple sites, and are generally more likely to have contract pharmacies. HHS cited administrative burden for both covered entities and HRSA as a reason not to require covered entities to provide more complete information about contract pharmacy arrangements. However, given that HRSA requires covered entities to register both their sites and their contract pharmacies with the agency, it is unclear why there would be significant additional burden for covered entities to indicate which of the previously registered sites had contracts with which contract pharmacies. It is also important to note that contract pharmacy use by covered entities is voluntary, and covered entities that choose to have contract pharmacies are required to oversee those pharmacies to ensure compliance with 340B Program requirements. Therefore, the use of contract pharmacies inherently comes with additional administrative responsibilities for the covered entity, and we believe that the requirement to register each contract pharmacy arrangement with HRSA should present limited additional burden on covered entities.

Rather than implementing our recommendation, HHS stated that HRSA will make changes to its audit selection process; HRSA will assume that all contract pharmacies registered with the parent site would also be used by all sites of the covered entity prior to selecting entities for risk-based audits. Although this may be a good step forward, it does not provide information on the actual number of contract pharmacy arrangements for each covered entity. As such, we continue to believe that HRSA needs more complete information on contract pharmacy arrangements to best target its limited number of audits to covered entities with the most complex 340B programs. This is also important information to provide manufacturers to help ensure that 340B discounted drugs are only provided to pharmacies on behalf of a covered entity site with a valid 340B contract with that site.

HHS also did not concur with our two recommendations to require covered entities to specify their methodologies for identifying the full scope of noncompliance identified during their audits as part of their corrective action plans, and to provide evidence that these plans have been successfully implemented prior to HRSA closing audits.
In its response, HHS noted that on April 1, 2018, HRSA implemented these requirements for entities subject to targeted audits (including re-audits), which represent 10 percent of all entities audited. However, HRSA indicated that implementing these requirements for all covered entities that are audited would create a significant burden for these entities. As we previously noted, HRSA already requires covered entities with audit findings to determine the full scope of noncompliance and to submit corrective action plans. Thus, it is unclear how requiring covered entities to include written descriptions of their methodologies for identifying the full scope of noncompliance, which should already be formulated, and to provide evidence that the corrective actions that entities developed have been implemented, would create significant additional burden for these entities.

HHS also expressed concern that these additional steps would significantly delay the audit process and repayments to manufacturers. We recognize that reviewing these documents may create some additional work for HRSA and possibly require additional time to close audits. However, we believe this additional work and time is necessary for the audits to be effective at adequately identifying compliance issues and ensuring that those issues are corrected. Furthermore, these additional actions could reduce the need for re-audits, which are burdensome in terms of cost and time for both the covered entity and HRSA.

Finally, HHS also expressed concerns about some of the other information included in the draft report. HHS stated that disclosing actual fees paid by covered entities to pharmacies and TPAs could cause disruptions in the drug pricing market and fluctuations in fees entities pay. Our report provides fees for a small and nongeneralizable sample of contracts, covered entities, and TPAs. For example, we provide contract pharmacy fees for 30 of the thousands of contracts that exist between covered entities and pharmacies. It is unclear how this information could cause disruptions in the drug pricing market or lead to fluctuations in fees covered entities may pay, and HHS did not provide any evidence to support its assertion. Additionally, HHS has raised questions about the effect of the 340B Program on drug pricing. As such, we believe that our discussion of fees brings enhanced transparency to the 340B Program, and provides Congress with important information it requested to gain a better understanding of the program and enhance its oversight.

Regarding the distance between contract pharmacies and covered entities, HHS noted that the longest distance was for a specialty pharmacy that was registered for 17 days. As noted in our scope and methodology, our analysis was of covered entities and contract pharmacies participating as of July 1, 2017. Additionally, there were other contract pharmacy arrangements of similarly long distances. HHS also expressed concern that the draft report did not note that such specialty pharmacies may be needed due to restricted distribution by a manufacturer, which would be outside a covered entity's control. In our report, we noted that the 340B database does not provide information on why a covered entity may choose to contract with a pharmacy that is located a long distance away. However, the report does include some potential reasons HRSA provided us as to why this may occur.
HHS also commented that our table on the number and percent of covered entities audited does not fully reflect HRSA's auditing efforts because it does not include the number of entity sites and contract pharmacies included within each audit. However, HRSA's audits of covered entities generally do not include visits to multiple covered entity sites, or all contract pharmacies that distribute 340B drugs on a covered entity's behalf. Additionally, while the audits include a review of a sample of 340B drugs distributed, that sample may not include prescriptions written at, or dispensed from, all of the covered entity's sites or contract pharmacies. As a result, information in our report highlights the number of entities that were audited.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of HRSA, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at DraperD@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix III.

Appendix I: Summary of Fees Included in 340B Pharmacy Contracts Reviewed

Table 8 provides a brief description of the fees that covered entities pay pharmacies with which they contracted to dispense 340B drugs based on our review of 30 contracts.

Appendix II: Comments from the Department of Health and Human Services

Appendix III: GAO Contacts and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, Michelle Rosenberg (Assistant Director), N. Rotimi Adebonojo (Analyst in Charge), Jennie Apter, George Bogart, Amanda Cherrin, David Lichtenfeld, and Dan Ries made key contributions to this report. Also contributing were Julianne Flowers and Vikki Porter.
Why GAO Did This Study

Covered entities can provide 340B drugs to eligible patients and generate revenue by receiving reimbursement from patients' insurance. The number of pharmacies covered entities have contracted with has increased from about 1,300 in 2010 to nearly 20,000 in 2017. GAO was asked to provide information on the use of contract pharmacies. Among other things, this report: 1) describes financial arrangements selected covered entities have with contract pharmacies; 2) describes the extent that selected covered entities provide discounts on 340B drugs dispensed by contract pharmacies to low-income, uninsured patients; and 3) examines HRSA's efforts to ensure compliance with 340B Program requirements at contract pharmacies.

GAO selected and reviewed a nongeneralizable sample of 30 contracts between covered entities and pharmacies, 20 HRSA audit files, and 55 covered entities to obtain variation in the types of entities and other factors. GAO also interviewed officials from HRSA and 10 covered entities.

What GAO Found

The 340B Drug Pricing Program (340B Program), which is administered by the U.S. Department of Health and Human Services' (HHS) Health Resources and Services Administration (HRSA), requires drug manufacturers to sell outpatient drugs at a discount to covered entities so that their drugs can be covered by Medicaid. Covered entities include certain hospitals and federal grantees (such as federally qualified health centers). About one-third of the more than 12,000 covered entities contract with outside pharmacies—contract pharmacies—to dispense drugs on their behalf.

GAO's review of 30 contracts found that all but one contract included provisions for the covered entity to pay the contract pharmacy a flat fee for each eligible prescription. The flat fees generally ranged from $6 to $15 per prescription, but varied by several factors, including the type of drug or patient's insurance status. Some covered entities also agreed to pay pharmacies a percentage of revenue generated by each prescription.

Thirty of the 55 covered entities GAO reviewed reported providing low-income, uninsured patients discounts on 340B drugs at some or all of their contract pharmacies. Of the 30 covered entities that provided discounts, 23 indicated that they pass on the full 340B discount to patients, resulting in patients paying the 340B price or less for drugs. Additionally, 14 of the 30 covered entities said they determined patients' eligibility for discounts based on whether their income was below a specified level, 11 reported providing discounts to all patients, and 5 determined eligibility for discounts on a case-by-case basis.

GAO found weaknesses in HRSA's oversight that impede its ability to ensure compliance with 340B Program requirements at contract pharmacies, such as:

HRSA audits do not fully assess compliance with the 340B Program prohibition on duplicate discounts for drugs prescribed to Medicaid beneficiaries. Specifically, manufacturers cannot be required to provide both the 340B discount and a rebate through the Medicaid Drug Rebate Program. However, HRSA only assesses the potential for duplicate discounts in Medicaid fee-for-service and not Medicaid managed care. As a result, it cannot ensure compliance with this requirement for the majority of Medicaid prescriptions, which occur under managed care.

HRSA requires covered entities that have noncompliance issues identified during an audit to assess the full extent of noncompliance.
However, because HRSA does not require all the covered entities to explain the methodology they used for determining the extent of the noncompliance, it does not know the scope of the assessments and whether they are effective at identifying the full extent of noncompliance.

HRSA does not require all covered entities to provide evidence that they have taken corrective action and are in compliance with program requirements prior to closing the audit. Instead, HRSA generally relies on each covered entity to self-attest that all audit findings have been addressed and that the entity came into compliance with 340B Program requirements.

Given these weaknesses, HRSA does not have a reasonable assurance that covered entities have adequately identified and addressed noncompliance with 340B Program requirements.

What GAO Recommends

GAO is making seven recommendations, including that HRSA's audits assess for duplicate discounts in Medicaid managed care, and HRSA require information on how entities determined the scope of noncompliance and evidence of corrective action prior to closing audits. HHS agreed with four of the recommendations, but disagreed with three recommendations, which GAO continues to believe are warranted to improve HRSA's oversight as explained in the report.
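To make the fee arrangements described under "What GAO Found" concrete, the sketch below works through the arithmetic of the two contract pharmacy fee structures GAO observed. It is illustrative only and is not drawn from any reviewed contract: the reimbursement amount, 340B acquisition cost, and 20 percent revenue share are hypothetical placeholders, while the $6 and $15 flat fees are the endpoints of the range reported above.

```python
# Illustrative sketch (not from the GAO report): how the two contract
# pharmacy fee structures described above affect a covered entity's net
# revenue on a single 340B prescription. All dollar amounts are
# hypothetical placeholders except the $6-$15 flat-fee range, which is
# the range GAO observed in the 30 contracts it reviewed.

def net_revenue_flat_fee(reimbursement, acquisition_cost, flat_fee):
    """Entity keeps the spread between reimbursement and the 340B
    acquisition cost, minus a fixed per-prescription dispensing fee."""
    return reimbursement - acquisition_cost - flat_fee

def net_revenue_percentage(reimbursement, acquisition_cost, pct_of_revenue):
    """Entity instead pays the pharmacy a share of the revenue the
    prescription generates."""
    fee = reimbursement * pct_of_revenue
    return reimbursement - acquisition_cost - fee

# Hypothetical prescription: $100 insurance reimbursement, $40 340B price.
reimbursement, acquisition_cost = 100.00, 40.00

for fee in (6.00, 15.00):  # endpoints of the flat-fee range GAO observed
    print(f"flat ${fee:.2f} fee -> entity nets "
          f"${net_revenue_flat_fee(reimbursement, acquisition_cost, fee):.2f}")

# Hypothetical 20 percent-of-revenue arrangement for comparison.
print(f"20% of revenue  -> entity nets "
      f"${net_revenue_percentage(reimbursement, acquisition_cost, 0.20):.2f}")
```

One point the arithmetic makes plain: a flat fee is insensitive to the drug's price, whereas a percentage-of-revenue fee grows with the reimbursement amount, so the two structures can favor different parties depending on the mix of low-cost and high-cost drugs dispensed, consistent with the report's observation that fees varied by type of drug and insurance status.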
Background

SAMHSA defines a peer provider as "a person who uses his or her lived experience of recovery from mental illness and/or addiction, plus skills learned in formal training, to deliver services in behavioral health settings to promote mind-body recovery and resilience." Generally, peer providers are known as "peer support specialists" in mental health settings. Peer support specialists are distinguished from traditional mental health service providers by their lived experience recovering from mental illness. People with serious mental illness generally receive longer term and more intensive treatment—either in a primary care or specialty setting—and peer support specialists may play a key role in the recovery process for these individuals.

Peer support specialists work in a variety of settings, including clinical settings such as hospital emergency rooms, independent peer-run organizations, and on support teams in housing agencies that help eligible low-income families and persons with disabilities find rental housing. They can also deliver a varied set of services, including sharing of experience, goal-setting, developing coping and problem-solving strategies to help individuals self-manage their mental illnesses, and linking individuals to desired resources like transportation or volunteer opportunities. Importantly, the services provided by peer support specialists complement, but do not replace, clinical services.

Peer Support Specialist Certification

As with other behavioral health specialties, the requirements for certifying peer support specialists vary by state, and certification bodies range from state government entities to independent non-profit organizations. The development of state-level peer support specialist certification programs was largely driven by another HHS agency, the Centers for Medicare & Medicaid Services, which in 2007 recognized peer support services as an evidence-based mental health model of care and established minimum requirements for states seeking federal Medicaid reimbursement for peer support services. One of these requirements is that peer support specialists complete a training and certification program as defined by the state. Another requirement is that peer support specialists receive supervision from a "competent mental health professional," which may be provided through direct oversight or periodic care consultation. The state defines the amount, scope, and duration of the supervision as well as who is considered a competent mental health professional.

States have used the flexibility allowed by the Centers for Medicare & Medicaid Services to create their own programs to certify peer support specialists. Some of these state peer support specialist programs are assessment-based certificate programs—programs that provide training and then evaluate whether applicants achieved the learning objectives of that training through an examination in order to receive certification. Other programs are professional certification programs—programs that evaluate applicants against predetermined standards of knowledge, skills, or competencies. In professional certification programs, the certifying body is independent from, and is not responsible for, the training process.

SAMHSA and Peer Support Specialists

SAMHSA supports the peer support specialist field through training, technical assistance, and grant funding.
For example:

From 2009 to 2014, SAMHSA partnered with stakeholders, such as the National Association for State Mental Health Program Directors, to gather nationally-recognized experts and stakeholders from across the United States for an annual meeting. These meetings, known as the "Pillars of Peer Support," aimed to identify and create consensus around factors that facilitate the use of peer support services in state mental health systems of care.

In 2015, through a technical assistance project, SAMHSA developed core competencies defining the critical knowledge, skills, and abilities needed by anyone who provides peer support services. According to officials, the core competencies were developed in response to inconsistencies in the training and certification of peer support specialists that emerged as states began to develop their programs. SAMHSA's core competencies reflected the five foundational principles of peer support identified by consumers and other stakeholders: services should be (1) recovery oriented; (2) person-centered; (3) voluntary; (4) relationship-focused; and (5) trauma informed. In addition to developing the core competencies, the project provides trainings and offers technical assistance to states, counties, providers, and other stakeholders.

Funding for Peer Support Specialist Programs

Although Medicaid provides the largest share of funding for state mental health agencies, followed by state funds, SAMHSA also provides grant funding that states can use for both the service and administrative components of their peer support specialist programs. For example, SAMHSA's Center for Mental Health Services funds peer support programs through its administration of the Community Mental Health Services Block Grant, which provides flexible funding to the states to support services and related support activities for individuals with serious mental illness. While the Community Mental Health Services Block Grant accounted for less than 1 percent of total revenues received by state mental health agencies in fiscal year 2015, the flexibility of the funds allows them to be expended to pay for services that Medicaid and other health insurance will not pay for, such as training and developing standards. In fiscal year 2018, 40 states and the District of Columbia reported using the funds from the Community Mental Health Services Block Grant for peer support.

SAMHSA also provides discretionary grants directly to domestic nonprofit organizations that aim to expand the capacity of peer support providers. These discretionary grants, including the Statewide Consumer Network Program grants, have helped establish recovery-oriented, consumer-driven services at the state level. SAMHSA also provides block and discretionary grants focused on substance use through its Center for Substance Abuse Treatment and Center for Substance Abuse Prevention, both of which have been used for peer recovery coaches.

While most states use SAMHSA grants and state general funds to develop and sustain their peer support programs, as of 2016, 41 states and the District of Columbia were receiving federal Medicaid reimbursement for the services provided by peer support specialists. Georgia was the first state to receive federal Medicaid payment for peer support services in 1999, and additional state Medicaid programs began to provide coverage of peer support after the Centers for Medicare & Medicaid Services issued guidance in 2007 on the requirements for federal payment for such services.
In addition to meeting the minimum requirements for peer support services—including training and certification, supervision, and care coordination—states that bill for peer support services under the Medicaid program must comply with all Medicaid regulations and policies.

Selected State Programs Generally Use Similar Processes for Certifying Peer Support Specialists, with Some Variation in Program Requirements

Programs in all six states that we reviewed generally use the same process for screening, training, and ultimately certifying peer support specialists. See figure 1 for an illustrated example of this process. Although the six states' programs generally use the same process for certifying peer support specialists, as of May 2018 the programs varied in the specific requirements applicants must meet for each of the three stages of certification: screening, training, and certifying. See appendix II for detailed information on state program requirements.

Screening Requirements

To determine applicants' eligibility for peer support specialist certification, all six state programs we reviewed have screening requirements applicants must meet when applying for certification. These screening requirements include requirements related to education, lived experience with mental illness, prior work or volunteer experience, and letters of recommendation. The extent to which each screening requirement was used by each state varied, and the specifics of each requirement also varied across the six programs we reviewed (see fig. 2).

Education. Five of the six states that we reviewed required a high school diploma or equivalent. Officials from four of these states indicated that this level of education was necessary given the skills needed by peer support specialists, such as reading comprehension and communication skills. In contrast, Oregon officials told us that they did not require a high school diploma or equivalent; however, the officials noted that most of their peer support specialists have at least a high school education.

Mental health experience. While all six state programs we reviewed required applicants to have lived experience in recovery from mental illness, the programs implemented this requirement in different ways. Some required a mental health diagnosis, while others required a minimum length of recovery time or required applicants to have received services for a mental illness. Texas officials said they did not have a specified length of recovery requirement due to the difficulty of pinpointing the specific time a person began his or her recovery; rather, Texas required applicants to self-identify as having experience living in recovery.

Prior work or volunteer experience. Three of the six state programs required applicants to have prior relevant work or volunteer experience, although the amount of experience required varied. For example, to start the certification process, applicants in Michigan must be currently working in a peer support specialist role and have been in that position for at least 10 hours a week for the past 3 months. In contrast, Georgia officials told us that they found this requirement to be a barrier for some individuals who have not been able to work; therefore, Georgia did not have this requirement.

Letters of recommendation. Three of the six states required letters of recommendation as another way to assess applicants' readiness to become peer support specialists.
State officials stressed that the letter should be a personal, work, or volunteer reference, rather than a clinical reference.

Training Requirements

To ensure the competence of the peer support specialist workforce, all six state programs we reviewed required applicants to complete an initial training, which we refer to as "core training." The core training is the initial training provided to applicants seeking to become certified peer support specialists and, while the curricula may vary by state or training vendor, its purpose is to convey the skills and competencies that peer support specialists need to enter the workforce. Topics covered during the training typically include ethics, recovery, sharing the recovery story, and communication skills. (See app. III for an example of a peer support specialist core training schedule.) While all six states require applicants to attend core training, the length, cost, and curricula of these trainings varied across the states, as figure 3 shows.

Length of training. All six programs required at least 40 hours of in-person core training, with Georgia and Pennsylvania requiring more than 70 hours. The six states required at least a week of core training to allow sufficient time to cover a core curriculum of general peer-related information, such as the meaning and role of peer support services, and at times including role play, in an effort to develop the interpersonal skills needed for effective peer leadership.

Cost of training. All states but Florida charged applicants fees to attend training. Training fees varied by state, ranging from $85 in Georgia to $1,400 in Pennsylvania. These fees varied because what they covered also varied. For example, state program officials from Michigan told us that, among other things, the $600 fee covers the price of lodging for the core training, consultant fees, materials, and college credit hours that can be earned by attending the training and the graduation ceremony. In contrast, state program officials from Georgia told us that the $85 they charge covered the cost of producing the course manual and that all other costs are covered by the state.

Training curriculum. Four of the six state programs had their own approved core training curriculum to be used for applicants, while the remaining two programs in Oregon and Pennsylvania allowed applicants to select from approved training vendors—each of which had its own training curricula.

Certification Requirements

To complete the certification process, all state programs we reviewed assessed applicants' knowledge of the concepts taught in the core training through an examination. The applicants also had to sign and abide by a code of ethics. However, as of May 2018, the state programs varied as to who administered the certification examination, the type of code of ethics applicants were required to sign, the frequency with which certifications had to be renewed, and the continuing education requirements certified peer support specialists had to meet. (See fig. 4.)

Examination. Four of the states we reviewed administered a single, statewide exam that applicants must pass before becoming certified, while in the remaining two states applicants had to pass an exam administered by the approved training vendor. The exams included multiple choice or essay questions.
One training vendor responsible for conducting training in at least two states told us that the vendor included an oral evaluation component as part of the exam, in light of the communication and interpersonal skills needed for the peer role. Similarly, a state program official from Pennsylvania told us that observational assessments are also used to determine an applicant's skills and knowledge.

Code of ethics. As in other health professions, peer support specialists typically must agree to abide by a code of ethics. All six states we reviewed required peer support specialists to sign a code of ethics before becoming certified. Of the six states, the codes of ethics in Pennsylvania, Georgia, Michigan, and Texas were unique to peer support specialists, while Florida and Oregon used codes of ethics that also applied to other workforces, such as substance use disorder professionals and community health workers. Relatedly, five of the six states also had formal processes in place to investigate and take action in the event that a peer violated the code of ethics by, for example, disclosing confidential information. These actions range from reprimand to revocation of certification.

Certification renewal. Three of the six states we reviewed required peer support specialists, once certified, to renew their certifications every 1 to 3 years, while the remaining three states awarded lifetime certifications.

Continuing education. Five of the six states required certified peer support specialists to meet continuing education requirements, which ranged from approximately 10 hours per year to 36 hours every 2 years. According to some state program officials, requiring continuing education ensures continued competence in the field of peer support or provides specialized training, such as training for working with specific populations (such as veterans) or incorporating additional approaches or skill sets (such as training about the Wellness Recovery Action Plan).

State Officials Generally Cited Six Leading Practices for Certifying Peer Support Specialists

Officials from peer support specialist programs in selected states generally cited six leading practices for certifying peer support specialists. The 10 stakeholders we spoke with—representing the perspectives of researchers, training or consulting organizations, associations, and advocacy organizations—generally agreed that the six identified leading practices should be incorporated into programs that certify peer support specialists because the practices can lead to stronger quality of services for individuals with serious mental illnesses.

Leading practice one: Systematic screening of applicants. Program officials in five of the six selected states cited the importance of systematic or detailed screening of applicants to become peer support specialists as a leading practice. All six state programs assessed applicants through a variety of approaches, including (1) using screening questions about the applicants' understanding of the peer role, (2) conducting telephone interviews with applicants, (3) reviewing applications with a standardized tool or scoring rubric, and (4) having multiple people review applications for objectivity. Eight of the 10 stakeholders we interviewed confirmed that this was a leading practice, though some cautioned that these requirements should not unnecessarily exclude individuals with unique backgrounds or little work history.
The Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury, which in 2011 explored how to most effectively apply peer support in the military environment as part of its ongoing mission, has similarly identified systematic screening with defined selection criteria as a best practice for peer support programs.

While work or volunteer experience can be used as a screening requirement for applicants and was required by three of the states we reviewed, four of the stakeholders we interviewed commented that meeting these requirements can be challenging for individuals with a history of mental illness who may have been previously unable to enter the workforce. Research has shown that the stigma associated with mental illness is a significant barrier to work for individuals with mental illness and has shaped employer decisions about hiring or keeping a person with mental illness in the workplace. These workplace barriers, along with others, such as access to mental health treatment, contribute to the relatively low workforce participation of adults with serious mental illnesses. One stakeholder commented that peer support programs have a responsibility not to contribute to barriers in the workplace for individuals with mental illnesses. Our review shows that some of the peer support specialist programs in the six selected states are taking steps to address these barriers. For example, Florida recently changed its requirements and now provisionally certifies peer support specialists who meet all the certification requirements except for the requirement to have 500 hours of work or volunteer experience. After receiving the provisional certification, peer support specialists have 1 year to complete the work or volunteer hours necessary to upgrade to the full certification.

Leading practice two: Conducting core training in-person. Program officials from five of the six selected states cited core training that is conducted in-person, as opposed to online, as a leading practice. Three program officials told us that core training should be done in-person to foster relationship building and experiential learning to develop the interpersonal skills a certified peer support specialist needs. All six state programs had in-person core training, regardless of whether the training was run by the state program itself or through approved vendors. For example, Michigan hosts its core trainings at a retreat center where participants are encouraged to stay for the week. Michigan program officials told us that this creates a place for training participants from across the state to network, discuss how their agencies work and the types of issues they face as peer support specialists, and share best practices. SAMHSA's core competencies identify the importance of using active listening skills, understanding when to share experiences and when to listen, and using one's own recovery story to inspire hope.

All 10 stakeholders we interviewed confirmed that providing in-person training was a leading practice, though 3 commented that some of the knowledge segments could be done online. Five stakeholders we interviewed told us that observing the skills of peer support specialists during training or incorporating observation as part of the certification exam is important. One stakeholder explained that while written tests are a good measure of basic knowledge, the tests cannot fully assess the skills and competencies needed for certification.
While 2 stakeholders cited the increased costs of delivering and grading exams with an observational component as the reasons many states use written exams only, 1 stakeholder noted that including an observational component is a more accurate assessment of whether or not people have developed needed skills. Another stakeholder commented that using a written test alone may allow individuals who are good test takers to become certified, even if they lack the interpersonal skills needed to be a peer support specialist.

Leading practice three: Incorporating physical health and wellness into training or continuing education. Program officials from five of the six selected states cited the importance of emphasizing to peer support specialists that they should help others manage their physical health—in addition to their mental health—during core training or continuing education as a leading practice. All six of the selected states incorporated managing physical health conditions into their core training or continuing education. (See text box.) In these trainings, peer support specialists learn how to help others with access to needed care and prevention services, set personal health goals to promote recovery and a wellness lifestyle, and adopt healthy habits to prevent disease or lessen the impact of existing chronic health conditions. The need for physical health-related training was identified after a 2006 report found that individuals with serious mental illnesses were dying 25 years earlier than the general population, largely due to treatable medical conditions caused by modifiable risk factors, such as smoking and poor nutrition or obesity. SAMHSA identified educating peers about health, wellness, recovery, and recovery supports as a core competency. All 10 stakeholders we interviewed confirmed that emphasizing the importance of physical health was a leading practice, though 2 stakeholders commented that incorporating physical health and wellness into trainings should only be done as continuing education.

Example of Leading Practice Three: Georgia Peer Support Whole Health and Wellness

Georgia determined it was important to incorporate physical health and wellness into training for peer support specialists and was the first state to have related services—which it calls Peer Support Whole Health and Wellness—provided by certified peer support specialists covered by Medicaid. These peer support specialists—who complete additional training and are certified in Whole Health Action Management—receive medical technical support from registered nurses and are trained to work in both primary care and behavioral health settings. Georgia created the service using a SAMHSA-funded Transformation Transfer Initiative grant, which was designed to give states the opportunity to increase their efforts to make their state behavioral health delivery systems more consumer driven, among other things. The SAMHSA-Health Resources and Services Administration Center for Integrated Health Solutions adapted Georgia's training, along with a training developed by New Jersey, to publish a Whole Health Action Management Peer Support Training Participant Guide in 2015. This adapted 2-day training aims to teach peers to use a person-centered planning process to create a whole health goal and how to engage in peer support, including Whole Health Action Management peer support groups, to meet that goal.

Leading practice four: Preparing organizations to effectively use peers.
Program officials from four of the six selected states cited efforts to ready provider organizations—such as hospitals or drop-in centers—to employ certified peer support specialists as a leading practice. State program officials told us that organizational readiness includes making sure staff understand the role of peer support specialists and can provide appropriate supervision. (See text box.) Five of the selected states have developed guidance or training for supervisors of peer support specialists. Nine of the 10 stakeholders we interviewed confirmed that this was a leading practice. SAMHSA identified using supervision effectively and engaging in problem-solving strategies with a supervisor as a core competency for this workforce.

Example of Leading Practice Four: Michigan Peer Liaisons

In order to help provider organizations understand the role of peer support specialists, Michigan created an informal peer liaison role at all 46 of the local Community Mental Health Services Programs tasked with coordinating mental health services. State officials told us that these peer liaisons have telephone calls and in-person meetings to provide informal feedback on technical assistance needs and share information on how certified peer support specialists are doing in their roles and responsibilities. According to state officials, peer liaisons have helped prepare mental health agencies to work with peer support specialists and have helped the state identify what new trainings should be developed to better help peer support specialists succeed in the workplace.

Many of the stakeholders we interviewed highlighted the importance of having individuals in an organization who understand the peer support role. Eight of the stakeholders we interviewed told us that supervisors need to understand or be trained in the peer support role and skillset, with three stakeholders commenting that supervisors need to be specifically aware of the difference between peer support specialists and clinical providers. For example, to achieve this, the training and certifying organization in Texas runs a twelve-month program that helps provider organizations effectively implement peer support services. The program, which is designed as a learning community, focuses on changing organizational culture, defining and clarifying the peer support specialist role, and supervising these staff, among other things.

Relatedly, three stakeholders told us that there should be more than one peer support specialist at each organization. One stakeholder noted that having multiple peer support specialists at an agency provides built-in support and understanding of the peer role, which is important given that peer support specialists typically have the lowest level of power in an organization. Another stakeholder noted that putting a single peer support specialist in an organization can be isolating.

Leading practice five: Continuing education requirements specific to peer support. Program officials from five of the six selected states considered it a leading practice to require, after certification, peer support specialists to take continuing education that is specific to the peer support role. This is to ensure that peers maintain their competency and are aware of new developments in the field.
Five of the six selected states required certified peer support specialists to maintain their competence through continuing education, and all five of these states had a requirement that the continuing education be specific to the peer support role. (See text box.) All 10 stakeholders we interviewed confirmed that this was a leading practice. The Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury similarly identified as a best practice enabling continued learning through structured training. SAMHSA identified seeking opportunities to increase knowledge and skills of peer support as a core competency for peer support specialists.

Example of Leading Practice Five: Pennsylvania Continuing Education Requirement

As an added step to ensure that the peer support specialist workforce is competent, Pennsylvania places some of the burden on provider agencies for ensuring that certified peer support specialists meet continuing education requirements. The state requires its licensed provider agencies to develop a staff training plan to ensure that each certified peer specialist receives the continuing education they need. Pennsylvania also requires these agencies to provide opportunities for certified peer specialists to network with other certified peer specialists both within and outside the agency. The state monitors compliance with these requirements through annual inspections. State officials told us that this requirement serves as a safety net and assures them that certified peer support specialists are up to date in their training.

Leading practice six: Engaging peers in the leadership and development of certification programs. Program officials from four of the six selected states cited having certified peer support specialists lead or participate in the certification process of applicants as a leading practice. State program officials told us that peers should lead in a variety of ways, including helping screen applicants, developing curricula, providing training, and serving as mentors or supervisors to other certified peer support specialists. For example, Michigan concurrently runs its continuing education courses and core training in the same location so that experienced peer support specialists can mentor new peers. Officials from all six selected states told us that certified peer support specialists in their states participate in some part of the certification process. (See text box.) The Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury similarly identified as a best practice leveraging the unique experiences and benefits peer support specialists offer as peers throughout a peer support specialist program, including in positions of leadership. All 10 stakeholders we interviewed confirmed that this was a leading practice.

Example of Leading Practice Six: Oregon Traditional Health Worker Commission

Through service on a statewide commission, peer support specialists in Oregon have a leadership role in developing the education and training requirements for certified peer support specialists and others. The Oregon Health Authority's Traditional Health Worker Commission promotes the role, engagement, and utilization of traditional health workers—health workers who are certified by the state—in Oregon's health care delivery system. The commission includes member representatives of each type of traditional health worker, including peer support specialists.
In addition to developing the education and training requirements for peer support specialists and other types of traditional health workers, the commission developed the scope of practice to be used by provider organizations that employ peer support specialists. On an ongoing basis, the commission advises the Oregon Health Authority about the traditional health worker program and ensures that the program is responsive to consumer and community health needs. Oregon state officials consider having this advisory body with representation from the peer community to be a best practice, commenting that the commission provides the hands-on knowledge that the state can then implement through policy and rules.

Agency Comments

We provided a draft of this report to HHS for review and comment. The Department did not have any comments.

We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Health and Human Services, the Secretary of the Department of Defense, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (202) 512-7114 or DeniganMacauleyM@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: List of Organizations and Individuals Interviewed

State peer support specialist programs

Stakeholders

Appendix II: Summary of Peer Support Specialist Program Screening, Training, and Certification Requirements in Selected States

The table in this appendix summarized each selected state's screening, training, and certification requirements. Entries for the recovery experience requirement included the following:

Must have lived experience with a mental illness or substance use disorder and have been in recovery for a minimum of 2 years.

Must have been in recovery for at least 1 year between diagnosis of mental illness or substance use disorder and application for training program.

Must have been diagnosed with a mental illness and been in recovery for a minimum of 1 year.

Must currently be or formerly have been receiving services for mental illness or substance use disorder.

Must currently be or formerly have been receiving services for a mental illness.

Must self-identify as being in recovery from a mental health challenge.

Entries for other requirements included a minimum of 12 months of work or volunteer experience within the last 3 years, "not required," and "2 (type unspecified)."

For the purposes of this report, we use the term "peer support specialist" to describe individuals who use their own lived experience recovering from mental illnesses to support others in their recovery; however, each state may have different titles in place for the certified role achieved through their peer support specialist programs.

Appendix III: Example of a Peer Support Specialist Core Training

The training schedule below, developed by the Appalachian Consulting Group, illustrates the content areas that may be included in core training curriculum for peer support specialists seeking certification. The Appalachian Consulting Group's curriculum was used in the first Medicaid-billable peer support specialist program in Georgia in 1999, and since then the curriculum has been used to train peer support specialists in 25 states. This training schedule is an example of the types of content that could be included in such training, and is not an endorsement of a particular training curriculum.
Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Tom Conahan (Assistant Director), Summar Corley (Analyst-in-Charge), JoAnn Martinez (Analyst-in-Charge), Kaitlin Asaly, Muriel Brown, Krister Friday, and Emily Wilson made key contributions to this report.
Why GAO Did This Study

As the peer support workforce has grown, there has been increased attention to standardizing the competencies of peer support specialists through certification. The 21st Century Cures Act included a provision for GAO to conduct a study to identify best practices related to training and certification in peer support programs in selected states that receive funding from SAMHSA. This report, among other things, describes leading practices for certifying peer support specialists identified by program officials in selected states.

GAO interviewed state program officials in six selected states and reviewed online, publicly available information about their peer support programs. GAO selected the states in part based on the state's certification program being well-established (at least 2 years old), use of SAMHSA funding for peer support, and stakeholder recommendations. The six selected states—Florida, Georgia, Michigan, Oregon, Pennsylvania, and Texas—are among the 41 states and the District of Columbia that, as of July 2016, had programs to certify peer support specialists. In addition to the state program officials, GAO interviewed SAMHSA officials and 10 stakeholders familiar with peer support specialist certification, including mental health researchers and officials from training organizations, among others.

GAO provided a draft of this report to HHS for review and comment. The Department did not have any comments.

What GAO Found

According to officials from the Substance Abuse and Mental Health Services Administration (SAMHSA) within the Department of Health and Human Services (HHS), shortages in the behavioral health workforce are a key reason that individuals with mental illnesses do not receive needed treatment. In recent years, there has been an increased focus on using peer support specialists—individuals who use their own experience recovering from mental illness to support others—to help address these shortages. Program officials GAO interviewed in selected states generally cited six leading practices for certifying that peer support specialists have a basic set of competencies and have demonstrated the ability to support others.
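To make the three-stage structure and the state-to-state variation described in this report concrete, the following is a minimal sketch, not drawn from any state's actual program materials, of how the certification requirements could be modeled as a data record. Field values come only from the report text where stated; fields the report does not specify for a given state are left as None rather than guessed.

```python
# Minimal illustrative sketch: modeling the screening -> training ->
# certification requirements described in this report as a data record.
# This is not an official data model; values below come only from the
# report text, and unspecified fields are left as None, not guessed.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CertificationProgram:
    state: str
    requires_hs_diploma: Optional[bool]       # screening stage
    requires_work_experience: Optional[bool]  # screening stage
    min_core_training_hours: int              # training stage (in person)
    training_fee_usd: Optional[float]         # training stage
    uses_state_curriculum: Optional[bool]     # training (vs. approved vendors)
    statewide_exam: Optional[bool]            # certification stage

georgia = CertificationProgram(
    state="Georgia",
    requires_hs_diploma=True,         # one of the five states requiring it
    requires_work_experience=False,   # Georgia dropped this as a barrier
    min_core_training_hours=70,       # report: "more than 70 hours"
    training_fee_usd=85.00,           # covers producing the course manual
    uses_state_curriculum=True,
    statewide_exam=None,              # not specified in the report text
)

pennsylvania = CertificationProgram(
    state="Pennsylvania",
    requires_hs_diploma=True,
    requires_work_experience=None,    # not specified in the report text
    min_core_training_hours=70,       # report: "more than 70 hours"
    training_fee_usd=1400.00,         # highest fee among the six states
    uses_state_curriculum=False,      # applicants choose approved vendors
    statewide_exam=None,              # not specified in the report text
)

for program in (georgia, pennsylvania):
    print(f"{program.state}: {program.min_core_training_hours}+ hours, "
          f"fee ${program.training_fee_usd:,.2f}")
```

Encoding the requirements this way mirrors the stage-by-stage comparison the report makes across figures 2 through 4; the record could be extended with renewal period and continuing education hours, on which the six states also varied.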
Background

RFS Greenhouse Gas Emissions Goals and Requirements

EPA states that one goal of the RFS is to reduce greenhouse gas emissions. Specifically, the RFS is designed to reduce these emissions by increasingly replacing petroleum-based fuels with biofuels that have lower associated greenhouse gas emissions released throughout their lifecycle. Some of these greenhouse gas emissions are directly released at each stage of a fuel's lifecycle, which, for biofuels, includes the emissions associated with growing the feedstock, transporting it, converting it to a biofuel, distributing the biofuel, and burning it in an engine. Other emissions are released indirectly through broad economic changes associated with increased biofuel use, such as changes in land use. The lifecycle greenhouse gas emissions from biofuels cannot be directly measured, so they are estimated using mathematical models that account for greenhouse gas emissions at each stage of the lifecycle. These models—in particular, Argonne National Laboratory's Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation model—have been used by researchers for nearly 30 years. However, the complexity of estimating the lifecycle emissions associated with biofuels and the sensitivity of the models to assumptions limit the precision of the modeled results.

The RFS established statutory greenhouse gas reduction requirements for specific types of biofuels. These types can be grouped into two broad categories—conventional biofuels and advanced biofuels—defined by the amount of reduction they are required by statute to achieve in lifecycle greenhouse gas emissions relative to the 2005 emissions baseline for gasoline or diesel.

Conventional. Conventional biofuels from new facilities must achieve greenhouse gas emissions at least 20 percent lower than traditional petroleum-based fuels, which include gasoline and diesel. The dominant conventional biofuel produced to date is corn-starch ethanol.

Advanced. Advanced biofuels must achieve lifecycle greenhouse gas emissions at least 50 percent lower than traditional petroleum-based fuels. Advanced biofuels may include a number of fuels, including fuels made from algae or sugar cane, but the category excludes ethanol derived from corn starch. This category includes the following subcategories:

Biomass-based diesel: biodiesel or renewable diesel that has lifecycle greenhouse gas emissions at least 50 percent lower than traditional petroleum-based diesel fuels.

Cellulosic: renewable fuel derived from any cellulose, hemicellulose, or lignin that is derived from renewable biomass and has lifecycle greenhouse gas emissions at least 60 percent lower than traditional petroleum-based fuels.

Types and Volumes of Biofuels to Be Blended under the RFS

The RFS established statutory requirements for the amount of biofuels that must be blended into gasoline. These amounts increase from 9 billion gallons in 2008 to 36 billion gallons in 2022. The RFS sets statutory volume requirements for each type of biofuel based on the categories described above, but EPA can waive those requirements and establish its own, if warranted. From 2010 through 2013, EPA used its waiver authority each year to reduce the volume requirement for cellulosic biofuel while keeping the total volume requirement for all biofuels at the statutory level. Starting in 2014, EPA set lower volume requirements for all advanced biofuels and lower total biofuel blending requirements.
EPA cited, among other things, inadequate domestic supply as a reason for the waivers. Since 2014, the gap between RFS requirements for advanced biofuels and EPA requirements after waivers were issued has increased. Figure 1 compares RFS statutory volumes for various types of biofuels with volumes that EPA established using the waiver authority. As of 2018, the biofuel used most often to comply with the RFS has been conventional ethanol derived from corn starch. As we reported in 2016, production of cellulosic and other advanced biofuels has not progressed as initially expected under the RFS. Although, as we reported, advanced biofuels are technologically well understood, current production is far below the volume needed to meet the statutory targets for these fuels. For example, the cellulosic biofuel blended into transportation fuel in 2015 was less than 5 percent of the statutory target of 3 billion gallons. Given current production levels, most experts we interviewed told us that advanced biofuel production cannot achieve the statutory targets of 21 billion gallons by 2022. The shortfall of advanced biofuels is the result of high production costs, despite years of federal and private research and development (R&D) efforts. The federal government has supported R&D related to advanced biofuels through direct research and grants in recent years, with the focus of this R&D shifting away from cellulosic ethanol, an advanced biofuel that is not fully compatible with current vehicle engines and fuel distribution infrastructure, and toward other biofuels that are compatible with this infrastructure. Ethanol as a Fuel Additive Even before the establishment of the RFS, ethanol was used as an additive in gasoline. It serves as an oxygenate, to prevent air pollution from carbon monoxide and ozone; as an octane booster, to prevent early ignition, or "engine knock"; and as an extender of gasoline stocks. In purer forms, it can also be used as an alternative to gasoline in automobiles specially designed for its use. Approximately 99 percent of blended gasoline consumed in the United States is "E10"—a blend of gasoline with up to 10 percent ethanol. The use of ethanol as an oxygenate is linked to the demise of a petroleum derivative known as methyl tertiary butyl ether, or MTBE. MTBE had been used as an octane booster since the late 1970s and was used in later years to fulfill the oxygenate requirements set by Congress in the 1990 Clean Air Act amendments. According to a report by the Congressional Research Service, MTBE contaminated drinking water, and about half of the states passed legislation to ban or restrict its use. Although MTBE was not restricted by federal law, gasoline refiners sought a substitute because of concerns over potential liability. To replace MTBE, refiners switched to ethanol. (Congressional Research Service, MTBE in Gasoline: Clean Air and Drinking Water Issues, updated Apr. 14, 2006.) State Ethanol Mandates Five states passed and put into effect ethanol mandates similar to the RFS—Hawaii, Minnesota, Missouri, Oregon, and Washington. In Minnesota, Missouri, and Oregon, these mandates required 10 percent of blended gasoline to be ethanol, while Washington required 2 percent ethanol in gasoline and Hawaii required that 85 percent of fuel sold in the state contain 10 percent ethanol. Minnesota was the first to put an ethanol mandate into effect—in May 2003. Hawaii followed with an effective date of April 2006.
The Missouri, Oregon, and Washington mandates went into effect in 2008. Louisiana, Montana, and Pennsylvania also passed laws requiring ethanol blending mandates, but these mandates have not gone into effect because in-state ethanol production volumes have not reached the levels required to trigger them. Tax Credits The federal government has supported the development of a domestic biofuels industry not only through the RFS but also through tax credits. The Energy Tax Act of 1978, among other things, provided tax incentives designed to stimulate the production of ethanol for blending with gasoline. These blending incentives were restructured as part of the Volumetric Ethanol Excise Tax Credit (VEETC) in 2004. In 2009, we found that the VEETC and the RFS may have been duplicative with respect to their effects on ethanol consumption. We and others found that the VEETC was no longer stimulating additional ethanol consumption. The blending incentives in the VEETC expired in December 2011. There are also federal tax incentives to promote the production and use of advanced biofuels. These include the Biodiesel Income Tax Credit, which provides a $1-per-gallon tax credit for producers of certain forms of biodiesel or renewable diesel. Separately, the Second Generation Biofuel Producer Tax Credit provided advanced biofuel producers a tax credit of up to $1.01 per gallon of advanced biofuel produced and used domestically. Available Evidence and Analysis Indicate That the RFS Was Likely Associated with Modest Gasoline Price Increases outside the Midwest and Modest Decreases within the Midwest Evidence from studies, interviews with experts, and our analysis suggest that the nationwide RFS was likely associated with modest gasoline price increases outside of the Midwest and modest decreases within it. These price effects likely varied, in part, with state-by-state differences in the costs to transport and store ethanol. For example, the Midwest was already producing and blending ethanol, so it had lower transportation costs and had already built the necessary storage infrastructure. Other regions began blending ethanol later, as rising volumes of ethanol required under the RFS forced more ethanol into the system and as individual states adopted their own blending mandates. These states incurred new transportation and storage infrastructure costs, which likely resulted in higher gasoline prices compared to those in Midwest states or states that had not yet begun to blend ethanol. Overall, it is likely that as the expanded blending requirements of the RFS caused non-Midwestern states and localities to begin blending ethanol, these states and localities experienced increased gasoline prices of a few cents per gallon compared to what they otherwise would have been. Experts, Stakeholders, and Studies Indicate that the RFS Likely Caused Changes in Retail Gasoline Prices that Varied by Region According to the experts we interviewed as well as the studies we reviewed, the RFS likely caused small changes in retail gasoline prices that varied by region. The experts, stakeholders, and studies identified two main ways in which the RFS may have affected prices. Specifically, the RFS may have (1) increased transportation and storage costs in regions outside the Midwest and (2) caused an initial increase in refining investment costs that over the long term reduced refining costs for gasoline. Transportation and Storage Costs The RFS may have affected retail gasoline prices by increasing transportation costs in certain regions.
Retail gasoline consists of two components—ethanol and blendstock, which is the petroleum-based gasoline that ethanol is blended with to make retail gasoline. Currently, blendstock and ethanol are typically transported in different ways. Blendstock can be shipped via pipeline, which is the most cost-efficient method of transporting fuel. However, ethanol is more corrosive and cannot be shipped in pipelines currently used for blendstock; as a result, it must be transported using costlier methods, such as rail, barge, and tanker truck. Ethanol is produced primarily in the Midwest, where most corn is produced. According to the studies we reviewed, this means that Midwest gasoline retailers, being closer to the supply of ethanol, may have been able to charge consumers lower prices for retail gasoline relative to non-Midwest gasoline retailers because of their lower transportation costs for ethanol. Similarly, higher transportation costs outside of the Midwest may have resulted in higher prices of retail gasoline in those regions. Figure 2 illustrates U.S. ethanol production in 2005, before the RFS became effective. In addition, the RFS may also have affected retail gasoline prices by increasing storage costs in certain regions. Because ethanol is more corrosive than blendstock, it must be stored differently. According to one study we reviewed, ethanol was being blended into gasoline in many locations in the Midwest prior to the establishment of the RFS. As a result, the Midwest already had the infrastructure needed to store ethanol. According to another study, in some places outside of the Midwest ethanol was typically not being blended into gasoline prior to the establishment of the RFS, and therefore costly infrastructure changes, such as installing different seals and gaskets in tanks, were needed so that retailers could store blended gasoline. For example, the California Energy Commission estimated the costs of such infrastructure changes to be approximately $60 million in California. Unlike transportation costs, the costs of infrastructure changes were incurred just once, according to industry stakeholders we interviewed; therefore, the effect of such costs on retail prices would be expected to have diminished over time. Production Costs The cost of producing retail gasoline depends in part on the costs of its two components. The RFS may have affected the costs of blendstock and ethanol in various ways, and according to the experts we interviewed, past GAO work, and the studies we reviewed, these costs may have contributed to changes in gasoline prices. Blendstock. The RFS may have initially increased both refiners' costs to produce blendstock compatible with ethanol blending and the costs of shipping and storing such blendstock; however, these costs may have decreased over time. More specifically, the RFS may have initially increased refiners' costs because refiners had to change their configuration to produce a lower octane blendstock to accommodate ethanol blending. Many experts we interviewed stated that producing blendstock with a lower octane level required costly changes to refinery infrastructure and processes. However, according to these experts and stakeholders, since ethanol is relatively high in octane, blending ethanol into retail gasoline allows refiners to produce blendstock with a lower octane level.
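The octane arithmetic behind this point can be sketched with a simple linear blending approximation, a common first-order simplification (real octane blending is not perfectly linear); the assumed ethanol blending octane of 113 and the 87-octane retail target below are illustrative numbers, not figures from this report.

    # Linear octane blending approximation: a first-order simplification,
    # since real octane blending is not perfectly linear. Numbers are
    # illustrative assumptions.

    ETHANOL_BLEND_OCTANE = 113.0  # assumed blending octane value for ethanol
    TARGET_OCTANE = 87.0          # regular-grade retail gasoline

    def required_blendstock_octane(target=TARGET_OCTANE, ethanol_share=0.10,
                                   ethanol_octane=ETHANOL_BLEND_OCTANE):
        """Solve target = share * ethanol_octane + (1 - share) * blendstock
        for the octane level the refiner's blendstock must reach."""
        return (target - ethanol_share * ethanol_octane) / (1.0 - ethanol_share)

    print(f"E10 blendstock octane: {required_blendstock_octane():.1f}")                 # ~84.1
    print(f"E0 blendstock octane: {required_blendstock_octane(ethanol_share=0.0):.1f}")  # 87.0

Under this approximation, blending 10 percent ethanol lets a refiner target roughly 84-octane blendstock instead of 87, which is the mechanism behind the lower long-run refining costs discussed next.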
As a result, according to many of the experts we interviewed, after the initial investment by refineries to switch to the lower octane blendstock, refiners could produce that blendstock at lower cost. This would have led to higher initial costs but lower long-term costs once the initial infrastructure investments had been recovered. The higher initial cost is consistent with our past work, in which we noted that shipping more types of blendstocks—the result of a proliferation of blendstocks adopted by states and localities to meet Clean Air Act standards—increases the costs of shipping and storing blendstocks at terminals for distribution to retail sellers. As a result, according to one expert familiar with our past work, as ethanol blending spread further and further from the production center in the Midwest states, there were more types of blendstocks in the pipeline and storage terminals, which would have increased costs. This expert said that over the longer run, once ethanol blending had expanded to encompass the majority of gasoline sold in the United States, this effect would have disappeared because virtually all the blendstock flowing through the pipeline and storage system would be compatible with blending ethanol. Ethanol. It is unclear whether the RFS increased or decreased the cost of ethanol. One source we reviewed indicated that the RFS may have increased the cost of ethanol by increasing demand for corn, which would drive up the price of corn. On the other hand, one expert we spoke to stated that the RFS may have decreased the cost of ethanol in the long term by providing incentives for producers to invest in more efficient ethanol production processes, which would lower production costs over time. However, it is unclear what the longer-term effects of ethanol blending on gasoline prices have been. We believe this is because once all locations had made the infrastructure investments and most gasoline blendstock produced was consistent with blending ethanol, there would be two continuing effects: (1) the transportation and blending costs of ethanol, which would tend to push retail prices higher and would depend on the distance traveled and the modes of transport, and (2) the lower cost of producing lower octane blendstock. The former effect might dominate for locations far from the production source of ethanol and for which more costly modes of transport were used, while the lower blendstock costs might dominate for locations close to the production source of ethanol, locations that have low transportation costs, or both. However, the data available to us do not allow us to test this long-term effect. Our Analysis of State Ethanol Mandates Also Found Gasoline Price Decreases in the Midwest and Increases Elsewhere We studied the effects of ethanol blending mandates on retail gasoline prices in the five states that had such mandates prior to and including 2008; these state mandates are similar to, but preceded, the RFS ethanol blending requirements. We found that these state mandates were associated with gasoline price decreases in the two Midwestern states we evaluated and price increases in three non-Midwestern states. Specifically, during the period we studied, when the ethanol mandates in Minnesota and Missouri were in effect, our model estimates that, all else remaining equal, retail gasoline prices were lower by approximately 8 and 5 cents per gallon in these states, respectively, than they would have been without the mandates.
By contrast, when the ethanol mandates in Hawaii, Oregon, and Washington were in effect, our model estimates that, all else remaining equal, retail gasoline prices were higher by approximately 8, 2, and 6 cents per gallon in these states, respectively, than they would have been without the mandates. These results are consistent with what other studies and experts found about the effects of blending ethanol with gasoline. Our model provides an indicator of the types of effects that the RFS likely had on retail gasoline prices as the increasing ethanol blending targets of the RFS began to push ethanol into more gasoline markets. Specifically, we can infer from the model that the RFS was associated with a modest gasoline price decrease in Midwest states. According to one expert familiar with our analysis and with the blendstock pipeline and storage system, expanding the volumes of lower octane blendstocks shipped to the Midwest states would have the effect of reducing refining production costs, because refiners serving the Midwest could do larger runs of lower octane blendstock and therefore benefit from economies of scale in refining runs. In addition, this would also have the effect of reducing pipeline and storage costs for blendstocks, because larger volumes of lower octane blendstock could be shipped northward from the refining center in the Gulf Coast states to the Midwest. Larger volumes of uniform blendstock during pipeline shipping reduce costs compared to smaller shipments because different blendstocks intermix at the point they interface in a pipeline, and these mixed blendstocks either have to be downgraded and sold for less or pulled out entirely and re-refined to meet existing fuel standards. Conversely, we can infer from the model that the RFS was associated with modest gasoline price increases in states farther from the Midwest producers: as increasing ethanol targets caused those states to begin blending ethanol for the first time, more refining capacity had to convert to produce lower octane blendstock and ship it to more locations, initially raising refining, pipeline, and storage costs, as discussed previously in this report. The results of our analysis are also generally consistent with other work that examined the effects of different state ethanol-blending requirements on gasoline prices. For example, some states and localities started blending ethanol before the RFS made it effectively mandatory, when these states and localities banned MTBE, an additive that increased the oxygen content of the fuel. When MTBE was banned, ethanol was typically added in its place. The one peer-reviewed study we identified that estimated the effects of the MTBE ban on gasoline prices found that in locations required to blend ethanol because of state MTBE bans, retail gasoline prices increased by 3 to 6 cents per gallon in non-Midwestern states, with larger price increases during times of high ethanol prices relative to crude oil prices. This study also found that retail gasoline prices in the Midwest may not have changed.
While our own analysis, other studies we reviewed, and experts we spoke to cannot estimate precise price effects of the RFS on retail gasoline, we believe that collectively the evidence points to likely effects that varied by geographic region: as RFS blending requirements rose and more and more non-Midwestern states and localities adopted ethanol blending, it is likely they saw modest increases in retail gasoline prices on the order of several cents per gallon. Conversely, as more and more states and localities blended ethanol and more refiners began producing larger runs of lower octane blendstock, the costs of acquiring this blendstock likely fell, and because Midwestern states had very low transportation costs for ethanol, their gasoline prices likely fell. The RFS Has Likely Had a Limited Effect on Greenhouse Gas Emissions to Date and Is Unlikely to Meet Its Future Greenhouse Gas Emissions Reduction Goals Most of the experts we interviewed generally agreed that to date the RFS has likely had a limited effect, if any, on greenhouse gas emissions. Further, the RFS is unlikely to meet the greenhouse gas emissions reduction goals envisioned for the program through 2022. Regarding the RFS and greenhouse gas emissions to date, experts noted that the effect has been difficult to assess precisely, and we found disagreement among some experts about whether the effect has been positive or negative. However, most experts agreed that the effect—whether an increase or decrease—has likely been limited. Regarding meeting RFS greenhouse gas emission reduction goals through 2022, as we reported previously, although advanced biofuels, such as cellulosic ethanol, achieve greater greenhouse gas reductions than conventional biofuels, such as corn-starch ethanol, the latter are likely to continue to account for most of the biofuel blended into domestic transportation fuels under the RFS because they are economical to produce while most advanced biofuels are not. The Experts We Spoke with Generally Believe the Effect of the RFS on Greenhouse Gas Emissions Has Likely Been Limited to Date Of the 13 experts we interviewed, 10 generally agreed that the RFS has likely had a limited effect, if any, on greenhouse gas emissions to date. However, these experts said that the effect is difficult to assess precisely, and they disagreed on whether the limited effect has been positive or negative. Specifically, the experts commenting on the topic were roughly evenly split between increases and decreases in greenhouse gas emissions, with some saying there were negligible effects. Experts we interviewed said that the effect that the RFS has had on greenhouse gas emissions is difficult to assess precisely because it involves complex factors that are challenging to quantify, including the lifecycle emissions associated with biofuel use. The RFS's reliance on corn-starch ethanol to fill biofuel mandates has limited the ability of the RFS to reduce greenhouse gas emissions. Specifically, as we reported in November 2016, most of the biofuel blended to date has been conventional corn-starch ethanol, which has a smaller potential to achieve greenhouse gas reductions compared with advanced biofuels. Because of this, several experts we interviewed for the November 2016 report raised concerns about the extent to which the RFS has achieved its design of reducing greenhouse gas emissions.
Furthermore, because the RFS has not been responsible for all of the ethanol used in the United States since the program took effect, not all greenhouse gas reductions associated with ethanol use have been the result of the RFS. More specifically, most experts agreed that ethanol use was historically driven, in part, by favorable market conditions and other policies, including state biofuel mandates, ethanol tax credits, and the phaseout of MTBE as an oxygenate for gasoline. Most experts we interviewed said they believed that the RFS had some effect on biofuel production by creating a guaranteed market for biofuels. Although experts' views differed on the amount of ethanol that would have been produced without the RFS, most of them said that ethanol production capacity would likely be lower today if the RFS had not helped to establish markets. For example, four experts and one industry stakeholder representative that we interviewed hypothesized that if the RFS were repealed, refiners would continue to blend ethanol into fuel, although two experts and one stakeholder representative acknowledged that less ethanol would probably be blended without the RFS. In contrast, one expert indicated that the RFS provides a safety net for the ethanol industry but that this safety net may not be needed anymore. In addition, according to EPA officials, the vast majority of the corn-starch ethanol used to date has been produced by so-called grandfathered plants—plants in operation or under construction before a certain date—that have been exempt from RFS emissions reductions requirements. The grandfathered plants have likely limited the ability of the RFS to achieve greenhouse gas emissions reductions, but this effect has likely changed over time. Early on, when a higher percentage of grandfathered ethanol plants used coal as an energy source and had older technologies, EPA estimates indicated that ethanol from such plants produced more greenhouse gas emissions than petroleum-based gasoline. However, most of the experts we interviewed told us that over time grandfathered plants have upgraded technology to remain economically competitive and have converted to natural gas as an energy source, resulting in industry-wide efficiency improvements that reduce greenhouse gas emissions. These experts indicated that such upgraded plants do not likely have significantly different emissions than the newer plants subject to RFS emissions reductions requirements. Little quantitative information is available to compare the difference between greenhouse gas emissions associated with grandfathered plants and those associated with newer plants. Finally, experts we interviewed disagreed on whether ethanol produced today generally complies with the RFS statutory requirement to reduce lifecycle greenhouse gas emissions by 20 percent relative to those of petroleum-based gasoline, which affects the extent to which the RFS has influenced greenhouse gas emissions. Of the 11 experts commenting on the topic, approximately half said that ethanol produced today likely met the 20 percent RFS greenhouse gas reduction requirement. Most of these experts pointed to recent lifecycle analysis studies. Recent studies have found that, relative to petroleum-based gasoline, corn-starch ethanol could reduce lifecycle emissions by 19 to 48 percent.
While there are limitations and uncertainty associated with all lifecycle analyses, most experts we interviewed said that the models used for lifecycle analyses have improved over time and can provide reasonably accurate estimates of certain components of direct lifecycle greenhouse gas emissions, such as emissions associated with the energy used for farming and for producing the biofuel in a plant. Of the roughly half of experts who said that corn-starch ethanol likely does not meet the RFS greenhouse gas reduction requirements, almost all pointed to the potential for indirect emissions associated with biofuel production and use. Indirect emissions are complex to estimate and a source of uncertainty in lifecycle estimates, but including them could offset emissions reductions. These indirect emissions can be produced as the result of broad economic changes associated with increased biofuel use, including the following: Indirect land use change. Indirect land use change occurs when using agricultural land to grow biofuel feedstocks causes the conversion of previously nonagricultural lands in the United States and elsewhere in the world to maintain world agricultural production of food, feed, and fiber. Fuel market effects. Though difficult to quantify, expanded biofuel use may lead to an unintended increase in the global use of transportation fuel and more greenhouse gas emissions, according to most of the experts saying that corn-starch ethanol does not meet greenhouse gas reduction requirements. For example, increasing biofuel use in one part of the world could increase the relative supply of petroleum in other parts of the world, thereby lowering petroleum prices and increasing use of petroleum products there. We Previously Reported That Limited Production of Advanced Biofuels Makes the RFS Unlikely to Meet Its Greenhouse Gas Reduction Goals In November 2016 we reported that, with the exception of biomass-based diesel, production of advanced biofuels was far below the volume needed to meet the statutory targets for these fuels (see fig. 3). For example, we reported that the cellulosic biofuel blended into transportation fuel in 2015 was less than 5 percent of the statutory target of 3 billion gallons. We found in another November 2016 report that the shortfall was the result of high production costs, despite years of federal and private R&D efforts. With regard to future advanced biofuel production, most experts we interviewed for the November 2016 report told us that such production cannot achieve the statutory targets of 21 billion gallons by 2022 because the investments and development required to make these fuels more cost-effective, even in the longer run, were unlikely in the investment climate at the time. Factors affecting this included the magnitude of investment and the expected long time frames required to make advanced biofuels cost competitive with petroleum-based fuels. Because the bulk of greenhouse gas emissions reductions were to come from such advanced biofuels, the expected emissions reductions have also not occurred. Historical Prices of RINs, Concerns regarding Their Effects on Fuel Prices, and EPA's Actions to Mitigate These Concerns EPA uses renewable identification numbers (RINs) to regulate compliance with the RFS. Refiners or importers of transportation fuel in the United States are known as "obligated parties" and must submit RINs to EPA.
The number of RINs that an obligated party must submit to EPA is proportional to the volume of gasoline and diesel fuel that it produces or imports and depends on the volumes of biofuel that must be blended with transportation fuels during the following calendar year, as set by EPA. In accordance with EPA guidelines, a biofuel producer or importer assigns a unique RIN to a gallon of biofuel at the point of production or importation. When biofuels change ownership (e.g., are sold by a producer to a blender), the RINs generally transfer with the fuels. When a gallon of biofuel is blended or supplied for retail sale, the RIN is separated from the fuel and may be used by the obligated party to demonstrate compliance with the RFS or may be traded, sold, or held for use in the following year. Some vertically integrated refiners own blending operations, so they generate RINs that they can use to demonstrate compliance because they also blend their own fuel. Other refiners do not blend their own fuel and must purchase RINs to demonstrate compliance. The latter are called merchant refiners. Since biofuels supply and demand can vary over time and across regions, a market has developed for trading RINs. If a supplier has already met its required share and has supplied surplus biofuels for a particular biofuel category, it can sell the extra RINs to another entity, or it can hold on to the RINs for future use. An obligated party that faces a RIN deficit can purchase RINs to meet its obligation. Historical RIN Prices In our March 2014 report on petroleum refining, we noted that the RFS had increased compliance costs for the domestic petroleum refining industry or individual refiners. We reported that, according to the U.S. Energy Information Administration, corn-based ethanol RIN prices were low—from 1 to 5 cents per gallon from 2006 through much of 2012—because it was generally economical to blend ethanol up to or above the level that the RFS required. However, in 2013, prices for these RINs increased to over $1.40 per gallon in July before declining to about 20 cents per gallon as of mid-November. Several stakeholders told us at the time that this increase in RIN prices was primarily due to RFS requirements exceeding the capability of the transportation fuel infrastructure to distribute biofuels and of the vehicle fleet to use them, a situation referred to as the blend wall. EPA officials told us at the time that high corn prices, which made ethanol more expensive relative to gasoline, also contributed to higher RIN prices during this period. A refiner we spoke with at the time attributed the decline in RIN prices in the second half of 2013 to EPA's statements expressing its desire to address the blend wall. In our report, we noted that while the RFS applies to all refiners in the same way, the effect of the rise in RIN prices may depend on each refiner's situation. Figure 4 shows historical RIN prices for conventional, advanced, and biodiesel RINs. Since our March 2014 report, corn-starch ethanol RIN prices have experienced periods of volatility. One expert stated that this is because ethanol prices have become tied to biodiesel prices since the RFS has required levels above the 10 percent blend wall. EPA officials agreed that once the 10 percent blend wall was reached, ethanol RIN prices have often risen to match biodiesel RIN prices. More specifically, biodiesel RIN prices are strongly affected by expectations about whether the biodiesel tax credit will be allowed to expire, which has often happened.
In fact, EPA has at times explicitly taken the existence of the biodiesel tax credit into account when making rulings related to the RFS. As a result, both biodiesel RIN prices and ethanol RIN prices experience volatility. In general, ethanol RIN prices have closely tracked biodiesel RIN prices for the last 5 years. As we noted in our March 2014 report on petroleum refining, prices for RINs reflect several factors, including the cost of renewable fuels compared with the petroleum fuels they displace and the stringency of annual blending requirements. One expert we spoke with during the course of the audit work for this report stated that uncertainty about the future of the RFS has also affected RIN prices. Effect of RINs on Retail Fuel Prices Three experts and three industry stakeholders we interviewed spoke directly about the effect of RINs on retail fuel prices. All three experts stated that if RINs have any effect on prices it is small, while two of those experts also asserted that it was possible that RINs had no effect on prices at all. These experts argued that in a perfectly competitive fuel market, the blendstock refiners increase the price of blendstock because they know that they will need to pay for the RINs. At the same time, the retail gasoline blenders are able to save costs related to ethanol because of the value they receive for selling the RINs. In practice, according to experts, the market may not be perfectly competitive, so it is possible that RINs add from 1 to 10 cents to the retail price of gasoline in some parts of the country. One industry stakeholder also expressed the opinion that RINs would have little to no effect on retail gasoline prices, citing the same argument. Two industry stakeholders indicated that RINs would increase retail gasoline prices, although they did not specify by how much. These stakeholders argued that RINs represent the cost of producing retail gasoline; because ethanol has historically had a higher cost per mile than gasoline (though not per gallon), the RINs would represent this increased cost and would be reflected in retail gasoline prices. An EPA analysis found that RIN prices did not have a significant impact on retail fuel prices and concluded that any expected impact would be very small. For retail gasoline, EPA made the same argument as experts and stakeholders cited above. Problems Identified with the RIN Market and Steps Taken by EPA to Address These Problems Although oil refineries and importers are the entities that are obligated to demonstrate compliance with the RFS, not all of them produce blended fuels. Thus, these entities cannot earn RINs themselves and need to purchase them on the RIN market. Our past work, as well as EPA analysis, has identified several issues of concern with RINs, including possible fraud in the market and concerns about the effect on small refiners, price volatility, and the point of obligation. Fraudulent RINs. As we reported in our November 2016 report on the RFS, some experts we spoke with at the time identified reducing RIN fraud and price volatility as a federal action that could incrementally encourage investment in advanced biofuels. Specifically, these experts said that a lack of transparency in the RIN trading market has led to an increased risk of fraud and increased volatility of RIN prices. Because RINs are essentially numbers in a computerized account, there have been opportunities for fraud, such as double counting RINs or generating RINs for biofuels that do not exist. 
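Because a RIN is essentially an identifier in a computerized account, one basic integrity check, sketched below, is to flag the same identifier appearing in more than one compliance claim. The record format here is a simplified stand-in (the actual RIN structure and its fields are defined in EPA regulations), and EMTS performs far more extensive validation than this.

    # Minimal sketch of a double-counting check on RIN claims. The record
    # format is a simplified stand-in; the actual RIN structure is defined
    # in EPA regulations, and EMTS performs far more extensive validation.

    from collections import Counter

    # Hypothetical compliance claims: (obligated_party, rin_identifier)
    claims = [
        ("Refiner A", "2013-C1-B042-0001"),
        ("Refiner B", "2013-C1-B042-0001"),  # the same RIN claimed twice
        ("Refiner A", "2013-C2-B007-0003"),
    ]

    def find_double_counted(claims):
        """Return RIN identifiers that appear in more than one claim."""
        counts = Counter(rin for _, rin in claims)
        return {rin: n for rin, n in counts.items() if n > 1}

    for rin, n in find_double_counted(claims).items():
        parties = sorted({p for p, r in claims if r == rin})
        print(f"RIN {rin} claimed {n} times by: {', '.join(parties)}")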
For example, in our March 2014 report on petroleum refining, we reported that EPA had issued several notices of violation alleging that five companies generated invalid RINs without producing qualifying renewable fuels. EPA officials told us that, since that time, EPA has issued additional notices of violation, although many pertain to actions taken prior to March 2014. Since the start of the RFS, EPA has alleged that approximately 382,524,480 RINs are invalid. Furthermore, obligated parties that inadvertently purchase fraudulent RINs lose the money spent to purchase them, must purchase additional RINs to meet their obligations, and face additional costs. This has a disproportionate effect on small refiners, according to our November 2016 report. Whereas large obligated parties—in particular, vertically integrated refiners that typically own blending operations—can generate RINs by blending fuel, small refiners do not blend fuel, must purchase their RINs on the market to meet their obligations, and are therefore more likely to be adversely affected by fraudulent RINs. To address concerns over these issues, EPA established an in-house trading system called the EPA Moderated Transaction System (EMTS). EPA officials believe that this system provides significant capabilities over prior reporting tools used to implement the RFS, allowing enforcement staff to more quickly distinguish potential RFS violations from the entry errors that were common with pre-EMTS RFS reporting. EPA officials also informed us of a voluntary quality assurance program intended to provide obligated parties a means of ensuring that RINs entering commerce are valid. However, EPA has maintained that verifying the authenticity of RINs is the duty of obligated parties. Distribution of compliance costs. In our March 2014 report on petroleum refining, we reported that, according to EPA, refiners experience the same compliance costs regardless of whether they are vertically integrated refiners or merchant refiners that purchase RINs for compliance. However, we also reported that the views of several stakeholders differed from EPA's. In that regard, in a 2011 study, the Department of Energy reported that the degree to which a small refiner can actively blend refinery production with biofuels could contribute greatly to the economic hardship incurred from complying with the RFS. We noted that, while the RFS applies to all refiners in the same way, effects of rising or falling RIN prices may vary depending on each refiner's situation. According to several stakeholders we interviewed at the time, RFS compliance had been most difficult for merchant refiners, because they did not blend their own fuel and had to purchase RINs from others, increasing their costs of compliance. Price volatility. Similarly, according to the experts we interviewed for our November 2016 report on the RFS, price volatility in RIN markets had adversely affected small refiners in particular and led to uncertainty among investors. While most RINs are bought and sold through private contracts registered with the EMTS, as we mentioned previously, RINs are also traded in markets. Some experts that we interviewed for the November 2016 report told us that price volatility may have been due, in part, to nonobligated parties speculating in these markets. Such price fluctuations introduced uncertainty for small refiners about the costs of compliance with the RFS because they had to purchase their RINs on the market. Placement of the point of obligation.
In our November 2016 report on the RFS, we reported that according to some experts, blenders should be the obligated parties instead of importers and refiners. According to some of these experts, when EPA designed the RFS, it placed the obligation for compliance on the relatively small number of refiners and importers rather than on the relatively large number of downstream blenders in order to minimize the number of obligated parties to be regulated and make the program easier to administer. However, these experts told us that obligating refiners and importers has not worked to incentivize investors to expand infrastructure to accommodate higher ethanol blends. One expert we spoke with stated that because blenders are either retailers or sell to retailers, blenders would be better situated to pass RIN savings along to consumers. This in turn might encourage demand for higher ethanol blends and incentivize infrastructure expansion. Some experts told us at the time that EPA should make RIN market trading more open and transparent like other commodity markets, which could reduce the potential for fraudulent RIN activities and reduce RIN price volatility. EPA has taken some actions to address these issues. Specifically, EPA officials we interviewed for this report told us that EPA publishes a variety of aggregated information on its website each month to promote market transparency, including RIN generation and use, available RINs, RIN prices and trade volumes, RIN holdings, and small refinery exemption information. According to these officials, EPA also requires all RIN trades to be entered into EMTS from both the buy and sell sides, and only finalizes a transaction in the system if the buy and sell sides match. EPA officials said that transparency of aggregated RIN data helps the market function more efficiently and minimizes price volatility; however, they acknowledged that many factors contribute to RIN prices and RIN price changes, and it is impossible to attribute such changes to any single factor. Furthermore, according to EPA officials, the memorandum of understanding on RIN market manipulation that EPA has entered into with the Commodity Futures Trading Commission will also help make RIN markets more open and transparent. Finally, EPA officials stated that in response to a recent White House direction, EPA is currently drafting a regulatory proposal to implement market reforms and additional transparency measures to prevent price manipulation in the RIN market. According to EPA officials we interviewed for this report, EPA received several petitions requesting that it consider changing the point of obligation from refiners and fuel importers to fuel blenders. In November 2017, EPA denied the petitioners’ request. In the denial, EPA said that it does not expect a benefit of increased use of biofuels as a result of changing the point of obligation. Furthermore, it is EPA’s position that changing the point of obligation could increase the complexity of the RFS program and would likely disrupt both the RFS program and the fuels market. By law, small refineries were exempted from the RFS through compliance year 2010, and 24 small refineries were granted an exemption for compliance years 2011 and 2012. Beginning with the 2013 compliance year, small refineries have been able to petition EPA annually for an exemption from their RFS obligations. 
EPA states on its website that it may grant the extension of the exemption if it determines that the small refinery has demonstrated disproportionate economic hardship. According to EPA officials, the statute directs EPA to consult with the Department of Energy, and to consider the department's Small Refinery Study and "other economic factors," in evaluating small refinery exemption petitions. EPA conducts its review of small refinery petitions on a case-by-case basis and applies these statutory criteria to its evaluations. According to EPA's website, EPA's decision to grant an exemption has the effect of exempting the gasoline and diesel produced at a refinery from the percentage standards, and the exempted refinery is not subject to the requirements of an obligated party for fuel produced during the compliance year for which the exemption has been granted. For the first few years, EPA data show that EPA granted roughly half of petitions; however, starting in compliance year 2016, the number of exemptions granted increased significantly. In compliance year 2016, EPA received 20 petitions and granted 19, with the final petition still pending. In compliance year 2017, EPA received 37 petitions and granted 29, with 1 declared ineligible or withdrawn and the remaining 7 still pending. The data show that this increase in granted exemptions correlates with an increase in estimated exempted volumes of gasoline and diesel, with the exempted amounts increasing from 3.07 billion gallons in compliance year 2015 (equivalent to an estimated 290 million RINs) to 13.62 billion gallons in compliance year 2017 (equivalent to an estimated 1,460 million RINs). To put these volumes into context, EPA data show that the total renewable volume obligation for compliance year 2015 was 17.53 billion gallons, and for compliance year 2017 it was 18.91 billion gallons. Agency Comments and Our Evaluation We provided a draft of this report to the Departments of Agriculture and Energy, and to the Environmental Protection Agency, for review and comment. USDA, DOE, and EPA provided technical comments, which we incorporated where appropriate. USDA also provided written comments, which are reproduced in appendix IV. In summary, USDA expressed concerns in three areas. First, USDA disagreed with GAO's conclusion that the RFS has had a limited effect, if any, on reducing greenhouse gas emissions. USDA asserts that scientific research shows significant effects on greenhouse gas emissions from blending ethanol into the nation's fuel supply, based on the greenhouse gas benefits of ethanol produced using current technologies relative to gasoline. The objective of our work was to address the effect on greenhouse gas emissions to date that is specifically attributable to the RFS, not whether blending ethanol into the nation's fuel supply affects greenhouse gas emissions. We report that the RFS is not the only reason that ethanol is used in the fuel supply, and that ethanol would have been produced and used in the United States even without the RFS. For example, as we noted in the report, ethanol blended into gasoline provides benefits as an oxygenate, to prevent air pollution from carbon monoxide and ozone; as an octane booster, to prevent early ignition, or "engine knock"; and as an extender of gasoline stocks. As a result, not all greenhouse gas reductions associated with ethanol use have been the result of the RFS.
Drawing conclusions about the broader impact of ethanol on emissions generally was not our objective and is not appropriate for a report examining the impact of the RFS. Second, USDA criticized our methodology, which reported experts’ views on the effect of the RFS on greenhouse gas emissions. USDA stated that this methodology, by design, could not arrive at a consensus and did not synthesize the latest research. We chose our methodology, which relied on expert views supplemented by relevant reported research, because of its ability to yield more extensive, informative, and supportable answers to our objective than a narrower literature review, as suggested by USDA. More specifically, we reviewed much of the literature on this subject, and used the literature, along with referrals from other experts and recommendations from the National Academy of Sciences for prior GAO work, to assist in selecting experts whose expertise included knowledge of the relevant and most recent research on the issue. We selected respected experts representing all perspectives to span the disciplines required to answer our objective and to guard against drawing biased conclusions. Those experts were aware of all research, even that with conclusions contrary to their own. The studies that USDA cites do not represent a wide range of perspectives; they represent the views of a few studies focused specifically on the lifecycle emissions of ethanol. In addition, as we indicate, the perspectives we obtained from industry stakeholders were not used to support our findings on the effects of the RFS on greenhouse gas emissions, as USDA implies. Rather, stakeholders’ views were used to inform some of our examples and corroborate some aspects of the experts’ views—we attribute information to the stakeholders in these instances. The consensus we found among experts representing diverse perspectives was that the RFS has likely had a limited effect on greenhouse gas emissions to date and that the program is unlikely to meet its future greenhouse gas emissions reduction goals. Third, USDA commented that our conclusion that the RFS likely had modest impacts on gasoline prices should be augmented by a discussion of the volatility of gasoline prices. USDA’s comments appear to imply that the changes in prices we found are even smaller or less impactful on consumers because overall gasoline prices are themselves volatile. This is not an accurate interpretation of what we found. For example, increased prices in non-Midwest states represent additional expenditures on gasoline and consequent reductions in other household spending. Because a discussion of historic gasoline price volatility does not have bearing on the effect of the RFS on prices, we are not including it. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Agriculture and Energy; the Administrator of the Environmental Protection Agency; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Lists of Experts GAO Interviewed To determine what is known about the effect that the Renewable Fuel Standard (RFS) has had to date on (1) retail gasoline prices in the United States and (2) greenhouse gas emissions, we conducted semistructured interviews with 18 experts with expertise on these topics. Of the 18 experts we interviewed, 7 discussed the effect that the RFS has had on retail gasoline prices. Thirteen discussed the effect that the RFS has had on greenhouse gas emissions, though one expert declined to be identified. Two of the experts commented on the effect of the RFS on both prices and emissions. The specific areas of expertise varied among the experts we interviewed, so not all of the experts commented on all of our interview topics. The experts we interviewed for each topic are listed below. Experts Interviewed about the Effect of the RFS on Retail Gasoline Prices Experts Interviewed about the Effect of the RFS on Greenhouse Gas Emissions Dr. Antonio Bento, University of Southern California Dr. John M. DeCicco, University of Michigan Dr. Jason Hill, University of Minnesota Dr. Stephen Kaffka, University of California, Davis Dr. Madhu Khanna, University of Illinois Dr. Lee Lynd, Dartmouth College Dr. Steve McGovern, PetroTech Consultants, LLC Dr. John Miranowski, Iowa State University Dr. GianCarlo Moschini, Iowa State University Dr. Richard Plevin, University of California, Berkeley Dr. Wallace E. Tyner, Purdue University Dr. Michael Wang, Argonne National Laboratory One expert we interviewed declined to be identified. Appendix II: List of Industry Stakeholders Whose Representatives GAO Interviewed Appendix III: Technical Discussion of Econometric Model Estimating Effects of Ethanol Mandates on Retail Gasoline Prices This appendix describes the econometric model we developed to estimate the effect of the state ethanol mandates on retail gasoline prices, provides the results, and discusses limitations. Econometric Model In order to develop evidence of the likely effects of the Renewable Fuel Standard (RFS) on the incremental adoption of ethanol blending by states as RFS targets grew, we developed an econometric model to analyze the effect of state ethanol mandates on retail gasoline prices. Specifically, we analyzed how state policies mandating certain levels of ethanol blending in retail gasoline affected retail gasoline prices in those states. We obtained retail gasoline price data from the Oil Price Information Service. The data identified the simple average price across each state for each grade of fuel—regular grade gasoline, midgrade gasoline, premium gasoline, and diesel. There also exist local fuel specifications on top of state policies. Price data are only available at the state level, and we are not able to identify directly the effect of local fuel policies on prices. We therefore included controls that represent the percentage of retail stations in the state that are affected by the local specifications. To reduce distortion from dissimilar regulations and outliers, we did not include prices (1) from the state of California and (2) for products other than regular-grade gasoline. Therefore, the data we used for our analysis comprised prices collected from 49 states and the District of Columbia for the period of 2001 through 2010, for a total of 6,000 observations. Over the period 2001 through 2010, retail gasoline prices were highly correlated across states over time.
Specifically, to illustrate, we ran a simple regression model of retail gasoline prices on year-month (fixed-effect) controls. The results show that over 90 percent of the variation in retail gasoline prices over time across states is explained by these simple year-month controls. This suggests nationwide factors explain much of the variation in retail gasoline prices across states over time. The available data are not sufficiently rich to allow us to reliably disentangle the separate effects on retail gasoline prices of various nationwide factors, such as, perhaps, changes in crude oil prices, demand for gasoline, and the roll-out of the RFS. Hence, below, we examine instead the (incremental) effect on state-level retail gasoline prices of state ethanol mandates that became effective at a time when the RFS was requiring relatively low levels of ethanol blending nationwide. Dependent Variable Our dependent variable in the model was the monthly average after-tax retail price in dollars per gallon of regular-grade gasoline. Explanatory Variables Our model included a variety of explanatory variables, including state ethanol mandates, other state and local ethanol policies and fuel specifications, and the Petroleum Administration for Defense District (PADD)-level gasoline inventory-sales ratios and refinery capacity utilization rates. State ethanol mandates. The variables of interest in the model were indicators for state ethanol mandates; the state ethanol mandate indicator variables take the value of one for any month in which that state has an effective ethanol mandate and take a value of zero otherwise. The mandates ranged in the percentage of ethanol they required to be blended into gasoline, from approximately 10 percent in Minnesota, Missouri, and Oregon to 2 percent in Washington, with Hawaii having a unique requirement that 85 percent of fuel sold in the state must contain 10 percent ethanol. Other state ethanol policies. We used as controls indicators for several other state ethanol policies to shed light on how these policies may have affected retail gasoline prices. Specifically, we controlled for state fleet requirements to use ethanol; direct ethanol incentives that reduce the cost of ethanol per gallon of fuel, such as tax credits or rebates; ethanol production incentives; and ethanol consumption incentives. Production incentives included financial incentives to produce ethanol, such as grants or payments to build or operate an ethanol plant or to grow ethanol feedstock. Consumption incentives included financial incentives to sell or use ethanol, such as grants or tax incentives to upgrade fueling infrastructure to sell ethanol or a tax credit to stations selling ethanol. We also controlled for state methyl tertiary butyl ether (MTBE) bans, as ethanol was the primary substitute that could be used in place of MTBE. Local-level fuel specification requirements. We controlled for local-level fuel specification requirements, such as the gasoline type, Reid vapor pressure (RVP) levels, and oxygenated fuel requirements. Volume of inventory of gasoline relative to the volume of sales of gasoline. We used as a control the ratio of finished motor gasoline stocks to the sales of motor gasoline. This variable indicates when supply is high relative to demand and vice versa. Refinery capacity utilization rate. We controlled for the refinery operable utilization rate, which represents the utilization of crude oil distillation units. This variable represents the balance between supply volume and costs of production.
Both this variable and the inventory-sales ratio have been found to be endogenous in past work. State gas taxes. We control for the level of state gas taxes using data from the Department of Transportation's Federal Highway Administration. Fixed effects. We used a set of indicator variables to account for fixed effects associated with time and individual states. Specifically, we used a set of state fixed effects to account for persistent differences between states, such as transportation costs of fuels to that state. Each model also included year-month fixed effects—one for each month in the data—to control for nationwide events, as well as state-calendar month fixed effects to allow seasonality to vary by state. The Model Our model can be written as follows:

$$y_{smt} = \beta_0 + (STATE_s \times ethanolmandate_{smt})'\beta_1 + (FRAC_{st} \times FUELREGS_{smt})'\beta_2 + X_{smt}'\beta_3 + \alpha_{sm} + \gamma_{mt} + \varepsilon_{smt}$$

where:
$y_{smt}$ is the dependent variable in our model; namely, the average after-tax price per gallon of regular grade gasoline in state $s$ in month $m$ and year $t$.
$STATE_s \times ethanolmandate_{smt}$ is a vector of interaction terms, where $STATE_s$ is a vector of dummies for each state with a mandate—Hawaii, Minnesota, Missouri, Oregon, or Washington—and $ethanolmandate_{smt}$ is an indicator that is equal to 1 for all months in which an ethanol mandate is effective for that state, and zero otherwise.
$FRAC_{st} \times FUELREGS_{smt}$ is a vector of interaction terms, where $FRAC_{st}$ is a measure of the proportion of gas stations in a state likely affected by various fuel regulations in a given year, and $FUELREGS_{smt}$ is a vector of indicator variables equal to one in those months that a state is subject to fuel regulations related to RVP levels, boutique fuels, reformulated gasoline, and oxygenated fuel.
$X_{smt}$ is a vector of remaining control variables, including the state gasoline tax in cents per gallon, the inventory-sales ratio, the refinery utilization rate, and indicator variables for other state ethanol policies, including effective MTBE bans, fleet requirements, direct incentives, production incentives, and consumption incentives.
$\alpha_{sm}$ is a set of state-calendar month fixed effects to account for permanent differences in a state's average gasoline prices across months.
$\gamma_{mt}$ is a set of month-year fixed effects to account for time-varying factors affecting average gasoline prices for all states, such as fluctuations in crude oil prices.
$\varepsilon_{smt}$ is an error term that is clustered by state.

Our model assumes that, after controlling for time-variant factors, the timing of state ethanol mandates going into effect is not correlated with unobserved time-variant factors that affect gasoline prices. When this assumption is satisfied, our model may estimate the effect of state mandates on gasoline prices. Since ethanol mandates went into effect at different times—in 2003 (Minnesota), 2006 (Hawaii), and 2008 (Missouri, Oregon, Washington)—our quasi-experiment introduces variation in ethanol mandates across time and across states. We are able to address many concerns about omitted variable bias by including detailed state-calendar month fixed effects and month-year fixed effects. Results We estimate that, all else remaining equal, when the ethanol mandates in the Midwestern states of Minnesota and Missouri were in effect, retail gasoline prices in those states were lower by approximately 8 and 5 cents, respectively, than they would have been without the mandates.
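As an illustration of how a specification of this form might be estimated, the sketch below uses simulated data with the statsmodels formula interface. The variable names, the simulated data, and the simplified fixed effects (state and year-month rather than the full state-calendar month set used in the actual analysis) are stand-ins, since the underlying dataset is not reproduced here.

    # Minimal sketch of estimating a specification like the model above,
    # using simulated data. Variable names and data are stand-ins, and the
    # fixed effects are simplified relative to the full specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 600
    df = pd.DataFrame({
        "state": rng.choice(["MN", "MO", "HI", "OR", "WA", "TX"], n),
        "year_month": rng.choice([f"{y}-{m:02d}" for y in range(2001, 2011)
                                  for m in range(1, 13)], n),
        "gas_tax": rng.uniform(10.0, 40.0, n),        # cents per gallon
        "inv_sales_ratio": rng.uniform(0.8, 1.2, n),
    })
    # Mandate indicator: 1 in months a state's mandate is effective
    # (assigned at random here purely so the example runs).
    df["mandate"] = rng.integers(0, 2, n)
    df["price"] = 2.0 + 0.003 * df["gas_tax"] + rng.normal(0, 0.1, n)

    # State-by-mandate interactions plus fixed effects; standard errors
    # clustered by state, as in the model described above.
    model = smf.ols(
        "price ~ C(state):mandate + gas_tax + inv_sales_ratio"
        " + C(state) + C(year_month)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
    print(model.params.filter(like="mandate"))

With real data, the coefficients on the state-by-mandate interactions would correspond to the state-specific price effects reported in this appendix.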
Results

We estimate that, all else remaining equal, when the ethanol mandates in the Midwestern states of Minnesota and Missouri were in effect, retail gasoline prices in those states were lower by approximately 8 and 5 cents per gallon, respectively, than they would have been without the mandates. We also estimate that, all else remaining equal, when the ethanol mandates in Hawaii, Oregon, and Washington were in effect, retail gasoline prices in those states were higher by approximately 8, 2, and 6 cents per gallon, respectively, than they would have been without the mandates.

The variables used in the model to control for effects other than ethanol mandates either had the expected directional effect on price or were not significant (using a 5 percent significance level). Our controls for boutique fuel blends and state gasoline taxes were significant and positive, suggesting that states with more stringent fuel specifications and higher gasoline taxes have higher after-tax gasoline prices. The estimated effect for the refinery utilization rate is negative and statistically significant, suggesting that fuel prices decrease as refinery utilization rates rise because higher supply decreases prices. Although we might expect fuel prices to decrease with the inventory-sales ratio because a high ratio indicates that supply is high relative to demand, it is also possible that when inventories are below a critical threshold, prices will rise regardless of how high inventories are relative to sales, as has been seen in prior work, so the positive coefficient in our model has precedent. See Kendix and Walls, "Oil industry consolidation and refined product prices: Evidence from US wholesale gasoline terminals," Energy Policy, vol. 38 (2010), pp. 3498-3507.

Selected coefficient estimates from the model, with standard errors in parentheses, include the following:

Percentage of gasoline stations in the state selling fuel with less than 9 lbs. Reid vapor pressure (RVP): 0.070 (0.11)
Percentage of gasoline stations in the state selling fuel with at least 9 lbs. RVP: (0.040)
Percentage of gasoline stations in the state selling boutique fuel: 0.14*** (0.037)
Percentage of gasoline stations in the state selling reformulated gasoline: (0.45)
Percentage of gasoline stations in the state selling oxygenated fuel: 0.0029 (0.018)

[The source table also reported additional coefficients whose row labels did not survive extraction: -0.0071 (0.011), -0.0027 (0.015), -0.0034 (0.012), 0.0072 (0.015), 0.0028 (0.092), and 1.87*** (0.11), along with standard errors of (0.0085) and (0.00050) whose point estimates were also lost.]

Legend: * = parameter estimate significant at the 10 percent level; ** = significant at the 5 percent level; *** = significant at the 1 percent level.

We tested alternate specifications, such as the following:

Including different subsets of the explanatory control variables in the model.

Treating the inventory-sales ratio and the refinery utilization rate as endogenous.

Using pre-tax prices, by subtracting state gasoline taxes from after-tax prices, rather than including taxes as a control variable.

Our results, including the magnitude and directional impact of the various state ethanol mandates, were not meaningfully affected across these specification tests.
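As an illustration of how such specification tests could be automated, the sketch below re-estimates the model under alternative specifications and compares the mandate coefficients. As with the earlier sketch, the file and column names are assumptions, and the endogeneity treatment mentioned above would require instrumental-variable methods not shown here.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("state_month_gasoline_panel.csv", parse_dates=["month"])
    df["ym"] = df["month"].dt.to_period("M").astype(str)
    df["cal_month"] = df["month"].dt.month
    # Pre-tax price: the gas tax control is in cents per gallon, prices in dollars.
    df["price_pretax"] = df["price"] - df["gas_tax"] / 100.0

    MANDATES = "mandate_HI + mandate_MN + mandate_MO + mandate_OR + mandate_WA"
    FE = "C(state):C(cal_month) + C(ym)"

    specs = {
        "baseline": f"price ~ {MANDATES} + gas_tax + inv_sales_ratio"
                    f" + refinery_util + {FE}",
        "fewer controls": f"price ~ {MANDATES} + gas_tax + {FE}",
        "pre-tax price": f"price_pretax ~ {MANDATES} + inv_sales_ratio"
                         f" + refinery_util + {FE}",
    }

    # If the mandate coefficients are stable across specifications, the
    # estimates are less likely to be artifacts of any one set of controls.
    for name, formula in specs.items():
        fit = smf.ols(formula, data=df).fit(
            cov_type="cluster", cov_kwds={"groups": df["state"]})
        print(name, fit.params.filter(like="mandate").round(3).to_dict())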
Limitations

Our analysis had a number of limitations, as described below.

We did not directly estimate the effect of the RFS on prices. The policy was nationwide, and there are no reliable state-level data with which to measure state-level ethanol gasoline blend rates as the RFS was implemented over time. However, there is no reason to believe that other states that incrementally adopted the blending of ethanol as a result of increasing RFS targets would have experienced different effects.

There may be some endogeneity in the timing of the adoption of the ethanol mandates. These policies are likely easier to pass through state legislatures when corn or ethanol prices are lower than oil or gasoline prices or when gasoline prices are high. However, because the effective dates are usually several years after the laws are enacted, the actual effective timing should be exogenous.

We believe the state-level ethanol regulation data are comprehensive, but some regulations may not appear in the data. In our analysis, we include controls for ethanol mandates as well as several other types of ethanol incentives and fuel specification requirements. These variables control for the effects of related ethanol policies as well as variations in the cost of producing retail gasoline. We are confident that all state ethanol mandates were included in the model. However, our model may not perfectly control for all other regulations that could affect retail gasoline prices.

Some control variables were not available at the state or monthly level. For example, some controls, such as the refinery capacity utilization rate, were available only at the regional level, so we assigned each state the value for its region.

As in any model, there is the possibility of misspecification or bias. Inappropriate assumptions about the functional form of the model, failure to deal with endogenous variables, or exclusion of relevant variables could cause our estimated effects to deviate from the true effects. Some amount of this bias is present in almost all regression results, although the amount may not be large.

Appendix IV: Comments from the U.S. Department of Agriculture

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Karla Springer (Assistant Director), Stuart Ryba (Analyst in Charge), Luqman Abdullah, Benjamin Adrian, Jaci Evans, Ellen Fried, William Gerard, Cindy Gilbert, Anne Hobson, Jordan Kudrna, Joe Maher, Caroline Prado, Oliver Richard, Rachel Rhodes, Dan Royer, Barbara Timmerman, and William D. Walls made key contributions to this report.
Why GAO Did This Study Congress established the RFS in 2005 and expanded it 2 years later. The RFS generally mandates that transportation fuels—typically gasoline and diesel—sold in the United States contain increasing amounts of biofuels. In addition, the RFS is designed to reduce greenhouse gas emissions by replacing petroleum-based fuels with biofuels expected to have lower associated greenhouse gas emissions. The most common biofuel currently produced in the United States is corn-starch ethanol, distilled from the sugars in corn. EPA uses RINs associated with biofuels blended with petroleum-based fuels to regulate compliance with the program. In 2014, GAO found that refiners' costs for complying with the RFS had increased, and in 2016, GAO found that greenhouse gas emissions are unlikely to be reduced to the extent anticipated because production of advanced biofuels—which reduce greenhouse gas emissions more than corn-starch ethanol—has not kept pace with the yearly increases or the target of 21 billion gallons by 2022 called for by the statute. GAO was asked to review additional issues related to the effects of the RFS. This report examines what is known about (1) the effect the RFS has had to date on retail gasoline prices in the United States and (2) the RFS's effect on greenhouse gas emissions and whether the RFS will meet its goals for reducing those emissions. The report also provides information about RINs. To address the likely effects of the RFS on gasoline prices, GAO reviewed studies and interviewed experts and industry stakeholders, and conducted a statistical analysis of state ethanol mandates that were similar to the mandates of the RFS. GAO selected the experts based on their published work and recognition in the professional community. GAO selected stakeholders representing a range of perspectives, including stakeholders from the renewable fuels, petroleum, and agricultural industries, as well as from environmental groups. Because the RFS was implemented on a nationwide basis at the same time that other factors, such as the global price of crude oil and domestic demand for retail gasoline, were affecting retail gasoline prices across the nation, it is not possible to directly isolate and measure the effect the RFS had on gasoline prices nationwide given data available to GAO. Instead GAO developed and extensively tested an econometric model that estimated the effects on retail gasoline prices of state ethanol mandates. These state mandates are similar to the RFS but were put in place voluntarily by states before the RFS led to widespread ethanol blending in every state. This model estimated how ethanol mandates affected gasoline prices in these five states. These estimates suggest the RFS likely had effects in states that did not have state-wide mandates. These states incrementally blended ethanol because of the increasing volumes of ethanol required to be blended nationally by the RFS. Regarding the RFS's effect on greenhouse gas emissions, GAO interviewed 13 experts in government and academia. GAO selected these experts based on their published work, prior GAO work, and recommendations from other experts. During the course of the work, GAO gathered information on the topic of RINs through interviews, a review of relevant literature, and prior GAO work. GAO makes no recommendations in this report. 
In commenting on a draft of this report, USDA disagreed with GAO's finding that the RFS has had a limited effect on greenhouse gas emissions, citing research on the effects of ethanol on reducing emissions generally. GAO reported on the specific effects of the RFS on emissions. USDA also criticized GAO's methodology of relying on experts' views. GAO employed that method to reach consensus among those with a range of perspectives. DOE and EPA did not comment on the draft report.

What GAO Found

Effect on prices. Evidence from studies, interviews with experts, and GAO's analysis suggests that the nationwide Renewable Fuel Standard (RFS) was likely associated with modest gasoline price increases outside of the Midwest and that these price increases may have diminished over time. Variations in these gasoline price effects likely depended, in part, on state-by-state variation in the costs to transport and store ethanol. For example, the Midwest was already producing and blending ethanol when the RFS came into effect, so that region had lower transportation costs and had already invested in necessary storage infrastructure. Other regions began blending ethanol later to meet the RFS's requirements, thereby incurring new transportation and storage infrastructure costs that resulted in gasoline prices that were several cents per gallon higher than they otherwise would have been. In addition, experts told GAO that the RFS caused an initial increase in refining investment costs that, over the long term, reduced refining costs for gasoline. Specifically, once all locations had made the infrastructure investments and most gasoline blendstock produced was consistent with blending ethanol, two continuing effects would remain: (1) the transportation and blending costs of ethanol, which would tend to push retail prices higher and would depend on the distance traveled and the modes of transport, and (2) the lower cost of producing lower-octane blendstock. The former effect might dominate for locations far from the production source of ethanol and for which more costly modes of transport were used, while the lower blendstock costs might dominate for locations close to the production source of ethanol and/or those that have low transportation costs.

GAO's analysis of the effect that state ethanol mandates had on gasoline prices also showed gasoline price effects that differed in the Midwest and elsewhere. Specifically, during the period GAO studied, when the ethanol mandates in Minnesota and Missouri were in effect, all else remaining equal, retail gasoline prices were lower by about 8 and 5 cents per gallon in these states, respectively, than they would have been without the mandates. In contrast, when the ethanol mandates in Hawaii, Oregon, and Washington were in effect, GAO's model showed that retail gasoline prices were higher by about 8, 2, and 6 cents per gallon, respectively, than they would have been without the ethanol mandates. These results suggest that the RFS likely had gasoline price effects in other states that did not have state-wide ethanol mandates but that incrementally began blending ethanol as a result of increasing RFS requirements, which by around 2010 had led to almost all gasoline sold in the United States being blended with 10 percent ethanol.

Effect on greenhouse gas emissions. Most of the experts GAO interviewed generally agreed that, to date, the RFS has likely had a limited effect, if any, on greenhouse gas emissions.
According to the experts and GAO's prior work, the effect has likely been limited for reasons including (1) the reliance of the RFS to date on conventional corn-starch ethanol, which has a smaller potential to reduce greenhouse gas emissions than advanced biofuels, and (2) the fact that most corn-starch ethanol has been produced in plants exempt from emissions reduction requirements, likely limiting reductions early on when plants were less efficient than they are today. Further, the RFS is unlikely to meet the greenhouse gas emissions reduction goals envisioned for the program through 2022. Specifically, GAO reported in November 2016 that advanced biofuels, which achieve greater greenhouse gas reductions than conventional corn-starch ethanol, have been uneconomical to produce at the volumes required by the RFS statute, so the Environmental Protection Agency (EPA) has waived most of these requirements (see figure).

Renewable identification numbers. EPA uses renewable identification numbers (RINs) to regulate industry compliance with RFS requirements for blending biofuels into the nation's transportation fuel supply. In GAO's March 2014 report on petroleum refining, GAO noted that the RFS had increased compliance costs for the domestic petroleum refining industry or individual refiners. GAO reported that corn-based ethanol RIN prices had been low—from 1 to 5 cents per gallon from 2006 through much of 2012—but in 2013, RIN prices increased to over $1.40 per gallon in July before declining to about 20 cents per gallon as of mid-November 2013. Since the March 2014 report, corn-ethanol RIN prices have experienced more periods of volatility. Most experts and stakeholders GAO interviewed recently stated that RINs had either a small effect on prices or no effect on prices, though a few disagreed. Finally, GAO's past work, as well as EPA analysis, has identified several issues of concern with RINs, including possible fraud in the market and concerns about the effect on small refiners, price volatility, and the point of obligation.
Background GSA maintains custody and control of real property for many civilian federal agencies and has a large portfolio of federally owned and leased properties that GSA rents to its federal agency customers. It is responsible for approximately 1,600 federally owned buildings, and the agency generally provides operations and maintenance services for building systems—such as heating, cooling, and lighting systems—used in building operations. According to GSA officials, their federally owned smart buildings are managed by a GSA building manager who oversees a private operations and maintenance services contractor. According to GSA officials, the agency began implementing what would become its smart buildings program around 2005 in response to numerous federal policies aimed at improving federal building energy and environmental management. These officials told us that the smart buildings program includes two key technologies: advanced utility meters and a computer software program known as “GSAlink.” According to GSA officials, outfitting buildings with these technologies allows for more precise monitoring of energy use and equipment operations in these buildings, and was initially based on the use of advanced utility meters to meet federal mandates. Later, this concept was expanded to include use of analytics, through GSAlink, aimed at reducing energy consumption and increasing the efficiency of operations and maintenance activities. According to GSA officials, GSA’s smart buildings use these technologies to connect and monitor multiple pieces of building equipment, such as heating and air conditioning system components. Further, according to these officials, the program is intended to achieve efficiencies in energy use and in operations and maintenance activities while also providing a comfortable workplace potentially conducive to improved tenant productivity. As GSAlink and advanced meters are Internet-connected, GSA officials told us that they implemented protections that are intended to help mitigate potential cyberattacks, including using firewalls. Advanced Utility Meters: In response to energy reduction and advanced metering requirements established in the Energy Policy Act of 2005—as well as subsequent amendments and an Executive Order—GSA began installing advanced meters in its federally owned buildings starting around 2005. Internet-connected advanced utility meters measure utility use in real-time, which GSA officials told us allows GSA’s building managers to identify opportunities to reduce energy use or anomalies that contribute to energy waste. For example, GSA officials said that advanced utility meters can be used to monitor energy consumption patterns and detect lights or other building systems being used after normal business hours. According to a senior GSA official, GSA currently has 675 advanced meters installed in the agency’s approximately 1,600 federally owned buildings. GSAlink: GSA officials told us that GSAlink is a computer software program that collects and analyzes data from advanced meters— including gas, electric, and water meters—and from a facility’s “building automation system” and uses this information to alert building staff to potential problems. 
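The report does not describe GSAlink's internal logic, but the kind of rule-based analytics described here can be pictured with a minimal, hypothetical sketch that flags electricity use outside normal business hours from advanced meter data. The file name, column names, hours, and threshold are illustrative assumptions, not GSA's actual implementation.

    import pandas as pd

    # Hypothetical 15-minute interval readings from an advanced electric meter.
    meter = pd.read_csv("advanced_meter_readings.csv", parse_dates=["timestamp"])

    BUSINESS_HOURS = range(7, 19)   # assume the building is occupied 7 a.m.-7 p.m.
    AFTER_HOURS_KW_LIMIT = 50.0     # illustrative baseline load threshold, in kW

    # Select readings taken outside business hours or on weekends.
    after_hours = meter[
        (~meter["timestamp"].dt.hour.isin(BUSINESS_HOURS))
        | (meter["timestamp"].dt.dayofweek >= 5)
    ]

    # Flag intervals where after-hours demand exceeds the expected baseline,
    # which could indicate lights or cooling equipment left running.
    faults = after_hours[after_hours["demand_kw"] > AFTER_HOURS_KW_LIMIT]

    # Summarize by day so building staff can investigate, much as GSAlink
    # creates a record for each potential issue it detects.
    print(faults.groupby(faults["timestamp"].dt.date)["demand_kw"].max())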
Further, GSA officials said that GSAlink allows them to identify building problems that occur over time and that may not be readily observable through the building automation system, which generally presents information to building personnel on how a building system is operating in real time, not over a longer time frame. For example, GSA officials told us that GSAlink can collect data on the temperature and pressure of chilled water that is being circulated through a building's cooling system and identify equipment that is operating outside of normal parameters or outside normal business hours, when a building automation system may not be actively monitored. If GSAlink detects a potential issue, GSA officials told us the software creates a record so that building maintenance staff can investigate and remedy that issue. GSA building managers as well as GSA staff at the regional and national levels told us they can log in to GSAlink to check on the status of building system issues.

According to GSA officials, the contract for GSAlink was awarded in 2012, and GSAlink is currently in use in 81 buildings, with at least one GSAlink-equipped building in each of GSA's 11 regions. A senior GSA official told us that 80 of these buildings are also equipped with advanced meters. Further, in September 2017, this official told us that GSA contracted to equip 4 additional buildings with GSAlink. According to GSA officials, GSA generally plans to limit installation of GSAlink in additional buildings until more is learned about using the technology in the buildings in which it is currently installed. Figure 1 illustrates an example of a GSA smart building that includes advanced meters, GSAlink, and the building systems monitored by these technologies.

Limited Quantified Information Exists on the Costs and Benefits of Key Smart Buildings Program Technologies

The Smart Buildings Program's Installation Costs Are Affected by Building Characteristics and Can Be Difficult to Quantify

According to GSA officials, the approximate cost of equipping a building with smart building technologies ranged between about $48,000 and $155,000. This includes costs for installing advanced utility meters (approximately $25,000 to $55,000) and GSAlink (approximately $23,000 to $100,000). The cost of installing GSAlink depends on the condition of the building automation system to which GSAlink is connected as well as the number of individual building components (e.g., chilled water pumps, cooling tower fans, thermostats) to be monitored by GSAlink. GSA officials anticipate that advances in system architecture and reduced software licensing costs will lower the cost of future installations. For example, a senior GSA official told us in October 2017 that the cost to install GSAlink in four additional buildings—the most recent buildings in which GSAlink was installed—ranged between $23,000 and $25,000. In addition, GSA is undertaking a broader effort to upgrade building automation systems in its buildings to enable these systems and connected applications, such as GSAlink, to operate on GSA's protected information technology network. According to GSA officials, GSA can only install GSAlink in buildings whose building automation system operates on GSA's protected network. To date, GSA has upgraded building automation systems to operate on the agency's protected network in approximately 400 buildings.
GSA officials told us that the cost of these upgrades has varied by building and depends on several factors, including the size of the building, the complexity or condition of its building automation system, and its age. According to GSA officials, upgrading building automation system components to enable them to operate on the protected network has cost approximately $90,000 per building, on average. However, in some cases, these costs can be much higher; integrating older systems in larger buildings has cost up to $3 million, according to GSA officials. Further, according to GSA officials, accurately calculating smart building implementation costs can be difficult because GSA typically installs key technologies—that is, advanced meters and GSAlink—and makes upgrades necessary to install GSAlink in selected buildings incrementally, sometimes as part of other capital improvement projects. For example, the American Recovery and Reinvestment Act of 2009 and annual appropriations have provided funding to GSA for energy and conservation measures, including the purchase and installation of advanced meters. GSA Has Taken Steps toward Assessing Benefits of the Smart Buildings Program, but Efforts to Quantify Benefits Have Been Limited GSA officials we interviewed at the central office, regional, and individual building levels identified perceived operational benefits from implementing the smart buildings program, including that it (1) enables them to identify problems with building equipment or system operations more quickly and more thoroughly and (2) allows for their greater oversight of operations and maintenance services contractors relative to other GSA buildings. For example, according to GSA regional staff we spoke to, both advanced meters and GSAlink could detect if the cooling system was operating when tenants were not occupying the building, thereby allowing the building managers to adjust operations to avoid unneeded energy use and wear on the cooling system equipment. Regarding contractor oversight, GSA building managers stated that GSAlink allows the agency to better monitor operations and maintenance contractors’ performance, potentially yielding a better-run building with lower operations and maintenance costs. For example, GSA officials described how the analytic capability of GSAlink might allow building managers to precisely identify and address a problem with a building before that problem is noticed by tenants. This may result in, for example, a reduction in the number of maintenance service requests from tenants and contribute to lower building operating costs. In addition, GSA officials told us that GSAlink allows GSA building managers to confirm the information operations and maintenance services contractors present to them on the status of issues identified by GSAlink. Further, according to these officials, GSAlink allows building managers to monitor contractor compliance with GSA’s requirement that contractors address building issues identified by GSAlink within 30 days, thereby giving GSA officials closer oversight of contractor performance. GSA has taken some steps in the past to quantify the benefits associated with the smart buildings program. While those efforts have identified benefits, they have had some limitations. For example, in 2009—after having begun installing advanced meters but before installing GSAlink— GSA attempted to forecast benefits of the smart buildings program by commissioning a business case analysis. 
The business case concluded that GSA’s energy and operating costs could be reduced by a smart buildings program and that such a program would pay for itself in 1.7 years based on combined energy and operational savings. However, this business case’s estimates of the program’s benefits have limited usefulness for evaluating the current program because this study took place before the program was fully implemented and did not account for constraints affecting building operations. For example, a senior GSA official told us that GSA’s operations and maintenance service contracts are generally for multiple years at a fixed price, calling into question whether operational cost savings can be realized to achieve payback within the time frame estimated by the study. In addition, GSA’s service contractor developed an application within GSAlink that automatically estimates the costs that would be avoided by addressing each type of fault that GSAlink identifies. According to GSA officials, these estimates are imprecise and do not reflect actual avoided costs, which thereby precludes their use in quantifying program benefits. However, according to these officials, these estimates can be used to compare the relative benefits expected to be achieved by addressing identified faults and to prioritize maintenance and repair actions. GSA officials told us that they took steps in June 2017 to improve the accuracy of avoided cost estimates produced by this application, for example, by enabling adjustments to account for differences in weather conditions and building size, and plan to continue their efforts to adjust and refine this tool. In a separate study in October 2016, GSA—in collaboration with researchers at Carnegie Mellon University—analyzed the energy use changes associated with both capital upgrades and operational initiatives, including the use of smart building technologies. Capital upgrades include actions such as installing new energy-efficient building systems and equipment, whereas operational initiatives include, among other things, changes to building operations based on the analysis of advanced meter and GSAlink data. While the researchers concluded that the use of advanced meter and GSAlink data led to reductions in energy use, the researchers found that GSA’s utility consumption records were incomplete and that GSA records of capital upgrades often do not include key details, such as project start or completion dates, to indicate when GSA would have received the benefit derived from the capital project. This lack of complete data adds to the difficulty of estimating the reduced energy consumption attributable to specific factors, including use of advanced meters and GSAlink. GSA Does Not Have Documented, Clearly Defined Performance Goals or Measures to Help It Manage the Smart Buildings Program We have previously found that results-oriented organizations set performance goals to clearly define desired program outcomes and develop performance measures that are clearly linked to the performance goals. Program goals communicate what results the agency seeks and allow agencies to assess or demonstrate the degree to which those desired results are achieved. Performance measures also show the progress the agency is making toward achieving program goals. We have previously reported that performance measurement gives managers crucial information to identify gaps in program performance and plan any needed improvements. 
GSA has not documented the smart buildings program’s goals, contrary to leading practices we identified in our prior work, which call for program goals to clearly define desired program outcomes. GSA officials verbally described to us broad goals for the smart buildings program: (1) reducing energy consumption, (2) generating operations and maintenance cost savings, and (3) creating a comfortable work environment conducive to improved tenant productivity. However, GSA has not documented these goals—for example, in the agency’s performance plan or in other program documents. GSA officials could not provide a reason for why the agency has not documented the smart buildings program’s goals. Further, because GSA has not clearly defined its verbally expressed goals, it cannot demonstrate progress in achieving them. This lack of clearly defined goals is contrary to federal internal control standards, which state that agency management should define objectives in measurable terms so that performance toward those objectives can be assessed. GSA could potentially measure progress toward its stated smart buildings program goals of reducing energy consumption and generating operations and maintenance cost savings, if data were available to do so, as these goals seek to identify changes in quantifiable outcomes, specifically energy use and cost savings. However, GSA officials said that the agency cannot measure progress toward the stated goal of improving tenant productivity and comfort because of the subjective nature of individual tenant preferences, such as for office temperatures. This subjectivity is consistent with statements from the industry stakeholders we spoke with, who also said that identifying the existence of a causal relationship between a building’s environment and the productivity of its inhabitants is challenging. For example, an industry stakeholder we spoke to told us that different building occupants have different temperature or ventilation preferences and may accordingly be the most productive at different ambient temperatures, making it challenging to determine a building’s optimal temperature. Without documented, clearly defined goals, it will be challenging for GSA to determine what type of evaluative information it will need to monitor the progress of the smart buildings program. In addition, contrary to the leading practices we have identified in our previous work, GSA has not developed performance measures for the smart buildings program. According to these leading practices, performance measures allow for an assessment of progress toward achieving goals by including concrete, objective, and observable ways to measure the program’s performance and compare this with the program’s expected results. Further, federal internal control standards call for federal program managers to use quality information to achieve that program’s objectives and make informed decisions. However, GSA lacks quality information that can be used to measure program performance. As discussed in the previous section, GSA’s efforts to quantify the smart buildings program’s benefits, including energy reductions and cost savings, have been limited because GSA has had difficulty in compiling data that would allow it to do so. 
For example, GSAlink’s calculation of avoided costs estimated to be achieved by addressing identified faults is useful for prioritizing maintenance actions but not for measuring program performance because, according to GSA officials, the estimates lack precision and relation to actual costs. In addition, GSA’s October 2016 study on energy use reductions attributable to the program faced problems owing to incomplete records on utility consumption and capital upgrades. While we recognize that determining what data can be collected in a cost-effective manner and can be used to measure the performance of the smart buildings program may be difficult, without such data and measures, GSA lacks the ability to determine the program’s progress and make informed decisions about its current and future operations. GSA Faces Some Challenges in Implementing Smart Building Technologies and Is Taking Steps to Mitigate Them GSA Is Taking Actions That May Mitigate Challenges Related to Cybersecurity GSA faces cybersecurity challenges to its buildings, but is taking steps intended to mitigate these challenges. According to GSA officials, advanced meters and GSAlink operate in conjunction with Internet- connected building automation systems on the protected GSA information technology network. GSA regional staff and industry stakeholders we interviewed stated that cybersecurity presents challenges to those operating smart building technologies, including GSA. Specifically, because these building automation systems are connected to the Internet, they provide a potential pathway for cyberattacks on GSA’s network. According to our prior work, this connectivity could compromise security, hamper GSA’s ability to carry out its mission, or cause physical harm to GSA’s facilities or their occupants. GSA has taken several actions that are intended to help mitigate cybersecurity challenges to its buildings, including those that affect the smart buildings program: GSA has instituted policies and procedures addressing cybersecurity threats and known vulnerabilities in its building systems. In December 2015, GSA published an information technology security policy, defining the roles and responsibilities of GSA staff and establishing controls to ensure compliance with federal regulations, laws, and GSA directives. For example, this policy defines the role of the Federal Government Authorizing Official whose responsibilities include ensuring that monthly operating system scans, database scans, and web application scans are performed and that all vulnerabilities identified are resolved. According to a GSA senior official, under GSA’s Building Monitoring and Controls Program, which provides the infrastructure support needed to connect a building to GSA’s network, GSA is taking steps to mitigate the effects of potential external cyberattacks by moving building automation systems of GSA-controlled buildings away from public networks to GSA’s secured network. GSA officials told us that there are currently approximately 400 federally owned buildings on GSA’s secured network, which includes the 81 buildings equipped with GSAlink. According to GSA officials, a building automation system must be on GSA’s secured network before GSAlink can be installed. 
According to GSA officials, GSA also performs regular assessments to validate that GSAlink system controls comply with relevant statutes, such as the Federal Information Security Management Act of 2002, as well as with National Institute of Standards and Technology security standards and GSA policies and procedures. In December 2014, we reported on GSA's efforts to address cyber risks in federal buildings in compliance with relevant statutes and guidance, finding that GSA had not conducted security control assessments for all of its systems in about 1,500 federally owned facilities. We recommended that GSA assess its building control systems in a manner fully consistent with federal law and related implementation guidelines. GSA has since implemented this recommendation.

According to GSA documentation and officials, GSA conducts regular vulnerability scanning of the equipment and systems involved in the smart buildings program. For example, according to GSA regional staff, a recent vulnerability in the GSA system that manages maintenance requests was identified by GSA central office and was remedied through a software upgrade.

GSA Is Taking Actions That May Mitigate Challenges with Stakeholder Support

GSA faces smart building technology implementation challenges related to the limited technological proficiency of, or lack of buy-in from, some GSA building managers and operations and maintenance services contractors, but the agency is taking steps intended to engage these stakeholders and ensure they are learning to use the smart buildings program's technologies. GSA regional staff acknowledge that there can be inconsistencies among building managers and operations and maintenance services contractors in terms of their familiarity and comfort with using computers and computer-based analytical tools. According to GSA officials, GSAlink proficiency and adoption vary by building, and as such, some buildings may obtain greater benefits from the system than others. A lack of proficiency among building managers in smart building technologies not only affects GSA but is also an industry-wide concern, according to industry stakeholders we interviewed. Industry stakeholders we interviewed stated that operations and maintenance services contractors are generally not well trained on smart building operations or the differences between managing a smart building and managing a traditional building.

GSA regional staff and GSAlink's support contractor we interviewed also identified operations and maintenance services contractors' limited buy-in to the smart buildings technologies as a challenge affecting implementation of the program. According to GSA officials, this limited buy-in could potentially lead to loss of support for the program among operations and maintenance services contractors, posing a risk to the program's successful implementation. GSA officials, regional staff, and GSAlink's support contractor acknowledge it is important to demonstrate how GSAlink, for example, can make the operations and maintenance services contractors' jobs easier. According to GSA officials, if GSAlink can help a building's systems operate more efficiently, that improvement should result in less unscheduled maintenance and fewer work orders for the contractor. Additionally, industry stakeholders we interviewed suggested that operations and maintenance services contractors do not currently have a stake in whether a smart buildings program is successful.
According to those we interviewed, GSA has taken several actions that are intended to help address these challenges: GSA officials and regional staff told us that GSA provided initial training to building managers and operations and maintenance services contractors when GSAlink was first installed. According to GSA officials, refresher training is available online through recorded training sessions. Additionally, GSA regional staff told us that knowledgeable GSA staff provide training to newly hired staff as needed. GSAlink’s support contractor staff told us that they lead regularly scheduled teleconferences with each smart building’s staff either monthly or quarterly depending on each building’s needs. At these meetings, the support contractor remotely accesses GSAlink data for a particular building to discuss the status of GSAlink notifications of building system issues and recommend adjustments to building equipment or systems to ensure optimal operations. GSA regional staff we spoke with stated that this meeting serves as a form of training and helps educate participants on how to use GSAlink. To ensure that building personnel are using smart buildings technologies, GSA officials told us that GSA’s central office monitors a key performance indicator requiring GSA building managers and operations and maintenance services contractors to address all GSAlink notifications of building system issues within 30 days. According to GSA officials, GSA central office and regional staff also have the ability to remotely monitor advanced meter and GSAlink data for individual buildings. According to a senior GSA official, new operations and maintenance services contracts will expressly require contractors to use smart building technologies as part of their efforts to optimally operate GSA buildings. Conclusions According to GSA officials, the agency’s smart buildings program is intended to allow its staff and contractors to more efficiently manage energy consumption and operations and maintenance actions aimed at promoting cost-efficient operation of building systems and creating a comfortable work environment for tenants in GSA’s buildings. Given GSA’s recent decision to expand the use of GSAlink technology, it is important that the agency be able to determine whether use of the technology achieves these intended results. However, without documented, clearly defined goals, performance measures linked to those goals, and quality information to measure progress, GSA is limited in its ability to make informed decisions about the smart buildings program’s current or future operations as it develops plans to enlarge the program to serve a greater proportion of its buildings portfolio. As a result, GSA risks continuing to expend resources on a program that the agency cannot demonstrate is meeting its intended objectives. Recommendations for Executive Action We are making the following two recommendations to GSA: The Administrator of the General Services Administration should establish clearly defined goals and related performance measures for the smart buildings program. (Recommendation 1) The Administrator of the General Services Administration should identify and develop data that can be used to measure progress in achieving the smart buildings program’s goals. (Recommendation 2) Agency Comments We provided a draft of this report to GSA for comment. In its written comments, reproduced in appendix II, GSA stated that it concurred with our recommendations and is developing a plan to address them. 
In addition, GSA clarified that the agency has been upgrading building automation systems across its buildings inventory for a variety of reasons, to include providing needed safeguards to comply with GSA’s information technology security protocols. GSA also provided information on the methodology used and results reported in its October 2016 study on energy savings realized from combined investments in advanced metering and GSAlink. We are sending copies of this report to the appropriate congressional committees and the Administrator of the General Services Administration. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Organizations Contacted Federal Government GSA Smart Buildings – Washington, DC GSA Smart Buildings – San Francisco, California Industry Stakeholders Appendix II: Comments from the General Services Administration Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Lori Rectanus, (202) 512-2834 or rectanusl@gao.gov. Staff Acknowledgments In addition to the contact named above, Michael Armes (Assistant Director); Daniel Paepke (Analyst in Charge); Edward Alexander, Jr.; Jenny Chanley; John de Ferrari; Peter Haderlein; Geoffrey Hamilton; Thomas Johnson; Nick Marinos; Malika Rice; Stephen Schluth; Elaine Vaurio; Jack Wang; Michelle Weathers; and Dave Wise made key contributions to this report.
Why GAO Did This Study

To help comply with federal policies aimed at improving federal building energy and environmental management, GSA has implemented a smart buildings program nationwide in federally owned buildings under its custody and control. Two key technologies included in the program are Internet-connected advanced utility meters and an analytical software application, GSAlink, which alerts staff to potential building system problems, such as equipment operating outside of normal hours. GAO was asked to review GSA's smart buildings program. This report examines (1) what is known about the costs and benefits of the program, (2) the extent to which GSA has developed performance goals and measures to help it manage the performance of the program, and (3) any challenges GSA faces in implementing the technologies used in the program and GSA's actions to mitigate those challenges. GAO reviewed relevant GSA documentation, interviewed officials at GSA's central and regional offices, and visited a sample of GSA smart buildings in San Francisco, California, and Washington, D.C., that were selected based on the high concentration of GSA smart buildings located in each city.

What GAO Found

Limited quantified information exists on the costs and benefits of the General Services Administration's (GSA) smart buildings program's key technologies. GSA officials stated that the approximate cost of equipping a building with these technologies ranged between about $48,000 and $155,000. However, they stated that accurately calculating installation costs is challenging because GSA typically installs these technologies in selected buildings incrementally and sometimes as part of other capital improvement projects. Additionally, GSA officials identified perceived operational benefits of the smart buildings program's key technologies, including that these technologies enable officials to more precisely identify building system problems and more closely monitor contractors. However, existing data on the smart buildings program are of limited usefulness in quantifying the program's benefits. For example, according to GSA officials, while data from an application within GSAlink that estimates avoided costs from addressing each fault that GSAlink identifies are useful for prioritizing maintenance actions, the imprecise estimates preclude their use as a measure of actual avoided costs in quantifying program benefits.

GSA does not have documented, clearly defined goals for the smart buildings program, nor has GSA developed performance measures that would allow it to assess the program's progress. These omissions are contrary to leading practices of results-oriented organizations identified in previous GAO work. GSA officials verbally described broad goals for the smart buildings program to GAO, but the agency has not documented these goals. Further, because GSA has not clearly defined its verbally expressed goals, it cannot demonstrate progress in achieving them. For example, GSA officials said that the agency cannot measure progress for the stated goal of improving tenant productivity and comfort because of the subjective nature of individual tenant preferences, such as for office temperatures. Additionally, GSA has not developed performance measures to assess the program, and GSA's lack of data that can be used to quantify benefits of the program impedes its ability to measure the success of the program.
Without clearly defined goals, related performance measures, and data that can be used to measure its progress, GSA is limited in its ability to make informed decisions about the smart buildings program. GSA faces challenges in implementing the smart buildings program and has taken steps to mitigate these challenges. Since smart building technologies are Internet-connected, they are potentially vulnerable to cyberattacks that could compromise security or cause harm to facilities or their occupants. GSA has taken actions intended to mitigate cybersecurity challenges, such as instituting policies to address threats and known vulnerabilities and moving Internet-connected building systems to GSA's secured network. Separately, according to GSA officials, GSA faces implementation challenges related to the limited technological proficiency of some GSA building managers and contractors or lack of buy-in from them. GSA is taking actions intended to address these challenges. For example, it has provided training to staff and contractors, and its central office monitors the extent to which staff address problems detected by the smart buildings program's key technologies.

What GAO Recommends

GAO recommends that GSA establish clearly defined performance goals and related performance measures for the smart buildings program, and identify and develop data to measure progress. GSA concurred with GAO's recommendations.
Background

According to the National Inventory of Dams, as of January 2016 there were approximately 90,500 dams in the United States, and about 2.5 percent of these (approximately 2,100 dams) are associated with hydropower projects. Hydropower projects are owned and operated either by non-federal entities—such as private utility companies, municipalities, and state government agencies—or by federal government agencies, primarily the U.S. Army Corps of Engineers (the Corps) and the Bureau of Reclamation. Collectively, these dams associated with hydropower projects account for about 8 percent of the total electric generating capacity in the United States. Hydropower projects generally consist of one or more dams and other key components associated with hydroelectric power generation and water storage, and they are uniquely designed to accommodate the watersheds, geology, and other natural conditions present at the time of construction. These components include both those that allow operators to adjust reservoir water levels, such as spillways and gates, and those that produce and distribute electricity, such as transmission lines and powerhouses, among others. (See fig. 1.)

The Federal Power Act provides for FERC's regulatory jurisdiction over a portfolio of about 1,000 non-federal hydropower projects comprising over 2,500 dams. While FERC does not construct, own, or operate dams, it licenses and provides oversight of non-federal hydropower projects to promote their safe operation. Licensees are responsible for the safety and liability of dams, pursuant to the Federal Power Act, and for their continuous upkeep and repair using sound and prudent engineering practices. FERC officials in each of the agency's five regional offices work directly with licensees to help ensure these projects comply with licenses and meet federal guidelines for dam safety. In addition, stakeholder groups such as the Association of State Dam Safety Officials can assist licensees in staying current on federal and state dam laws and regulations, dam operations and maintenance practices, and emergency action planning, among other things.

FERC's regulations, supplemented by its Operating Manual and Engineering Guidelines, establish a framework for its dam safety oversight approach. FERC's Operating Manual provides guidelines for the FERC staff performing inspections that are aimed at ensuring that structures are safe, are being properly maintained, and are being operated safely. FERC's Engineering Guidelines provides FERC staff and licensees with procedures and criteria for the review and analysis of license applications, project modification proposals, technical studies, and dam designs. For example, one chapter presents guidelines for FERC staff to use to determine the appropriateness and level of geotechnical investigations and studies for dams. The Engineering Guidelines states that every dam is unique and that the safety analysis of each dam requires that engineers apply technical judgment based on their professional experience.

As part of FERC's safety oversight approach, it assigns a hazard classification to each dam in accordance with federal guidelines that consider the potential human or economic consequences of the dam's failure. The hazard classification does not indicate the structural integrity of the dam itself, but rather the probable effects if a failure should occur. Depending on the hazard classification, the extent and frequency of safety oversight activities can vary.
Low hazard dams are those where failure—an uncontrolled release of water from a water-retaining structure—would result in no probable loss of human life but could cause low economic and/or environmental losses.

Significant hazard dams are those where failure would result in no probable loss of human life but could cause economic loss, environmental damage, or other losses.

High hazard dams are those where failure would probably cause loss of human life.

FERC has designed a multi-layered oversight approach that involves both independent and coordinated actions with dam owners and independent consultants. Key elements of this approach include ensuring licensees have a safety program in place, conducting regular safety inspections, reviewing technical analyses, and analyzing safety as a part of project relicensing. (See fig. 2.)

Licensee's dam safety program. According to FERC guidance, licensees have the most important role in ensuring dam safety through continuous visual surveillance and ongoing monitoring to evaluate the health of the structure. Beyond this expectation for continuous oversight, FERC requires licensees of high and significant hazard dams to have an Owner's Dam Safety Program.

FERC dam safety inspection. The dam safety inspection, also called the operation inspection, is a regularly scheduled inspection conducted by a FERC regional office project engineer, primarily addressing dam and public safety. FERC's Operating Manual establishes how frequently a FERC engineer conducts dam safety inspections.

Independent consultant inspection and potential failure mode analysis. FERC requires licensees to hire a FERC-approved independent consulting engineer to inspect and evaluate high hazard dams and certain types of dams above a certain height or size and to submit a report detailing the findings. Additionally, FERC requires the licensee of a high or significant hazard dam to conduct a potential failure mode analysis, an exercise to identify and assess all potential failure modes under normal operating water levels and under extreme conditions caused by floods, earthquakes, and other events.

FERC relicensing of projects. FERC issues hydropower licenses for the construction of new hydropower projects and reissues licenses for existing projects when licenses expire. Licensees may submit applications for a new license for the continued operation of existing projects as part of a process known as relicensing. During relicensing, in addition to the power and development purposes for which FERC issues licenses, FERC must consider safety as well as environmental, recreational, cultural, and resource development factors, among others, when evaluating projects, according to its guidance. In addition, FERC requires licensees to conduct various engineering studies related to dam performance in accordance with FERC safety requirements. Required engineering studies focus on dam performance as affected by hydrology, seismicity, and dam stability. Licensees may also produce engineering studies, such as a focused spillway assessment, for their own operations or at the request of FERC.
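The hazard classifications described above are a function of the probable consequences of failure, not of a dam's condition. A simple sketch of that decision logic follows; it is our illustration of the federal guideline definitions, not anything FERC publishes as code.

    from enum import Enum

    class Hazard(Enum):
        LOW = "low"                  # no probable loss of life; low losses
        SIGNIFICANT = "significant"  # no probable loss of life, but other losses
        HIGH = "high"                # failure would probably cause loss of life

    def classify(probable_loss_of_life: bool, appreciable_other_losses: bool) -> Hazard:
        """Assign a hazard class from the probable effects of a failure,
        mirroring the definitions described above."""
        if probable_loss_of_life:
            return Hazard.HIGH
        if appreciable_other_losses:
            return Hazard.SIGNIFICANT
        return Hazard.LOW

    # Example: a dam whose failure would damage farmland but threaten no homes.
    print(classify(probable_loss_of_life=False, appreciable_other_losses=True))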
FERC Staff Collect Safety Information during Inspections of Individual Dams, but FERC Has Not Analyzed Dam Safety across Its Entire Portfolio

FERC Staff Generally Followed Guidance to Collect Information during Safety Inspections of Individual Dams That We Reviewed but Have Inconsistently Recorded Such Information

Information Collection

Based on our analysis of the 42 dam safety inspections we reviewed, we found that FERC staff generally conducted these inspections, and collected information from them, consistent with guidance in FERC's Operating Manual. According to the Operating Manual, staff's approach to conducting these inspections and collecting information includes preparing for the inspection by reviewing documents, conducting a field inspection of the dam and associated project components, and discussing inspection findings with licensees and with FERC supervisors.

Preparation for inspection: We found that FERC staff generally met document review requirements in preparation for safety inspections of the 42 dams we reviewed. (See table 1.) According to the Operating Manual, FERC staff are to review safety-related information contained in documents such as potential failure mode analyses and hazard potential classifications. For example, we found that staff documented their review of the most recent independent consultant inspection report and potential failure mode analysis for each of the 16 high hazard dams we reviewed. FERC staff told us that they generally used checklists when preparing for these inspections. For example, some of the staff told us they tailor the checklist included in the Operating Manual based on the dam's type, characteristics, and hazard classification. Additionally, for each of the dams in our sample, staff stated that they prepared for the inspection by reviewing prior inspection reports and recommendations.

Field inspection: We found that FERC staff generally met requirements for reviewing project components and documenting their findings from field inspections of the 42 dams we reviewed. (See table 2.) According to the Operating Manual, FERC staff are to conduct visual inspections of the dam, typically alongside the licensee, to assess the dam and project components by observing their condition and identifying any safety deficiency or maintenance requirement. Also during the inspection, FERC staff are to compare current conditions of the dam and project components to those described in prior inspection reports and, as applicable, collect information on the licensee's progress toward resolving deficiencies and maintenance issues that can affect safety. To assess safety, FERC staff we interviewed stated that they primarily rely on their engineering judgment.

Inspection findings: According to our interviews with FERC staff from selected projects, we found that staff generally followed FERC guidance in discussing inspection findings with licensees and supervisors prior to preparing inspection reports to document their findings. According to the Operating Manual, following the dam safety inspection, FERC staff are to discuss the inspection with the licensee, giving direction on how to address any findings. Additionally, upon returning to the office, staff are to discuss inspection findings with their supervisors, who may suggest additional actions.
FERC staff are then to develop a dam safety inspection report that documents observations and conclusions from their pre-inspection preparation and their field inspection and identifies follow-up actions for the licensee. We found that FERC staff prepared inspection reports to document findings from the 42 dam safety inspections we reviewed. In response to inspection findings, FERC requires licensees to submit a plan and schedule to remediate any deficiency, which FERC staff then review, approve, and monitor until the licensees have addressed the deficiency.

Information Recording

While we found that FERC staff conducted inspections and collected inspection findings consistently in the files we reviewed, FERC's approach to recording that information varies across its regions, limiting the information's usefulness. FERC relies on multiple systems to record inspection information and affords broad discretion to its staff on how to characterize findings, such as whether to track inspection findings as maintenance issues or as safety deficiencies. To record inspection information, FERC staff use the Data and Management System (DAMS), the Office of Energy Projects-IT (OEP-IT) system, and spreadsheets. In particular, according to FERC staff:

Four of FERC's five regional offices use DAMS—which is primarily a workload tracking tool—to track plans and schedules associated with safety investigations and modifications as well as inspection follow-up items. FERC staff stated that because inspection information in DAMS is recorded as narrative text in a data field rather than as discrete categories, sorting or analyzing the information is difficult.

One regional office uses OEP-IT to track safety deficiencies, although across FERC the system is more widely used to track licensees' compliance with the terms and conditions of their licenses.

Three of FERC's five regional offices also use spreadsheets and other tools that are not integrated with DAMS or OEP-IT to track inspection information and licensee progress toward resolving safety deficiencies.

FERC staff said that the use of these different systems to record deficiencies identified during inspections limits their ability to analyze safety information. For example, according to FERC officials, OEP-IT was not designed to track safety deficiency information and is not compatible with DAMS for use in tracking information on a national level. Furthermore, because spreadsheets and other tools are specific to the regional office in which they are used, FERC staff do not use the information they contain for agency-wide analysis.

Concerning decisions on how to characterize inspection findings, FERC staff rely on professional judgment, informed by their experience and the Engineering Guidelines, to determine whether to track inspection findings as a safety deficiency or as a maintenance item, according to FERC officials. With input from their supervisors, FERC staff also determine what information to record and how to track the status of the inspection finding. For example, staff assigned to a dam at a FERC-licensed project in New Hampshire observed concrete deterioration on several parts of the dam and its spillway and asked the licensee to monitor all concrete surfaces, making repairs as necessary.
According to staff we interviewed, regional staff and supervisors decided not to identify this as a deficiency to be tracked in DAMS because concrete deterioration is normal and to be expected given the area's harsh winter weather. In contrast, staff assigned to a dam at a FERC-licensed project in Minnesota observed concrete deterioration on several parts of the project, including the piers and the powerhouse walls, and entered the safety item in DAMS as requiring repair by the licensee. FERC officials stated they are comfortable with the use of professional judgment to classify and address inspection findings because it is important to allow for consideration of the characteristics unique to each situation and how they affect safety.

FERC's approach to recording inspection information is inconsistent because FERC has not provided standard language and procedures for how staff should record and track deficiencies, including which system to use. Federal standards for internal control state that agencies should design an entity's information system and related control activities to achieve objectives and control risks. In practice, this means that an agency would design control activities—such as policies and procedures—over the information technology infrastructure to support the completeness, accuracy, and validity of information processing by information technology. FERC officials acknowledged that there are inconsistent approaches in where and how staff record safety deficiency information, approaches that limit the information's usefulness as an input to FERC's oversight. While the agency has not developed guidance, officials stated that FERC plans to take steps to improve the consistency of recorded information by replacing the OEP-IT system with a new system, tentatively scheduled for September 2018, that will have a specific function to track dam safety requirements. However, this new system will not replace the functions of DAMS, which FERC will continue to use to store inspection information. The two will exist as parallel systems, with the eventual goal of sharing information between them. By developing standard language and procedures for recording information collected during inspections, FERC could help ensure that the information shared across these systems is comparable, which would allow FERC to identify the extent and characteristics of common safety deficiencies across its entire portfolio of regulated dams. Moreover, with a consistent approach to recording information from individual dam safety inspections, FERC would be positioned to proactively identify comparable safety deficiencies across its portfolio and to tailor its inspections toward evaluating them.

FERC Has Not Used Inspection Information to Fully Assess Safety Risks across Its Regulated Portfolio of Dams

While FERC uses inspection information to monitor a licensee's efforts to address a safety deficiency for an individual dam, FERC has not analyzed information collected from its dam safety inspections to evaluate safety risks across the entire regulated portfolio of dams. For example, FERC has not reviewed inspection information to identify common deficiencies among certain types of dams. Federal standards for internal control state that agencies should identify, analyze, and respond to risks related to their objectives.
These standards note that one method for management to identify risks is the consideration of deficiencies identified through audits and other assessments; dam safety inspections are an example of such an assessment. As part of such an approach, the agency analyzes risks to estimate their significance, which provides a basis for responding to the risks through specific actions. Furthermore, in our previous work on federal facilities, we identified that an advanced use of risk management, involving the ability to gauge risk across a portfolio of facilities, could allow stakeholders to comprehensively identify and prioritize risks at a national level and direct resources toward alleviating them.

FERC officials stated that they have not conducted a portfolio-wide analysis, in part due to the inconsistency of recorded inspection data and because such an evaluation has not been a priority compared to inspecting individual dams. According to officials, the FERC headquarters office collects and reviews information semi-annually from each of its five regional offices on the progress of outstanding dam investigations and modifications in those regions. FERC's review is designed to monitor the status of investigations on each individual dam but does not analyze risks across the portfolio of dams at the regional or national level. For example, officials from the New York Regional Office stated that they do not perform trend analysis across the regional portfolio of dams under their authority but compile year-to-year data for each dam to show any progression or changes from previously collected data.

A portfolio-wide analysis could help FERC proactively identify safety risks and prioritize them at a national level. FERC officials stated that a proactive analysis of its portfolio could be useful in determining how to focus its inspections to alleviate safety risks, but that FERC had not taken such action to date. The benefits of a proactive analysis could be similar to those FERC derived from the analysis it conducted in reaction to the Oroville Dam incident. To conduct this analysis, FERC required 184 project licensees, identified by FERC regional offices as having spillways similar to the failed spillway at the Oroville Dam, to assess the spillways' safety and capacity. According to FERC officials, these assessments identified 27 dam spillways with varying degrees of safety concerns. Officials stated that the spillway assessment initiative was a success because they were able to target a specific subgroup of dams within the portfolio and identify these safety concerns, and that FERC is working with the dam licensees to address them. A similar, proactive approach based on analysis of common deficiencies across the portfolio of dams under FERC's authority could also help to identify, before a safety incident, any safety risks that may not have been targeted during the inspections of individual dams.

FERC Applies Agency Guidance and Uses Professional Judgment to Analyze Engineering Studies of Dam Performance and Evaluate Safety

Licensees and Their Consultants Develop the Engineering Studies Used to Assess Dam Performance

As directed by FERC, licensees and their consultants develop and review, or update, various engineering studies related to dam performance to help ensure their dams meet FERC requirements and remain safe.
FERC regulations and guidelines describe the types and frequency of studies and analyses required based on dams' hazard classifications. For all high hazard and some significant hazard dams, existing studies are to be reviewed by each licensee's consultants every 5 years as part of the independent consultant inspection and accompanying potential failure mode analysis. According to FERC officials, for those significant hazard dams that do not require an independent consultant inspection and for low hazard dams, FERC's regulations and guidelines do not require any studies, but in practice FERC directs many licensees to conduct them. FERC also may request engineering studies in response to dam safety incidents at other projects, or engage a board of consultants to oversee the completion of a study. For example, as previously noted, following the Oroville Dam incident in 2017, FERC requested a special assessment of all dams with spillways similar to the failed spillway at the Oroville Dam.

To develop these studies, all six of the consultants we interviewed stated that they follow guidelines provided by FERC and other dam safety agencies. Specifically, they stated that they use FERC's Engineering Guidelines, which provide engineering principles to guide the development and review of engineering studies. In recognition of the unique characteristics of each dam, including its construction, geography, and applicable loading conditions, the Guidelines provides consultants with flexibility to apply engineering judgment. As a result, the approach that licensees and their consultants use, and the focus of their reviews of engineering studies, may vary across regions or projects. For example, one independent consultant we interviewed noted that seismicity studies are not highlighted during independent consultant inspections for projects in the Upper Midwest, in comparison to projects in other areas of the country, because the region is not seismically active, but that inspections do look closely at ice loads during the winter months.

We found that, to create these studies, licensees and their consultants generally use data from other federal agencies and rely on available modeling tools developed by federal agencies and the private sector to evaluate dam performance. For example, many of the engineering studies we reviewed rely on data from the National Weather Service and the National Oceanic and Atmospheric Administration to estimate precipitation patterns and from the U.S. Geological Survey to estimate seismic activity. In addition, licensees and their consultants use modeling tools and simulations, such as those developed by the U.S. Army Corps of Engineers (the Corps) to estimate hydrology, to develop engineering studies.

FERC staff noted that the engineering studies developed by licensees and their consultants generally focus on the analysis of extreme events, such as earthquakes and floods. FERC staff said that both actual past events and likely future events are considered in determining the magnitude of extreme events. FERC staff cited the probable maximum flood—the flood that would be expected to result from the most extreme combination of reasonably possible meteorological and hydrological conditions—as an example of a dam design criterion that is based on the analysis of extreme events.
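To make the role of such a criterion concrete, the toy check below compares a hypothetical probable maximum flood peak against a spillway's approximate discharge capacity using the standard weir equation. This is an illustration only, not the methodology in FERC's Engineering Guidelines; the coefficient and dimensions are invented, and real analyses route the flood through reservoir storage rather than comparing peak flows directly.

```python
def spillway_capacity_cfs(crest_length_ft: float, max_head_ft: float,
                          discharge_coeff: float = 3.1) -> float:
    """Approximate spillway discharge via the standard weir equation Q = C * L * H**1.5."""
    return discharge_coeff * crest_length_ft * max_head_ft ** 1.5

# Invented numbers for illustration; ignores attenuation from reservoir storage.
pmf_peak_inflow_cfs = 180_000
capacity = spillway_capacity_cfs(crest_length_ft=400, max_head_ft=22)
verdict = "can pass" if capacity >= pmf_peak_inflow_cfs else "cannot pass"
print(f"Spillway capacity ~{capacity:,.0f} cfs {verdict} the assumed PMF peak.")
```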
In describing the efficacy of probable maximum flood calculations, FERC officials stated that they had not observed a flood that exceeded the probable maximum flood calculated for any dam, and they noted that their Engineering Guidelines provides a conservative approach to estimating the probable maximum flood and other extreme events. FERC officials stated that requiring a conservative approach to estimating extreme events helps to mitigate the substantial uncertainty associated with these events, including in light of emerging data estimating the effects of climate change on extreme weather events.

Once developed, the engineering studies we reviewed often remained in effect for a number of years, until FERC or the licensee and its consultant determined that an update was required. For example, we found that the hydrology studies were 20 years old or older for 17 of the 42 dams in our review, including for 9 of the 16 high hazard dams in our sample. FERC's Engineering Guidelines states that studies should be updated as appropriate. For example, FERC's Engineering Guidelines on hydrology studies state that previously accepted flood studies are not required to be reevaluated unless it is determined that a reanalysis is warranted. The Guidelines notes that FERC or the consultant may consider reanalyzing a study for several reasons, including if they identify (1) significant errors in the original study, (2) new data that may significantly alter previous study results, or (3) significant changes in the conditions of the drainage basin. FERC staff and consultants we interviewed stated that age alone is not a primary criterion for updating or replacing studies and that studies should be updated as needed depending on several factors, including age, new or additional data, and professional judgment.

Consultants we interviewed identified some limitations that can affect their ability to develop engineering studies for a dam. For example, they noted that some dams may lack original design information, used prior to construction of the dam, which includes the assumptions and calculations used to determine the type and size of the dam, the amount of water storage capacity, and information on the pre-construction site geology and earthquake potential. FERC officials estimated that for a large percentage of the dams they relicense, the original information is no longer available. For example, according to the report from the independent forensic team investigating the Oroville Dam incident, and as previously noted, some design drawings and construction records for the dam's spillway could not be located, and some other documents that were available were not included in the most recent independent consultant inspection report submitted to FERC. To overcome the lack of original design information, FERC officials told us that licensees and their consultants may use teams of experts, advanced data collection techniques, and other modern methods, where feasible, to assess the dam's ability to perform given current environmental conditions. In cases where design or other engineering information is incomplete, consultants stated that they generally recommend the licensee conduct additional studies based on the risk presented by the missing information, but they also noted that the financial resources of a licensee may affect its willingness and ability to conduct additional studies.
However, FERC officials stated that FERC staff are ultimately responsible for making decisions on whether additional engineering studies are needed to evaluate a dam's performance.

FERC's Staff Reviews of Engineering Studies of Dam Performance Are Based on Its Engineering Guidance, and Professional Judgment Informs Aspects of Its Safety Oversight Approach

FERC has established policies and procedures that use formal guidance, and permit the use of professional judgment, to evaluate and review engineering studies of dam performance submitted by licensees and their consultants. FERC officials in both the headquarters and regional offices emphasized that their role as the regulator is to review and validate engineering studies developed by licensees and their consultants. FERC generally does not develop engineering studies itself; officials noted that dam safety, including the development of engineering studies, is primarily the licensee's responsibility. To carry out their responsibility to ensure public safety, FERC staff stated that they use procedures and criteria in the FERC Engineering Guidelines to review engineering studies and apply professional judgment, leveraging their specialized knowledge, skills, and abilities to support their determinations of dam safety. FERC's Engineering Guidelines provides a framework for the review of engineering studies, though the Guidelines recognizes that each dam is unique and allows for flexibility and exemptions in its use. Moreover, the Guidelines notes that analysis of data is useful when evaluating a dam's performance but should not be used as a substitute for judgment based on experience and common sense.

Because FERC's Engineering Guidelines allows for the application of professional judgment, the methods used to review these studies vary depending on the staff, the region, and individual dam characteristics. For example, FERC staff said that when they review consultants' assumptions, methods, calculations, and conclusions, in some cases they may decide to conduct a sensitivity analysis if—based on the staff's judgment—they need to take additional steps to validate or confirm factors of safety for the project. FERC officials also stated that staff may conduct their own independent analyses, as appropriate, such as evaluating a major structural change to the dam or validating submitted studies. For example, as part of its 2016 review of the Union Valley Dam in California, FERC staff validated the submitted hydrology study by independently calculating key inputs, such as precipitation rates and peak floods, to evaluate the dam's performance and verify the spillway's reported capacity.

In addition, FERC has established various controls to help ensure the quality of its review, including using a risk-based review process, assigning multiple staff to review the studies, and rotating staff responsibilities over time. We have previously found in our reporting on other regulatory agencies that practices such as rotating staff in key decision-making roles and including at least two supervisory staff in oversight reviews help reduce threats to independence and regulatory capture.

Risk-based review process. FERC's review approach is risk-based: the frequency of staff's review of these studies is based on the hazard classification of the dam as well as professional judgment. FERC relies on three primary engineering studies (hydrology, seismicity, and stability), and others as appropriate, which form the basis for determining whether a dam is safe.
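A minimal sketch of this risk-based cadence, assuming the simplified intervals described in this report (a roughly 5-year independent consultant inspection cycle for applicable dams, and as-needed review, commonly at relicensing, for the rest); the function and its intervals are illustrative, not FERC policy:

```python
from datetime import date

def next_study_review(last_review: date, subject_to_ic_inspection: bool) -> str:
    """Toy scheduler reflecting the risk-based review cadence described here."""
    if subject_to_ic_inspection:
        # Studies for these dams are revisited on the ~5-year independent
        # consultant inspection cycle.
        return f"with the next independent consultant inspection (~{last_review.year + 5})"
    # Other dams' studies are reviewed as needed, commonly at relicensing or
    # when the underlying assumptions and data are no longer relevant.
    return "as needed (e.g., at relicensing or when underlying assumptions change)"

print("Next review:", next_study_review(date(2016, 6, 1), subject_to_ic_inspection=True))
```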
In addition, FERC requires licensees to hire a FERC-approved independent consulting engineer at least every 5 years to inspect and evaluate high hazard and other applicable dams and to submit a report detailing the findings as part of the independent consultant inspection process. In general, for the dams we reviewed, we found that FERC staff reviewed engineering studies for dams subject to independent consultant inspections (typically high or significant hazard dams) more frequently than studies for dams for which FERC does not require such an inspection (typically low hazard dams). For example, we found that FERC staff had reviewed, within the last 6 years, the most recent hydrology studies for all 22 high and significant hazard dams in our sample subject to independent consultant inspections and had documented their analysis. According to FERC officials, for dams not subject to an independent consultant inspection, FERC staff review engineering studies on an as-needed basis, depending on whether the underlying assumptions and information from the previous studies are still relevant. For the 20 dams in our study not subject to an independent consultant inspection, we found that most (15) of these studies were reviewed by FERC within the past 10 years, usually during the project's relicensing.

Multiple levels of supervisory review. As part of FERC's quality control and internal oversight process, multiple FERC staff are to review the studies produced by the licensee and its consultant, with the number of successive reviews proportional to the complexity or importance of the study, according to FERC officials. FERC's Operating Manual establishes the general procedure for the review of engineering studies. To begin the review process, the staff assigned to a dam are to review the engineering study and prepare an internal memo on its findings; that memo is then to be reviewed for accuracy and completeness by both a regional office Branch Chief and the Regional Engineer. If necessary, Washington, D.C., headquarters office staff are to review and approve the final memo. Upon completion of the review, FERC staff are to provide a letter to the licensee indicating any particular areas where additional information is needed or where more studies are needed to evaluate the dam's performance. According to FERC officials, each level of review adds successive quality control steps performed by experienced staff. We have previously found in reporting on other regulatory agencies that additional levels of review increase transparency and accountability and diminish the risk of regulatory capture.

Rotation of FERC staff responsibilities. As part of an internal quality control program to help minimize the risk of missing important safety-related items, FERC officials told us they rotate staff assignments and responsibilities approximately every 3 to 4 years. According to FERC officials, this practice decreases the chance that a deficiency would be missed over time due to differences in areas of engineering expertise between or among staff. We have previously found in our reporting on other regulatory agencies that strategies such as more frequently rotating staff in key roles can help reduce risks to supervisory independence and regulatory capture.

Some FERC regional offices have developed practices to further enhance their review of these studies.
For example, the New York Regional Office established a subject matter expert team that helps review dams with unusually complex hydrology issues. This team was created in part because FERC staff noted that some of the hydrology studies conducted in the 1990s and 2000s were not as thorough as staff would have wanted and warranted re-examination. Currently, the New York Regional Office is reviewing the hydrology analysis associated with 12 dam break studies to determine whether the hydrology data used in developing these studies were rigorously developed and validated. According to the FERC staff in this office, using a team of subject matter experts has reduced the Regional Office's review time and improved the hydrology studies' accuracy. FERC staff in the New York Regional Office also told us that they are working with other regional offices, such as the Portland Regional Office, to set up similar technical teams.

FERC procedures require the use of engineering studies at key points over the dam's licensing period to inform components of its safety oversight approach, including during the potential failure mode analyses of individual dams as well as during relicensing.

Potential failure mode analysis. The potential failure mode analysis is to occur during the recurring independent consultant inspection and is conducted by the licensee's independent consultant along with other key dam safety stakeholders. As previously explained, the analysis incorporates the engineering studies and identifies events that could cause a dam to fail. During the potential failure mode analysis, FERC, the licensee, the consultant, and other key dam safety stakeholders are to refer to the engineering studies to establish the environmental conditions that inform dam failure scenarios, the risks associated with these failures, and their consequences for an individual dam. Further, according to a FERC white paper on risk analysis, FERC is beginning to use information related to potential failure modes as inputs to an analysis tool that quantifies risks at each dam. With this information, FERC expects to make relative risk estimates of dams within its inventory and establish priorities for further study or remediation of risks at individual dams, according to the white paper.

Relicensing. During relicensing, FERC staff are to review the engineering studies as well as information such as historical hydrological data and extreme weather events, which also inform their safety evaluation of the licensee's application. FERC officials also stated that, as a result of their relicensing review, they might alter the articles of the new license before it is issued should their reviews indicate that environmental conditions affecting the dam's safety have changed.

FERC Summarizes Information from Required Sources to Evaluate Dam Safety during Relicensing

We found that FERC generally met its requirement to evaluate dam safety during the relicensing process for the 42 dams we reviewed. For the dams we reviewed, we found that during the relicensing process FERC staff review safety information, such as past reports, inspections, and studies conducted by FERC, the licensee, and independent consultants, and determine whether a dam owner operated and maintained its dam safely.
According to FERC staff, the safety review for relicensing is generally a summary of prior safety and inspection information, rather than an analysis of new safety information, unless the licensee proposes a change to the operation or structure. FERC's review during relicensing for the high hazard and significant hazard dams we reviewed was generally consistent with its guidance and safety memo template, though the extent of its review of low hazard dams varied. (See fig. 3.) For example, for the 22 high and significant hazard dams we reviewed, the safety relicensing memos followed the template, and nearly all included summaries of hydrology studies, stability analyses, prior FERC inspections, and applicable independent consultant reports. For the 20 low hazard dams, FERC staff noted that some requirements in the template are not applicable or have been exempted and therefore were not reviewed during relicensing. While low hazard dams were reviewed more inconsistently during relicensing, FERC staff noted that there has been a recent emphasis on more closely reviewing, replacing, or conducting engineering studies, such as the stability study, for low hazard dams during relicensing. Moreover, FERC staff told us that the safety risks associated with these dams are minimal, as the failure of a low hazard dam, by definition, does not pose a threat to human life or economic activity.

According to FERC staff, if a licensee proposed altering the dam or its operations in any way as part of its application for a new license, FERC staff would review the proposed change and may recommend adding articles to the new license prior to its issuance to ensure dam safety. FERC officials noted that, as part of their review, any structural or operational changes proposed by the licensee during relicensing are reviewed by FERC. These officials also noted that FERC generally recommends modifications to the licensees' proposed changes prior to their approval and inclusion in the new license. However, FERC officials noted that, in some cases, additional information is needed before approving a structural or operational change to ensure the change poses no risks. In those instances, FERC may recommend that articles be added to the new license requiring the licensee to conduct additional engineering studies of the issue and submit them to FERC for review and approval. For example, during the relicensing of the Otter Creek project in Vermont in 2014, the licensee proposed changes to the project's operation resulting from construction. As a result, FERC staff recommended adding a number of articles to the license, including requirements that the licensee conduct studies to evaluate the effect of the change on safety and to ensure safety during construction.

During relicensing, third parties—such as environmental organizations, nearby residents and communities, and other federal agencies, such as the U.S. Fish and Wildlife Service—may provide input on various topics related to the project, including safety. However, FERC officials said that very few third parties file studies or comments related to dam safety during relicensing. FERC's template and guidance do not specifically require the consideration of such analyses as part of its safety review, and we did not identify any safety studies submitted by third parties, or reviewed by FERC, for the dams in our sample.
According to FERC officials, when stakeholders submit comments during relicensing, the comments tend to focus on environmental aspects of the project, such as adding passages for fish migration. Further, FERC is not required under the Federal Power Act to respond to any comments from third parties, including those related to dam safety, according to FERC officials. However, according to FERC officials, courts have held that the Administrative Procedure Act precludes an agency from arbitrarily and capriciously ignoring issues raised in comments. Furthermore, these officials stated that if a court determines that FERC did not sufficiently address issues raised during the relicensing process, its orders are subject to being reversed and remanded by the applicable United States courts of appeals. Moreover, FERC officials noted that the information needed to develop third-party safety studies, such as the dam design drawings and engineering studies, is the property of the licensee rather than FERC. In addition, this information may not be readily available to third parties or the public if FERC designates it as critical energy infrastructure information, which would preclude its release to the general public.

FERC staff we interviewed stated that there have been no instances where the Commission denied a new license to a licensee as a result of its safety review during relicensing. FERC staff stated that, given the frequency of other inspections, including FERC staff inspections and independent consultant inspections, it is unlikely staff would find a previously unknown major safety issue during relicensing. FERC staff told us that rather than deny a license for safety deficiencies, FERC will keep a dam owner under the terms of a FERC license to better ensure the licensee remedies existing safety deficiencies. Specifically, FERC staff noted that under a license, FERC can ensure dam safety by (1) closely monitoring the deficiency's remediation progress through its inspection program; (2) adding license terms in the new license tailored to the specific safety deficiency; and (3) as necessary, pursuing compliance and enforcement actions, such as civil penalties or stop work orders, to enforce the terms and conditions of the license. For example, prior to and during the relicensing of a FERC-licensed project in Wisconsin in 2014, FERC's review identified that the spillway capacity was inadequate. While the project was relicensed in 2017 without changes to the spillway, FERC officials stated that they have been overseeing the plans and studies for the remediation of the spillway through their ongoing inspection program. However, if an imminent safety threat is identified during the relicensing review, FERC officials stated that they will order the licensee to take actions to remedy the issue immediately. Moreover, FERC officials noted that, if necessary, a license can be revoked for failure to comply with its terms.

Conclusions

FERC designed a multi-layered safety approach—which uses inspections, studies, and other assessments of individual dams—to reduce exposure to safety risks. However, as the spillway failure at the Oroville Dam project in 2017 demonstrated, it is not possible to eliminate all uncertainties and risks. As part of a continuing effort to ensure dam safety at licensed projects, FERC could complement its approach to evaluating the safety of individual dams by enhancing its capability to assess and identify risks across its portfolio of licensed dams.
Specifically, while FERC has collected and stored a substantial amount of information from its individual dam safety inspections, FERC's approach to recording this information is inconsistent due to a lack of standard language and procedures. By clarifying its approach to recording information collected during inspections, FERC could help ensure that the information recorded is comparable when shared across its regions. Moreover, the absence of standard language and procedures to consistently record inspection information impedes a broader, portfolio-wide analysis of the extent of and characteristics associated with common safety deficiencies identified during FERC inspections. While FERC has not yet conducted such an analysis, a proactive assessment of common safety inspection deficiencies across FERC's portfolio of licensed dams—similar to its identification of dam spillways with safety concerns following the Oroville Dam incident—could help FERC and its licensees identify safety risks prior to a safety incident and develop approaches to mitigate those risks.

Recommendations for Executive Action

We are making the following two recommendations to FERC:

FERC should provide standard language and procedures to its staff on how to record information collected during inspections, including how and where to record information about safety deficiencies, in order to facilitate analysis of safety deficiencies across FERC's portfolio of regulated dams. (Recommendation 1)

FERC should use information from its inspections to assess safety risks across its portfolio of regulated dams to identify and prioritize safety risks at a national level. (Recommendation 2)

Agency Comments

We provided a draft of this report to FERC for review and comment. In its comments on the draft report, FERC said it generally agreed with the findings and found the recommendations to be constructive. FERC said that it would direct staff to develop appropriate next steps to implement GAO's recommendations. These comments are reproduced in appendix IV. In addition, FERC provided technical comments, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman of FERC and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or vonaha@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Appendix I: Summary of the Federal Energy Regulatory Commission's Actions to Help Ensure Licensee Compliance with Requirements Related to Dam Safety

FERC seeks to ensure licensees' compliance with FERC regulations and license requirements, including remediating safety deficiencies, by using a mix of preventive strategies to help identify situations before they become problems and reactive strategies such as issuing penalties. As part of its efforts, FERC published a compliance handbook in 2015 that provides an overall guide to compliance and enforcement of a variety of license requirements, including dam safety.
The handbook includes instructions for implementing FERC rules, regulations, policies, and programs designed to ensure effective compliance with license conditions, including dam safety, to protect and enhance beneficial public uses of waterways. FERC developed a range of enforcement actions, which include holding workshops to encourage compliance and issuing guidance, and which increase in severity depending on the noncompliance issue. (See fig. 4.) More broadly, FERC's guidance directs officials to determine enforcement actions and time frames for those actions on a case-by-case basis, depending on the characteristics of the specific compliance issue.

According to FERC officials, many of these safety compliance discussions are handled informally. In addition, FERC's compliance approach emphasizes activities that assist, rather than force, licensees to achieve compliance, according to its guidance. These activities include facilitating open lines of communication with licensees, participating in technical workshops, and publishing brochures and guidance documents, among other efforts. Also, according to these officials, FERC works with licensees to provide guidance and warnings about possible noncompliance, in order to avoid using enforcement tools if possible. According to FERC officials, any safety issue that endangers the public will result in an immediate penalty or removal of the dam from power generation, but such action is not taken lightly.

Additionally, the length of time between when a safety deficiency is identified and when it is resolved varies substantially depending on the specific project. As stated earlier in this report, FERC works with licensees to determine a plan and schedule for investigating safety issues and making any needed modifications. FERC officials stated that the majority of safety compliance issues are resolved within a month. However, FERC officials stated that if a licensee repeatedly does not take steps to address a compliance issue, FERC will explore enforcement actions through a formal process. According to officials, FERC's enforcement options are based on authorities provided under the Federal Power Act, and such options are flexible because of the variation in hazards, consequences, and dams. According to FERC officials, to ensure compliance with safety regulations, if a settlement cannot be reached, FERC may, among other things, issue an order to show cause, issue civil penalties in the form of fines to licensees, impose stop work or cease power generation orders, revoke licenses, and seek injunctions in federal court. Nevertheless, FERC officials stated that there is no specific requirement for how quickly compliance issues or deficiencies must be resolved and that some issues can take years to resolve. For example, in 2004, the current licensee of a hydroelectric project operating in Edenville, Michigan, acquired the project, which FERC had found to be in a state of noncompliance at that time. FERC staff made numerous attempts to work with the licensee to resolve the compliance issues. However, they were unable to resolve these issues and as a result issued a cease generation order in 2017, followed in 2018 by a license revocation order. In practice, FERC's use of these enforcement tools to resolve safety issues has been fairly limited, particularly in comparison to other license compliance issues, according to FERC officials.
Since 2013, FERC has issued one civil penalty for a safety-related hydropower violation and has issued compliance orders on eight other projects for safety-related reasons, including orders to cease generation on three projects.

Appendix II: Information on Selected Models and Data Sets Used to Develop and Evaluate Dam Performance Studies

For the 14 projects and 42 dams we reviewed, FERC licensees and their consultants used a variety of tools to develop engineering studies of dam performance (see table 3). These tools included programs and modeling tools developed by government agencies, such as the U.S. Army Corps of Engineers (the Corps), as well as commercially available modeling tools. FERC officials stated that they also used a number of the same tools used by licensees and consultants. Similarly, for the 14 projects and 42 dams we reviewed, FERC licensees and their consultants used a variety of datasets to develop engineering studies of dam performance (see table 4). These datasets included data maintained and updated by various government agencies, including the U.S. Geological Survey and the National Oceanic and Atmospheric Administration. FERC officials stated that they also used a number of the same datasets used by licensees and consultants.

Appendix III: Objectives, Scope, and Methodology

This report assesses (1) how FERC collects information from its dam safety inspections and the extent to which FERC analyzes it; (2) how FERC evaluates engineering studies of dam performance to analyze safety; and (3) the extent to which FERC reviews dam safety information during relicensing and the information FERC considers. This report also includes information on FERC actions to ensure licensee compliance with license requirements related to dam safety (app. I) and selected models and data sets used to develop and evaluate engineering studies of dam performance (app. II).

For each of the objectives, we reviewed laws, regulations, FERC guidance, templates, and other documentation pertaining to FERC's evaluation of dam safety. In addition, we reviewed an independent forensic team's assessment of the causes of the Oroville Dam incident, including the report's analysis of FERC's approach to ensuring safety at the project, to understand any limitations of FERC's approach identified by the report. We also reviewed dam safety documentation, including dam performance studies, FERC memorandums, the most recent completed inspection report, and other information, from a non-probability sample of 14 projects encompassing 42 dams relicensed from fiscal years 2014 through 2017. (See table 5.) We selected these projects and dams to include ones that were geographically dispersed, had varying risks associated with their potential failure, and had differences in the length of their relicensing process. We developed a data collection instrument to collect information from the dam safety documentation and analyzed data from the sample to evaluate the extent to which FERC followed its dam safety guidance across the selected projects. To develop the data collection instrument, we reviewed and incorporated FERC oversight requirements from its regulations, guidance, and templates. We conducted three pre-tests of the instrument and revised the instrument after each pre-test.
To ensure consistency and accuracy in the collection of this information, for each dam in the sample, one analyst conducted an initial review of the dam safety documentation; a second analyst reviewed the information independently; and the two analysts reconciled any differences. Following our review of the information from the dam safety documentation, we conducted semi-structured interviews with FERC engineering staff associated with each of the 14 projects and 42 dams to obtain information about FERC's inspections, review of dam performance studies, and analysis of safety during the relicensing of these projects. Our interviews with these FERC staff provided insight into FERC's dam safety oversight approach and are not generalizable to all projects. We also interviewed FERC officials responsible for dam safety about dam safety practices.

In addition, to review how FERC collects information from its dam safety inspections and the extent to which FERC analyzes it, we reviewed inspection data from FERC's information management systems from fiscal years 2014 through 2017. To assess the reliability of these data, we reviewed guidance and interviewed FERC officials. We determined that the data were sufficiently reliable for our purposes. We compared FERC's approach to collecting, recording, and using safety information to federal internal control standards for the design of information systems and related control activities. We also reviewed our prior work on portfolio-level risk management.

To assess how FERC evaluates engineering studies of dam performance to analyze dam safety, we reviewed FERC policies and guidance. We interviewed six independent consultants with experience inspecting and analyzing FERC-regulated dams to understand how engineering studies of dam performance are developed. We selected consultants who had recently submitted an inspection report to FERC (between December 2017 and February 2018) based on the geographic location of the project they reviewed, their experience conducting these inspections, and the number of reports submitted to FERC over this time period. (See table 6.) Our interviews with these consultants provided insight into FERC's approach to conducting and reviewing studies and are not generalizable to all projects or consultants.

To evaluate the extent to which FERC reviews dam safety information during relicensing and the information it considers, we reviewed templates developed by FERC to assess safety during relicensing and analyzed the extent to which staff followed guidance in these templates for the 14 projects and 42 dams in our sample. We also interviewed stakeholders, including the National Hydropower Association and Friends of the River, to obtain general perspectives on FERC's relicensing approach. Our interviews with these stakeholders provided insight into FERC's approach to relicensing, and their views are not generalizable across all stakeholders. To review actions to ensure licensee compliance with license requirements related to dam safety, we reviewed FERC's guidance related to compliance and enforcement and interviewed FERC officials responsible for implementation of the guidance. To review information on models and datasets used to develop and evaluate engineering studies of dam performance, we reviewed dam safety documentation associated with the projects in our sample (described previously), reviewed FERC documentation, and interviewed FERC officials.
We conducted this performance audit from July 2017 to October 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix IV: Comments from the Federal Energy Regulatory Commission

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Andrew Von Ah, (202) 512-2834 or vonaha@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Mike Armes (Assistant Director); Matt Voit (Analyst-in-Charge); David Blanding; Brian Chung; Geoff Hamilton; Vondalee Hunt; Rich Johnson; Jon Melhus; Monique Nasrallah; Madhav Panwar; Malika Rice; Sandra Sokol; and Michelle Weathers made key contributions to this report.
Why GAO Did This Study

In February 2017, components of California's Oroville Dam failed, leading to the evacuation of nearly 200,000 nearby residents. FERC is the federal regulator of the Oroville Dam and over 2,500 other dams associated with nonfederal hydropower projects nationwide. FERC issues and renews licenses—which can last up to 50 years—to dam operators and promotes safe dam operation by conducting safety inspections and reviewing technical engineering studies, among other actions. GAO was asked to review FERC's approach to overseeing dam safety. This report examines (1) how FERC collects information from its dam safety inspections and the extent of its analysis, and (2) how FERC evaluates engineering studies of dam performance to analyze safety, among other objectives. GAO analyzed documentation on a non-generalizable sample of 42 dams associated with projects relicensed from fiscal years 2014 through 2017, selected based on geography and hazard classifications, among other factors. GAO also reviewed FERC regulations and documents and interviewed FERC staff associated with the selected projects as well as technical consultants, selected based on the frequency and timing of their reviews.

What GAO Found

The Federal Energy Regulatory Commission's (FERC) staff generally followed established guidance in collecting safety information from dam inspections for the dams GAO reviewed, but FERC has not used this information to analyze dam safety portfolio-wide. For these 42 dams, GAO found that FERC staff generally followed guidance in collecting safety information during inspections of individual dams and key structures associated with those dams. (See figure.) However, FERC lacks standard procedures that specify how and where staff should record the safety deficiencies they identify. As a result, FERC staff use multiple systems to record inspection findings, creating information that cannot be easily analyzed. Further, while FERC officials said inspections help oversee individual dams' safety, FERC has not analyzed this information to identify safety risks across its portfolio. GAO's prior work has highlighted the importance of evaluating risks across a portfolio. FERC officials stated that they have not conducted portfolio-wide analyses because they prioritize individual dam inspections and responses to urgent dam safety incidents. However, following the Oroville incident, a FERC-led initiative to examine dam structures comparable to those at Oroville identified 27 dam spillways with varying degrees of safety concerns, which FERC officials stated they are working with dam licensees to address. A similar, proactive portfolio-wide approach, based on analysis of common inspection deficiencies across the portfolio of dams under FERC's authority, could help FERC identify safety risks prior to a safety incident.

FERC evaluates engineering studies of dam performance using its Engineering Guidelines, which recognize that each dam is unique and allow for flexibility and exemptions in their use. FERC staff use the studies to inform other components of their safety approach, including the analysis of dam failure scenarios and their review of safety to determine whether to renew a license.

What GAO Recommends

GAO recommends that FERC (1) develop standard procedures for recording information collected as part of its inspections, and (2) use inspection information to assess safety risks across FERC's portfolio of dams. FERC agreed with GAO's recommendations.
Background

History of the Distressed Asset Stabilization Program

The National Housing Act authorized HUD's Office of Housing to accept assignment of and sell defaulted single-family mortgage loans. Additionally, Office of Management and Budget (OMB) Circular No. A-11 (2016) states that under the Debt Collection Improvement Act of 1996, credit agencies with over $100 million in loan assets are expected to sell defaulted loan assets that are more than 1 year delinquent, with some exceptions. The OMB Circular further states that an agency may not be required to sell loan assets if a serious conflict exists between selling loans and policy goals. In 2017, FHA insured over $1 trillion in single-family mortgage loans, including more than 200,000 loans in default. Consistent with the National Housing Act and the OMB Circular, FHA uses DASP to reduce its backlog of defaulted loans by selling loans that are severely delinquent. As of 2016, loans must be at least 8 months delinquent to be eligible for sale through DASP. In addition, servicers must evaluate borrowers for all FHA loss mitigation options in order for loans to be eligible for sale through DASP.

FHA has called its single-family forward loan sales program by different names over the years, but it became known as DASP beginning with FHA's third loan sale in 2012. We use DASP throughout this report to refer to FHA loan sales, regardless of the timing or the program name. Between 2010 and 2016, FHA held a total of 16 sales, with between one and four sales annually. As seen in figure 1, the number of loans sold varied significantly among the sales. Figure 2 shows the extent to which FHA sold defaulted loans in each state in 2013-2016. The map also indicates states with longer expected foreclosure timelines. The foreclosure process is governed by state laws and differs across states. FHA establishes expected timelines for completing foreclosure and acquiring title to the property in each state. As discussed later, the foreclosure process involves a number of costs, which may be higher in states with longer expected foreclosure timelines. Additional information on the loans sold through DASP can be found in appendix II.

Loan Delinquency, Loss Mitigation, and Costs to the Mutual Mortgage Insurance Fund Associated with Different Loan Disposition Methods

A loan becomes delinquent after the borrower misses a single payment and goes into default once it is at least 31 days—two full payments—past due, including when a borrower misses payments sporadically over time without repaying the missed amounts. Loan servicers—which can be large mortgage finance companies or commercial banks—are responsible for accepting payments from borrowers and managing mortgages. FHA requires the servicers to provide monthly reports on each loan with one or more missed payments through its Single Family Default Monitoring System (default monitoring system).
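A minimal sketch of these status thresholds, combined with the 8-month DASP delinquency threshold noted above; the function and its inputs are illustrative simplifications, not FHA's actual servicing logic:

```python
def loan_status(days_past_due: int, months_delinquent: int) -> str:
    """Classify a loan using the simplified thresholds described above."""
    if days_past_due <= 0:
        return "current"
    if days_past_due <= 30:
        return "delinquent (one missed payment)"
    if months_delinquent >= 8:
        return "in default; meets the DASP delinquency threshold (as of 2016)"
    return "in default (at least 31 days, i.e., two full payments, past due)"

print(loan_status(days_past_due=250, months_delinquent=8))
```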
Before initiating foreclosure actions, FHA requires servicers to contact the borrower, collect information on the borrower's finances, and evaluate the borrower using the following ordered steps, referred to as the waterfall of loss mitigation priorities (a simplified sketch of this ordered evaluation appears below):

informal forbearance through an oral agreement allowing for reduced or suspended payments for a period of 3 months or less;

formal forbearance with written repayment plans, which combine a suspension or reduction in monthly mortgage payments with a repayment period;

special forbearance of up to 12 months for borrowers who are unemployed;

FHA-Home Affordable Modification Program (HAMP), which works to get a borrower to return to making regular payments (reperforming); FHA-HAMP offers qualified borrowers a loan modification that results in an affordable monthly payment amount that does not exceed 40 percent of the borrower's gross monthly income by reamortizing the debt for a new 30-year term at a fixed interest rate at or below the market rate and, under certain circumstances, deferring the payment of principal through the use of a partial claim; and

non-retention disposition methods, including a preforeclosure sale (also known as a short sale), in which the borrower sells the property and the mortgage is satisfied for less than the amount owed, or a deed-in-lieu of foreclosure, in which the borrower voluntarily transfers the property to FHA to release all mortgage obligations; FHA may also provide move-out incentive payments to borrowers for short sales and deeds-in-lieu of foreclosure.

To qualify for most of these actions, borrowers must be in default. A servicer must evaluate a borrower for the loss mitigation options monthly, but a borrower may not qualify for any option. However, a borrower's circumstances are fluid, and eligibility can change. For example, borrowers who previously did not qualify for any loss mitigation options could be eligible to be evaluated again after starting a new job. FHA provides servicers with incentive payments of varying size for taking certain loss mitigation actions.

FHA generally requires servicers either to use a loss mitigation option for which a borrower qualifies or to initiate foreclosure within 6 months of the default date, but a loan also may become eligible for disposition through a DASP sale when loss mitigation has been exhausted and the loan meets other eligibility criteria. FHA provides servicers with a list of loan eligibility criteria in the servicer agreement for each sale. Servicers use the criteria to identify which loans are eligible for a DASP sale. For example, eligibility criteria include that a loan must be FHA-insured, have no more than four dwelling units, and have an unpaid principal balance (amount owed) greater than $20,000. Other criteria relate to the length of delinquency, the loan-to-value (LTV) ratio, and the condition of the property. Loans that qualify for loss mitigation or have a foreclosure date scheduled or completed during the sale period are not eligible for DASP. Each of the disposition methods FHA uses when loss mitigation on defaulted loans is exhausted has different costs to FHA's MMI Fund (see table 1).
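The sketch below walks the ordered waterfall described above, including the FHA-HAMP affordability test based on standard fixed-rate amortization. The eligibility checks and borrower fields are invented simplifications for illustration; actual FHA rules are more detailed:

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard fixed-rate amortization: M = P*r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def evaluate_waterfall(borrower: dict) -> str:
    """Walk the ordered loss mitigation steps; eligibility rules are invented."""
    if borrower["hardship_months"] <= 3:
        return "informal forbearance (oral agreement, 3 months or less)"
    if borrower["can_repay_arrears"]:
        return "formal forbearance (written repayment plan)"
    if borrower["unemployed"]:
        return "special forbearance (up to 12 months)"
    # FHA-HAMP: reamortize for a new 30-year term at or below the market rate;
    # the modified payment must not exceed 40 percent of gross monthly income.
    modified = monthly_payment(borrower["unpaid_balance"], borrower["market_rate"])
    if modified <= 0.40 * borrower["gross_monthly_income"]:
        return f"FHA-HAMP modification (payment ~${modified:,.0f}/month)"
    return "non-retention options (short sale or deed-in-lieu of foreclosure)"

print(evaluate_waterfall({
    "hardship_months": 9,
    "can_repay_arrears": False,
    "unemployed": False,
    "unpaid_balance": 150_000,
    "market_rate": 0.045,
    "gross_monthly_income": 2_400,
}))
```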
For the nonretention disposition methods of short sale, deed-in-lieu of foreclosure, third-party sale, or foreclosure—which we refer to as "out of home" methods—FHA pays a claim to the servicer in the amount of the unpaid mortgage balance and other expenses. In addition, for a deed-in-lieu of foreclosure or foreclosure—in which the property enters HUD's inventory of real estate owned (REO) property—FHA also incurs costs associated with maintaining, repairing, and selling the property. This generally results in a greater loss to the MMI Fund. In the case of a DASP sale, FHA avoids interest and servicing costs during the foreclosure period as well as REO-related expenses, but incurs a loss equal to the difference between the unpaid balance plus expenses and the amount FHA receives for the loan it sells. Process of the Distressed Asset Stabilization Program The loan sale process has three distinct phases: presale, due diligence and bid, and postsale (see figs. 3, 4, and 5, respectively). FHA contractors (the transaction specialist, the compliance analytics contractor, and the program financial advisor) facilitate and perform various tasks throughout these phases. The summary below reflects the process according to 2016 sales documents (the most recent DASP sales documents available), other supplemental information, and interviews with FHA officials and contractors. Figure 3 shows the presale phase. During this phase, FHA or its contractor notifies interested servicers and communicates loan eligibility criteria to servicers through the servicer agreement. Servicers that plan to participate in the sale identify a list of eligible loans, certify the accuracy and eligibility of the loans, and provide the list to FHA for review through the Claim Submission Report. The servicer uploads information on the loans submitted to FHA. FHA creates the submitted loan database, which includes each accepted loan's current unpaid balance, payment history, and an estimate of the underlying property value. According to FHA staff, FHA reviews the eligible loans submitted by servicers and, with the advice of its transaction specialist contractor, groups them into pools based on geography and other factors. FHA sells loans in national pools or Neighborhood Stabilization Outcome (NSO) pools, for which purchasers must meet specific neighborhood stabilization outcomes for 50 percent or more of the properties in the pool. Next, an FHA contractor notifies prospective purchasers about the upcoming sale via email, and notices are posted in the Federal Register, industry publications, and newspapers. Purchasers can include private equity firms, hedge funds, rental housing companies, and nonprofit organizations. Prospective purchasers must submit to FHA a Confidentiality Agreement and a Qualification Statement. FHA reviews the documentation to determine whether the purchaser qualifies to participate in the sale. Figure 4 depicts the due diligence and bid phase of a DASP sale. During this phase, prospective purchasers receive access to the data room—a shared data website—to review materials including the loan information provided by servicers (due diligence materials); bid instructions; and the sale agreement that describes representations, warranties, and postsale requirements, among other things. The servicer, FHA staff, and FHA contractors continue to verify the eligibility of the loans. Prospective purchasers place bids on each loan in a pool and deposit a percentage of their total bid amount.
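As a concrete illustration of the eligibility screening that runs through the presale and due diligence phases, the sketch below applies the criteria named earlier (FHA-insured, no more than four dwelling units, unpaid balance above $20,000, a minimum delinquency). It is hypothetical: the field names are invented, the delinquency floor shown is the 2016 value, and the LTV floor is a placeholder since actual floors varied by state and sale:

```python
def is_eligible(loan: dict, min_months_delinquent: int = 8,
                min_ltv: float = 0.85) -> bool:
    # Screen one loan against the criteria named in this report.
    return (
        loan["fha_insured"]
        and loan["dwelling_units"] <= 4
        and loan["unpaid_principal_balance"] > 20_000
        and loan["months_delinquent"] >= min_months_delinquent
        and loan["ltv_ratio"] >= min_ltv
        and not loan["in_active_loss_mitigation"]
        and not loan["foreclosure_sale_scheduled"]
    )

def select_for_sale(portfolio: list[dict]) -> list[dict]:
    # The list a servicer would certify and submit to FHA.
    return [loan for loan in portfolio if is_eligible(loan)]
```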
FHA evaluates the bids and selects the highest bidder for each pool based on the total of the loan-level bids. FHA then notifies that bidder and provides an executed purchaser agreement that describes postsale servicing and reporting requirements. Purchasers must agree to follow the terms of the purchaser agreement including avoiding finalizing foreclosures for 6 or 12 months (depending on whether the sale occurred prior to July 2015), evaluating borrowers for loan modification, and reporting outcomes to FHA. Figure 5 depicts the postsale phase. During the postsale phase, FHA provides the list of sold loans to the servicer and winning purchaser, which together determine servicing transfer dates. After bid day, servicers verify that loans continue to meet eligibility criteria for the sale and begin submitting insurance claims to FHA. Purchasers pay FHA for the loans that are sold, and servicers transfer loan information and complete mortgage files to the purchasers. When servicers submit claims to FHA for sold loans, they must report the reason any loans are not transferred. For example, a loan might not be transferred due to ongoing loss mitigation activity or another reason, such as no longer meeting delinquency eligibility criteria, and would remain with the servicer and FHA insured. Following the final transfer of loan documentation, servicing is transferred from the servicer to the purchaser. The servicer notifies the borrowers of the transfer of servicing and termination of their FHA mortgage insurance. Following the transfer, the purchaser sends the borrowers a similar notice of transfer and any required disclosures. Following the final settlement date, the purchaser submits the first of 16 quarterly reports on the status of the sale portfolio using the format provided in FHA’s Post-Sale Reporting tool. If a purchaser demonstrates a pattern of failing to report, FHA may disqualify the purchaser from future sales. During the first 12 months of the reporting period, purchasers must evaluate borrowers for a HAMP modification or a substantially similar modification. Additionally, the purchaser must avoid foreclosure for 12 months unless the home is vacant or there are extenuating circumstances. The purchaser agreement allows the purchaser 10 months starting with the servicing transfer date to notify HUD of any alleged breach of FHA’s representations and warranties on purchased loans. For example, a breach could be that a loan does not meet eligibility requirements, is not covered by a valid hazard insurance policy, or has an outstanding mechanic’s lien. After notifying the original servicer and reviewing any response, FHA determines whether there is a breach and the appropriate remedy. The breach remedy can include a cure of the breach (such as by the servicer paying an outstanding lien), reduction in claim payment, or repurchase by the servicer. The servicer has 60 days to comply with the remedy. If a breach results in the repurchase of the loan by the original servicer, the purchaser will transfer servicing back to the original servicer. Program Requirements and Processes for DASP Have Changed over Time FHA made changes to DASP by adding borrower protections and made efforts to increase the participation of nonprofit organizations. FHA also changed loan eligibility criteria and bidding processes to increase recoveries to the MMI Fund. Other changes included automating and streamlining processes. 
Some Changes Responded to Concerns about Borrower Protections and Nonprofit Participation FHA has added borrower protections and neighborhood stabilization requirements to DASP in response to concerns raised by various stakeholders. For example, borrower protections included extending the moratorium on foreclosures from 6 months to 12 months and requiring the purchaser to offer a HAMP or substantially similar modification to qualified borrowers beginning with its July 2015 loan sale. In September 2016, FHA also added payment shock protection, which limited increases in a borrower's interest rate to 1 percent per year following a 5-year reduced rate period. In an effort to stabilize neighborhoods, FHA added a requirement in 2016 prohibiting purchasers from walking away from vacant properties. In a hearing before the House Committee on Financial Services in July 2016, the HUD Secretary stated that the changes that FHA made to the program in 2015 and 2016 were designed with input from a broad range of stakeholders and were assessed for how well the changes would fulfill the agency's goal of strengthening neighborhoods. In 2015, FHA made several outreach efforts to expand the participation of nonprofit organizations in DASP. These efforts included offering nonprofit organizations a "first look" at vacant REO properties, allowing purchasers to resell to nonprofit organizations, and conducting a webinar to educate and encourage the participation of nonprofit organizations. These efforts came about following a September 2014 report from the Center for American Progress and suggestions from other stakeholders that FHA make it easier for nonprofit organizations to participate in DASP. In 2016, FHA set a target that 10 percent of bids come from nonprofit organizations and local governments, including offering loans in targeted distressed areas. In 2015 and 2016, FHA offered nine pool sales directed at nonprofit organizations only. Some members of Congress expressed concern over FHA's efforts to encourage participation of nonprofit organizations, stating that FHA would likely get lower bids than it would normally receive from private companies. Changes to Loan Eligibility Criteria and Bidding Were Intended to Increase Recoveries According to FHA officials, FHA changed its loan eligibility criteria for inclusion in DASP sales in order to decrease losses to the MMI Fund and to give servicers more time to work with borrowers on loss mitigation. FHA lists the eligibility criteria to qualify loans for FHA's loan sale program in each servicer agreement. Our analysis of the servicer agreements from 2010 through 2016 showed that some criteria remained the same during the period, such as the requirement that servicers must have evaluated borrowers for all loss mitigation actions in accordance with FHA regulations or that loans in certain types of bankruptcy were ineligible. Other criteria changed during that period, including the following examples:
- Delinquency requirements for eligible loans changed from six full payments past due to eight full payments past due beginning with the first DASP sale in 2016; and
- FHA changed its eligible LTV ratio. Between the 2010 sale and the second DASP sale in 2012, FHA set a minimum LTV ratio for loan sales at 85 percent or higher—meaning that to qualify for sale, the ratio of the amount owed on the loan to the estimated value of the property was required to be 85 percent or higher.
Beginning with the first DASP sale in 2015, FHA set minimum eligible LTV ratios by state—70 percent in New York and New Jersey and 85 or 100 percent for other states, with about half the states in each percentage category. FHA officials said that they analyzed loan-level bid amounts and found that they had greater recoveries relative to REO disposition on loans with shorter delinquencies and higher LTV ratios. According to the officials, this was because these loans had a higher probability of modification by purchasers. Further, they said that the changes in eligibility criteria related to delinquency and LTV ratio were intended to decrease losses to the MMI Fund. In addition, FHA lowered limits on loan-level bid pricing to minimize the potential negative effects of ineligible loans being removed from sales after bidding. Purchasers could use loan-level bid pricing to strategically take advantage of the expected removal of ineligible loans after bidding. Because a purchaser pays only for the loans that are actually transferred and some loans are removed from sales due to ineligibility, such as due to changes in loss mitigation or foreclosure status, FHA receives less in actual returns on the sale than the winning—highest—bid. For selected loan pools in the second sale in 2013 and the first sale in 2014, FHA analyzed the bid amounts of loans that became ineligible after purchasers had bid. Before the 2015 sale, FHA lowered its maximum purchasers’ loan-level bid amount from 200 to 175 percent of the unpaid balance of a loan. Other Changes Included Automating and Streamlining Processes FHA contractors deployed tools in 2015 and 2016 to automate previously manually intensive processes of collecting data and emails from about 30 different purchasers and tracking the status of sold loans. FHA, contractors, and purchasers we interviewed said that these processes improved data quality, efficiency, and communication among parties. A postsale reporting tool and data repository enables the contractor to send mass emails and target email reminders of upcoming due dates, including report deadlines, to purchasers that have not submitted required documents. In addition, the tool validates data by checking for logic and data type. A loan sale system conducts automated checks of data in the submitted loan database for completeness and accurate file layout. The system also checks whether all required documents are included on the shared data website that purchasers use to perform due diligence and determine bid amounts. The system automatically generates a report of errors that is sent to servicers. A web-based breach tracking tool that streamlines and centralizes tracking of loans that breach—that is, were transferred to purchasers but did not meet eligibility standards. The tool allows the purchasers to submit breach requests, notifies servicers automatically about pending breaches, and allows auction stakeholders to review breaches and update the status of the loan. FHA Lacks Specific Time Frames for Its Loan Eligibility Checks, Criteria for Holding Sales, and Documentation of Key Procedures and Performance Measures Multiple Entities Check Loan Eligibility, but the Timing of FHA’s Checks May Allow Ineligible Loans to Be Sold Servicers identify eligible loans for inclusion in a DASP sale, certify eligibility, and update loan information and remove ineligible loans prior to bid day. 
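Two pieces of arithmetic above lend themselves to a short illustration: the LTV eligibility test and the effect of the loan-level bid cap. The sketch below uses invented figures and is not drawn from FHA's systems or actual sale data:

```python
def ltv_ratio(unpaid_balance: float, estimated_property_value: float) -> float:
    # Loan-to-value: amount owed divided by estimated property value.
    return unpaid_balance / estimated_property_value

# $170,000 owed on a property valued at $200,000 gives an LTV of 0.85,
# just meeting an 85 percent eligibility floor.
assert ltv_ratio(170_000, 200_000) >= 0.85

# Why FHA capped loan-level bids (hypothetical numbers): a purchaser
# pays only for loans actually transferred, so an inflated bid on a
# loan expected to drop out raises the pool total used to pick the
# winner without raising the purchaser's actual cost.
bids = {"loan_a": 80_000, "loan_b": 75_000, "loan_c": 140_000}
expected_removed = {"loan_c"}  # e.g., loss mitigation resumes before transfer

winning_total = sum(bids.values())  # 295,000 decides the winner
amount_paid = sum(amount for loan, amount in bids.items()
                  if loan not in expected_removed)  # 155,000 actually paid
print(winning_total, amount_paid)
```

Capping each loan-level bid at a multiple of the unpaid balance limits how far a single expected-to-drop loan can inflate the winning total.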
FHA staff and contractors described the various checks they conduct to verify a loan's continued eligibility, generally by reviewing the loan's default status in FHA's default monitoring system and in some cases other servicer data before a sale. Specifically, both FHA staff and the compliance analytics contractor conduct eligibility tests by checking each submitted loan's default status. The transaction specialist contractor told us it conducts automated checks of the loan submission and related data that servicers submit to check for data completeness and valid formatting. This contractor also checks that the loans match eligibility criteria and that all required documents were submitted. FHA officials told us that, starting in 2015, FHA and its three primary contractors began to verify that all loans submitted for sale had an eligible default status as part of their quality-control process. FHA officials said that any updates or changes servicers make to the status of submitted loans require the program financial advisor contractor to repeat its quality-control procedures. In addition, servicers are expected to ensure that loans meet eligibility criteria until the loan is sold and servicing responsibilities are transferred to the purchaser. The servicer agreement states that an eligible mortgage loan meets all eligibility criteria as of the date it is submitted for sale and continues to meet all such requirements as of the claim date. FHA officials said that servicers check eligibility at the loan submission date, approximately 3 weeks prior to the bid day when they update loan information, and at the claim date. Servicers should remove ineligible loans from the sale. In 2014, FHA required servicers to self-certify the accuracy of the default status of loans. FHA officials told us that FHA also has absolute discretion to exclude one or more loans from the sale. According to FHA officials, FHA has two different provisions in place to correct when a loan should not have been sold. One provision, as described earlier, allows the purchaser to initiate the breach process and the servicer either corrects the reason for the breach or FHA repurchases the loan. Another provision is the "claw-back" provision. Under this provision, FHA or the former servicer can require the purchaser to return the loan to FHA in exchange for the amount the purchaser paid for the loan. However, we found examples of potentially ineligible loans that were submitted for sale and were sold in DASP auctions. Of the 12,210 loans sold in 2016, a small percentage of loans (about 2.65 percent) did not meet eligibility criteria based on their default status on the date loans were submitted. The error rate was similar at the bid date: about 2.67 percent of these loans did not meet eligibility criteria based on their default status on the bid date. These loans were ineligible for varied reasons, including because they did not meet FHA's length of delinquency requirement, were involved in certain types of bankruptcy, or were undergoing loss mitigation and therefore should have remained under FHA insurance protection. Ineligible loans may have been sold because the status of loans changed after the servicer and FHA completed their eligibility checks. FHA's staff and contractors conduct multiple eligibility checks concurrently during the presale and due diligence and bid phases—about 12 to 14 weeks before bid day according to FHA officials.
These early checks conducted by FHA's staff and contractors do not necessarily occur in a specific order or according to specific timelines. FHA officials told us that FHA relies on the servicers to perform eligibility checks a few weeks before bid day and again after the sale when the servicer submits the claim. However, the status of delinquent loans can be very fluid. According to our analysis of FHA data, 23 percent of loans from 2010 to 2016 were removed between the bid date and the claim date. FHA officials told us that servicers remove loans after FHA's reviews to maintain compliance with representations and warranties under the servicer agreement. FHA officials also explained that loan removal was due to changes in loans' eligibility status, such as entering into loss mitigation or the scheduling of a foreclosure sale. We reviewed a nongeneralizable sample of 10 loans that appeared to be ineligible and interviewed FHA officials about these loans. We found that some changes in the eligibility of loans could be missed due to the length of time between eligibility checks and data updates. The status of loans can change multiple times during a sale process. FHA requires servicers to self-report the status of defaulted loans on a monthly basis to the default monitoring system, usually within the first 5 days of the month, but servicers may report changes throughout the month if a loan's status changes. However, FHA officials told us that the system updates once a month. FHA's eligibility checks may have occurred before the updates were posted to the default monitoring system. FHA officials told us that FHA relies on the controls in place and contractual agreements with the servicers that require them to ensure that loans are eligible when submitted to FHA for sale and when they file a claim with FHA. As a result, FHA may not be aware of a change in loan eligibility that was reported in the default monitoring system after its eligibility checks were completed. Federal internal control standards require that management design control activities to achieve objectives and respond to risks. Control activities can be either preventive or detective. A preventive control activity prevents an entity from failing to achieve an objective or address a risk. Although FHA has implemented a number of controls to prevent ineligible loans from being sold, these controls may miss loans that change status after the eligibility check because FHA staff and contractors do not have a designated time in the process to conduct the eligibility check. Without spacing the various checks throughout the process, including scheduling some checks closer to the bid date, FHA staff and contractors do not have the most reliable and updated data from which to make decisions regarding loan eligibility, and FHA could be selling some ineligible loans. If FHA sells a loan that is ineligible to be sold because of ongoing loss mitigation, it pays a claim on a loan that might otherwise have become reperforming and never required a claim. Likewise, borrowers could lose access to benefits such as reevaluation for the suite of FHA loss mitigation options. FHA Has Not Documented All of Its Policies FHA has begun to centralize its existing written guidance, but policies for when program changes should be evaluated are not documented in this guidance.
A July 2017 report from the HUD OIG found that HUD did not develop formal guidance or procedures for its single-family note sales program and recommended that the agency develop and implement formal procedures and guidance for DASP. FHA responded to the OIG that the operations of the DASP sales were documented in a series of procedures used internally by staff and externally by stakeholders. In May 2018, FHA officials told us that in response to the OIG’s recommendation, they were consolidating their current written procedures and guidance into one Asset Sales Handbook to centralize the information for internal and external stakeholders. (See app. III for a description of these documents.) FHA officials told us the key documents governing a DASP sale include the servicer agreement, purchaser agreement, detailed instructions for bid day, and specific requirements for qualified servicers. However, we found that if FHA were to compile these existing documents into an Asset Sales Handbook, it would still be missing some important program policies. As of February 2019, FHA officials confirmed that they had no written policies documenting when program changes should be evaluated. When FHA described its process for evaluating program changes, officials stated that the informal practice was to consider changes when planning for a new sale. However, as stated earlier, FHA made a number of changes in 2015 and 2016 but has not held a DASP sale since 2016. FHA officials said the date of the next DASP sale is unknown. FHA also experienced another period when no sales were conducted between 2005 and 2009. Federal internal control standards require that management implement control activities through policies. This includes documenting in policies the internal control responsibilities of the organization and periodically reviewing policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity’s objectives. For example, the standards state that if there is a significant change in an entity’s process, management reviews the process in a timely manner after the change to determine that the control activities are designed and implemented appropriately. However, FHA officials told us that they had not evaluated whether the most recent program changes were effective or should be revised because they were not planning a new sale yet. With several years between sales, written policies for regular consideration and review of program changes can help to ensure that FHA is reviewing the effectiveness of previous changes and controls and considering potential new changes in a timely manner. FHA Has Not Provided Clear Objectives or Measurable Performance Targets for DASP FHA has a DASP program objective of maximizing recoveries to the MMI Fund and has some specific targets to assess whether it is meeting this objective. On a quarterly basis, FHA measures how recovery for asset sales compares to foreclosure with REO dispositions and other disposition types, such as short sales and claims without conveyance of title. FHA officials explained that they maximize recovery by holding open and competitive auctions for nonperforming single-family loans, with the highest bidder as the winner. In addition, the Office of Risk develops a reserve price—an estimate of the expected REO recovery value of each loan in a sale and a benchmark for comparison with the bids received—to minimize the risk that FHA will not get the best recovery for the loan. 
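Reserve pricing as described above can be sketched in a few lines. This is a simplified illustration, not FHA's pricing model: the loan balances are hypothetical, and the 0.39 recovery rate echoes FHA's fiscal year 2013-2017 REO recovery estimate cited later in this report:

```python
def loan_reserve(unpaid_balance: float, expected_recovery_rate: float) -> float:
    # Reserve price for one loan: the share of the unpaid balance FHA
    # expects to recover if the loan instead went through foreclosure
    # and REO disposition.
    return unpaid_balance * expected_recovery_rate

# The pool reserve is the sum of the loan-level reserves.
pool_reserve = sum(loan_reserve(upb, 0.39)
                   for upb in (100_000, 150_000, 120_000))  # 144,300

def sell_pool(best_bid: float, reserve: float) -> bool:
    # FHA may decline to sell a pool whose best bid falls short.
    return best_bid >= reserve
```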
In the past, when FHA received a bid below the reserve price, it opted not to sell the pool. As a result, the reserve price serves as a critical target in the agency's determination of whether to sell. In contrast, FHA has not developed specific targets for meeting what appear to be additional DASP objectives, based on a variety of program documents and recent program changes. In 2016, for example, the HUD Secretary testified before Congress that DASP has a dual goal—"support recoveries to the Fund while preserving homeownership and help stabilize neighborhoods." Similarly, in HUD's 2016 Post-Sale Report to the FHA Commissioner, HUD explained that it designed DASP "to maximize recoveries to the [MMI Fund], and when possible, help keep borrowers—otherwise headed to foreclosure—in the home." HUD's recent changes to DASP likewise appear to recognize program objectives in addition to maximizing recoveries to the MMI Fund. When HUD extended the prohibition against foreclosure from 6 months to 12 months in 2015, for instance, a HUD press release stated that such changes "not only strengthen the program but help to ensure it continues to serve its intended purposes of supporting the MMI Fund and offering borrowers a second chance at avoiding foreclosure." And when HUD changed DASP in 2016 to prohibit purchasers from abandoning low-value properties in high-foreclosure neighborhoods, it declared that this was done to help stabilize neighborhoods. Despite these repeated department statements that DASP has a "two-fold" goal and multiple "intended purposes," FHA officials told us that preserving homeownership and stabilizing neighborhoods are "ancillary benefits"—positive consequences that flow from DASP's objective of maximizing recoveries for the MMI Fund—but not objectives themselves. Because FHA does not consider homeownership preservation and neighborhood stabilization to be program objectives, the agency has not developed targets to meet them. FHA officials explained that they measure and monitor the extent to which purchasers meet requirements for NSO pools, for instance, by collecting loan outcome data from purchasers for 4 years. These purchasers must have no less than 50 percent of the loans in each NSO pool achieve outcomes such as keeping borrowers in their homes and properties occupied through rentals. However, FHA does not have a similar target for national pools, which represent about 80 percent of the sold loans. FHA requires purchasers of national pools to report on borrower outcomes quarterly for 4 years, but does not measure the extent to which these outcomes meet a specific target and are achieving program objectives. Prior GAO work identified key attributes of successful performance measures and indicated that performance measures should be clear, have measurable numerical targets, and demonstrate results. In addition, according to federal internal control standards, management should define objectives clearly to enable the identification of risks and define risk tolerances. This includes, for example, defining objectives in specific and measurable terms to allow for the assessment of performance toward achieving objectives. Although FHA officials told us that DASP has one objective with resulting "ancillary benefits," the agency also cited these same benefits as additional program goals and purposes in the recent past.
Without clarifying the program's objectives in light of relevant laws, regulations, and agency statements and setting measurable targets to achieve these objectives, particularly for national pools, FHA cannot ensure that DASP is achieving optimal results. The Timing of DASP Sales Is Not Informed by Performance Data FHA has not used performance data to establish criteria for the timing of DASP sales. FHA officials said they have not set criteria for when to hold sales, such as the size of the portfolio of defaulted loans or other considerations. In contrast, Fannie Mae estimates the number of defaulted loans needed to be sold to achieve its goals and assesses market conditions to produce a detailed schedule of sales for the year. Our analysis of FHA's default monitoring system data shows that several years after the housing crisis, FHA continues to insure a backlog of defaulted loans with six or more missed payments (see fig. 6). FHA officials stated that, in July 2018, FHA had about 300,000 defaulted loans, which is similar to the number in years when the DASP program was active. Most servicers we talked to told us that they preferred selling defaulted loans through DASP rather than taking them through the REO disposition process due to the servicing responsibility and costs associated with foreclosure. However, FHA officials told us that they did not know when the next sale would be. The GPRA Modernization Act of 2010 established an expectation that agencies use evidence and performance data in decision making. Specifically, the act changed agency performance management roles, planning and review processes, and reporting to ensure that agencies use evidence and performance data in decision making. Our prior work has stated that although the act's requirements apply at the agency-wide level, they can also serve as leading practices at other organizational levels, such as component agencies, offices, programs, and projects. Because specific criteria for when to hold sales are not in place, FHA's timing of and decisions to hold DASP sales were inconsistent. FHA held 16 DASP sales between 2010 and 2016. These sales occurred at varying frequencies. For example, FHA held between one and four sales per year, and the number of months between sales ranged from 2 to 10 months. Officials stated that DASP should be used to address a large buildup of defaulted loans because of its lower loss severity compared with REO dispositions. Officials also told us they have not developed criteria because FHA operates DASP as a pilot program and continues to make changes after each sale. However, without analyzing the performance data of the portfolio of defaulted loans to identify criteria for the timing of DASP sales—even as a pilot program—FHA cannot make fully informed decisions about when to hold sales and may not be optimizing its use of the program in achieving its objectives. FHA Does Not Evaluate Loan Outcomes, and Sold Loans Experienced Foreclosure at a Higher Rate Than Unsold Loans in Some Cases FHA does not evaluate loan outcomes for loans sold through DASP and does not monitor the modifications offered by individual purchasers. Our analysis of FHA outcome data found that in aggregate, sold loans were less likely to avoid foreclosure than similar, unsold loans. However, our analysis also found that for some sales and some purchasers, sold loans were more likely to avoid foreclosure compared to unsold loans.
A number of factors may contribute to differences in outcomes between sold and unsold loans by sale and purchaser, including increased postsale servicing and reporting requirements and the types of modifications offered by individual purchasers. FHA Does Not Compare Outcomes for Sold Defaulted Loans to Similar, Unsold Loans FHA does not use the data it collects to evaluate outcomes for loans sold through DASP compared to outcomes for similar, unsold loans. We reviewed a contractor report and FHA's periodic reports on DASP outcomes and found that they lacked critical outcome information. Specifically, in 2017, a contractor analyzed home equity preserved as a result of the foreclosures avoided through DASP, and then estimated the effect of avoided foreclosures on surrounding areas. However, the contractor did not estimate the effect of foreclosure avoidance relative to unsold loans. Borrowers with unsold loans may also avoid foreclosure, for example, if their circumstances change and they become eligible for foreclosure mitigation options again. FHA's periodic reports on outcomes also do not compare outcomes between sold and unsold loans. FHA officials told us they had not conducted such a comparison because they expect all loans eligible for sale to be foreclosed. A foreclosed mortgage with an REO property disposition results in the greatest losses to the MMI Fund. However, our analysis of FHA data does not support these claims. When we compared loans sold through DASP to unsold loans with similar characteristics, we found that some unsold loans achieved an outcome other than foreclosure—21 to about 34 percent at various times within a 4-year period. FHA officials also told us that they evaluate loan outcomes by tracking the extent to which purchasers are meeting NSO requirements. However, because about 80 percent of loans were not sold through NSO pools, FHA's evaluation covers only about 20 percent of DASP loans. In addition, FHA's NSO requirements are targeted toward achieving specific outcomes for a property or community—such as donating the property to a land bank—rather than an individual loan or borrower. Our analysis indicates that sold loans had higher foreclosure rates than unsold loans regardless of whether they were sold through national or NSO pools. The matched comparison attempted to minimize differences between sold and unsold loans across factors such as the estimated current loan-to-value ratio in order to isolate the effect on outcomes of being sold out of FHA's insurance program. We have previously found that evaluations often involve creating a comparison group. Furthermore, HUD policy states that its evaluations use methods that isolate to the greatest extent possible the effects of the program from other influences. FHA could use loans not sold through DASP to estimate what outcomes would have been observed in the absence of the program and the associated losses to the MMI Fund. A process to evaluate outcomes for sold loans relative to similar, unsold loans could help FHA determine whether DASP is meeting its financial objective of maximizing recoveries to the MMI Fund and understand the extent to which DASP is helping struggling homeowners. FHA Does Not Monitor the Modifications Offered by Individual Purchasers or Collect All Data Needed to Evaluate Their Sustainability In its reports on DASP outcomes, FHA periodically reports at an aggregate level the change in monthly payments resulting from the modifications offered by purchasers.
However, FHA does not track or report the change in payments by individual purchasers. A 2016 white paper prepared by the Department of the Treasury in conjunction with HUD and FHFA defined loss mitigation sustainability as offering solutions that work the first time. It further stated that modifications that provide meaningful payment reduction will decrease the chance of a homeowner redefaulting. Additionally, we reported in 2012 that the change in a borrower’s monthly mortgage payment is among the factors that can significantly influence the success of a modification. Since 2015, FHA has required purchasers to offer eligible borrowers HAMP-like modifications or substantially similar modifications designed to lower borrowers’ monthly payments to an affordable and sustainable amount. However, FHA does not monitor the extent to which individual purchasers complied with the requirement to offer payment-lowering modifications to eligible borrowers. We found that while the majority of the modifications offered to borrowers whose loans were sold in 2015 or later decreased monthly payments by more than 20 percent, about 8 percent of modifications increased or did not result in a change in payment. Not all borrowers are eligible for a payment-lowering modification, and, according to FHA officials, some modifications could increase monthly payments for borrowers with a large number of missed payments. As discussed later, our analysis found that outcomes can vary greatly by purchaser, and purchasers may not offer comparable modification options. See appendix IV for information on the types of modifications purchasers have used. Furthermore, FHA may not have the data it needs to evaluate whether payment-lowering modifications offered by purchasers remain sustainable. In the second 2016 sale, FHA began requiring that modified interest rates be fixed for at least 5 years and thereafter that they not increase by more than 1 percent per year. FHA also began requiring purchasers to report data related to interest rates for modified loans, including the modified interest rate and the number of years it would remain fixed. However, based on our review of reported modification information, none of the purchasers from this sale reported these data. Additionally, about 22 percent of the modifications offered to borrowers whose loans were sold in the 2015 sale or later included a deferment. Under deferment, borrowers are allowed to temporarily stop making payments toward some or all of their principal balance, interest, or other indebtedness, and deferment may result in a balloon payment at a later date. Other than type of deferment, FHA does not require purchasers to report details of the deferment or the effect on payments following the deferral period. As a result, we could not determine the long-term effect on monthly payments for many modifications offered by purchasers. Some advocacy group representatives we spoke with expressed concerns about purchasers offering unsustainable modifications. For example, one advocacy group representative told us that some purchasers may offer modifications that initially lower monthly payments but later adjust to levels that are higher than what they were prior to modification. FHA requires purchasers to report some information that would allow it to determine the types of modifications offered by individual purchasers as well as the sustainability of these modifications. 
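The payment and rate-path arithmetic discussed above can be illustrated with a short sketch. This is hypothetical: the loan terms are invented, the amortization formula is the standard fixed-payment one, and the rate path shows the fastest increase the 2016 payment shock protection would allow (1 percentage point per year after a 5-year fixed period):

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    # Standard fixed-payment amortization formula.
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months) if r else principal / months

def capped_rate_path(start_rate: float, years: int) -> list[float]:
    # Fixed for 5 years, then rising by the maximum allowed step of
    # 1 percentage point per year, per the protection described earlier.
    path, rate = [], start_rate
    for year in range(years):
        path.append(round(rate, 4))
        if year >= 4:      # first reset after the 5-year fixed period
            rate += 0.01   # the at-most-1-point annual step
    return path

before = monthly_payment(150_000, 0.06, 360)  # about $899
after = monthly_payment(150_000, 0.03, 360)   # about $632
print(f"payment change: {(after - before) / before:.1%}")  # about -30%
print(capped_rate_path(0.03, 8))  # [0.03 x 5, 0.04, 0.05, 0.06]
```

Evaluating sustainability would require the modified rate, the length of the fixed period, and the post-deferment payment terms, which is the reporting gap described above.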
As mentioned previously, FHA officials said they expect all loans eligible for sale to be foreclosed and consider any nonforeclosure outcome achieved by purchasers to be an improvement. This expectation may deter FHA from evaluating the modifications offered by individual purchasers or the sustainability of modifications. Federal internal control standards state that management should use quality information to achieve its objectives, which includes identifying information requirements needed to achieve the objectives, evaluating the data it receives from internal and external sources to ensure they are sufficiently reliable for use in making informed decisions, and using the data for effective monitoring. Without monitoring individual purchasers' modifications or collecting key data elements, FHA cannot determine whether purchasers are meeting the postsale requirements or the extent to which eligible homeowners obtain sustainable modifications. Sold Loans Were More Likely to Experience Foreclosure Than Unsold Loans in the Aggregate, but Not for Later Sales and Some Purchasers Our analysis showed that sold loans were more likely to experience foreclosure than similar, unsold loans overall within a 48-month period after servicing transfer (see fig. 7). In the aggregate, the probability of experiencing foreclosure was greater overall for sold loans compared to unsold loans. For example, the probability of foreclosure 24 months after the servicing transfer date was 43 percent for sold loans and about 36 percent for unsold loans, a statistically significant difference. Additionally, we analyzed the probability that a borrower reperformed, received a temporary action such as forbearance or a trial modification, or received a short sale or deed-in-lieu of foreclosure—foreclosure avoidance outcomes. In the aggregate, the probability that sold loans avoided foreclosure ranged from about 15 to 24 percent at various times within a 3-year period beginning 12 months after the servicing transfer date. Foreclosure avoidance rates for unsold loans were higher, ranging from 21 to about 34 percent during this period. We found that sold loans were less likely than unsold loans to result in borrowers staying in their homes because sold loans more often ended in out-of-home actions (see fig. 8). The probability of reperforming was greater overall for unsold loans compared to sold loans. Additionally, unsold loans were more likely to receive an in-home temporary action. In contrast, sold loans were more likely to result in a short sale or a deed-in-lieu of foreclosure, through which borrowers avoid foreclosure but lose the title to their homes. See appendix VI for a comparison of reperforming, short sale or deed-in-lieu of foreclosure, and temporary action outcomes between sold loans and unsold loans. Although we found that sold loans were more likely to experience foreclosure in aggregate, for later sales, after about 12 months, rates of avoiding foreclosure were similar or greater for sold loans compared to unsold loans, and for some purchasers rates of foreclosure were similar or lower for sold loans compared to unsold loans. For the second 2013 sale through the 2015 sale, we found that sold loans were less likely to avoid foreclosure compared to unsold loans (see fig. 9). In the 2016 sales, however, after about 12 months the sold loans were more likely to avoid foreclosure compared to similar unsold loans.
Further, after an additional 12 months—24 months after the servicing transfer date—loans sold in the first sale in 2016 avoided foreclosure at a rate that was 5 percentage points greater than unsold loans. Loans sold in the second sale in 2016 were also consistently less likely to foreclose compared to unsold loans. We discuss potential explanations for these differences among sales in the section that follows. We also found differences in the rates of foreclosure and some outcomes that avoid foreclosure achieved by different purchasers (see fig. 10). For example, the probability of a loan reperforming 24 months after the servicing transfer date ranged from about 0.2 to about 25 percent for selected DASP purchasers. While most of these purchasers fell below the reperforming estimate of 18 percent for similar, unsold loans, one purchaser exceeded this rate. Foreclosure and short sale or deed-in-lieu of foreclosure probabilities 24 months after the servicing transfer date also differed among these purchasers, ranging from 31 to about 90 percent and from 8 to about 30 percent, respectively. These rates generally exceeded the foreclosure and short sale or deed-in-lieu of foreclosure estimates for similar, unsold loans (34 and about 9 percent, respectively). Purchasers told us that the outcome they pursue for a loan depends in part on the borrower's preference. According to purchasers, for borrowers who want to keep their homes, the best option is to try to modify the loan and achieve reperformance status. Purchasers also said that for borrowers who do not want a modification or for whom a modification is not possible, they may pursue a short sale or deed-in-lieu of foreclosure, which have a less negative effect on borrowers' credit than a foreclosure. Representatives of a consumer advocacy group and a research organization told us that foreclosure has the most negative effect on the borrower's credit. A Fair Isaac Corporation (FICO) study found that, in some cases, foreclosure had a more negative effect on comparable borrowers' credit profiles than a short sale or deed-in-lieu of foreclosure. FHA officials, purchasers, and servicers said that purchasers have more flexibility and are in a better position than FHA servicers to provide more generous mitigation options. A senior FHA official emphasized that purchasers have more financial flexibility because they generally buy the defaulted loans at a discount from FHA (that is, less than the unpaid principal balance). According to different DASP stakeholders, purchasers can forgive a portion of the principal, offer a deferment that is greater than 30 percent of unpaid principal, extend the term of a loan beyond 30 years, reduce the interest rate below the current market rate, offer more than one modification in a 2-year period, and offer more generous terms for deeds-in-lieu of foreclosure and short sales. In contrast, FHA is restricted in the loss mitigation options it can offer. FHA officials told us that FHA does not offer debt forgiveness, but may defer a limited amount of principal through a partial claim. FHA officials also said they generally set loan term ranges to meet requirements for securitization in the secondary mortgage market, including a fixed interest rate and a 30-year term. In addition, FHA's loss mitigation alternatives to foreclosure, such as short sales and deeds-in-lieu of foreclosure, are restricted or approved by FHA based on their chance of success and the associated financial effect on the MMI Fund.
However, representatives of some advocacy groups told us that borrowers generally benefit from their loans remaining insured and unsold because FHA’s loss mitigation process is more transparent. They said that information on the loss mitigation process under FHA is publicly available, while it can be difficult to access information about some purchasers’ loss mitigation processes. Also, starting in 2012, FHA policies attempted to provide a more consistent loss mitigation process for borrowers across all FHA servicers. In contrast, purchasers can have varying processes for offering loss mitigation options. Various Factors May Contribute to Differences in Outcomes by DASP Sale and Purchaser A number of factors may contribute to differences in outcomes between sold and unsold loans by DASP sale and purchaser, such as increased postsale servicing and reporting requirements, variations in purchaser participation across sales, and differences in the modifications offered by purchasers. FHA Has Expanded Postsale Requirements and Use of NSO Pools Changes in postsale servicing requirements may account for higher reperforming rates for sold loans in the 2016 sales. As discussed previously, FHA introduced additional servicing requirements in 2015 aimed at offering additional protections to borrowers whose loans were sold through DASP. For example, FHA began requiring purchasers to evaluate borrowers for HAMP or substantially similar modifications aimed at lowering borrowers’ monthly payments and offer these modifications to eligible borrowers. Further, the share of loans sold through NSO pools relative to national pools has increased, which may also account for higher reperforming rates for sold loans in the 2016 sales. As noted previously, NSO and nonprofit pools have additional postsale outcome requirements. We compared outcomes for loans sold in NSO pools to outcomes for loans sold in national pools and found that loans sold in NSO pools were more likely to reperform, possibly due to higher occupancy rates in NSO pools compared with national pools. As shown in figure 11, the share of loans sold through NSO and nonprofit pools relative to loans sold through national pools increased between 2013 and 2016, from about 12 percent of the total loans in our scope for the 2013 sales to about 45 percent of loans in the 2016 sales. In addition, FHA introduced a reporting requirement in 2015 that purchasers continue reporting the outcome status of loans even after selling them to new buyers, as opposed to reporting the loans as resold with no further outcome updates. Purchasers may have returned these loans to performing status before selling them because performing loans are more profitable, but the performing status would not have been reported before 2015. The use of resales as a status was substantially lower in the second sale in 2016 compared to the first sale in 2013—0.04 percent of reported statuses compared to 29 percent of reported statuses. This change could be reflected in the higher reperforming outcomes we observed for sold loans in 2016. Purchasers Varied across Individual Sales and May Not Have Offered Comparable Modifications Our analysis indicated that individual purchasers did not consistently buy loans across sales and the share of loans bought by individual purchasers varied. For example, about 42 percent of the purchasers in our scope bought loans in one sale, while about 27 percent of purchasers bought loans in three or more sales. 
The share of loans bought by individual purchasers has also varied by sale (see fig. 12). For example, one purchaser bought about 4 percent of the loans sold in the second sale in 2013 but about 82 percent of the loans sold in the first sale in 2016. This purchaser had higher reperforming and lower foreclosure outcomes compared to other purchasers. In addition, purchasers may not consistently offer modification options. Approximately 18 percent of the sold loans in our scope received one or more modifications. However, individual purchasers offered modifications at varying rates, from no modifications to 46 percent of the loans they purchased. Our analysis also indicates that the type of modifications offered may differ by purchaser. For example, we found that about 88 percent of the modifications that had decreased monthly payments by 30 percent or more were offered by two of the 25 purchasers that reported modifying loans. In addition, the share of modifications offered by individual purchasers that resulted in no payment change or an increase in payment varied. For example, eight purchasers reported either no change or an increase in payment in 51 to 75 percent of the modifications they offered. In contrast, three other purchasers reported either no change in payment or an increase in payment in less than 10 percent of their modifications. Purchasers' investment goals and expertise could affect borrower outcomes. DASP purchasers include investment firms, rental housing companies, and nonprofit organizations with varying investment goals. In interviews, purchasers cited various goals for purchased loans. For example, an executive of a nonprofit organization said its primary goal was to help borrowers avoid foreclosure, while representatives of an investment firm told us that their goal was to maximize the return for each purchased loan. A representative of one advocacy group told us that purchasers' different areas of expertise could make different foreclosure and foreclosure avoidance options more or less profitable for them. For example, purchasers with an extensive background in loan servicing may be able to offer modifications at a lower cost, while rental companies may consider DASP a source of inventory for rental properties if loss mitigation fails. Additionally, purchasers can have varying levels of success in contacting borrowers to discuss modifications or disposition options for the loans they purchased. Most purchasers noted that it was often difficult to make contact with borrowers because houses were vacant or borrowers avoided contact. For example, one purchaser said it was unable to reach about 25 percent of borrowers for the loans it purchased. Another purchaser said it was unable to reach about half of the borrowers. Furthermore, while several purchasers said they primarily contacted borrowers via the notice of servicing transfer and by phone, one purchaser also said that a more successful outreach method involved in-person visits to borrowers' homes, but that such visits may not always be feasible due to resource constraints. FHA's Current Practices May Not Optimize Savings to the MMI Fund, and the Effect of Some Changes Is Unclear FHA May Be Recovering Less for the MMI Fund Than It Could Due to Its Scheduling and Reserve Pricing Practices Scheduling FHA announces bid dates in the Federal Register and industry publications but does not communicate long-range notice of upcoming sales.
FHA held multiple sales in 2011, 2012, 2013, 2014, and 2016, but the sales were not held at set intervals or at set dates throughout the years. FHA has not held any DASP sales since September 2016, and officials stated that they do not know when FHA will hold another sale. Our interviews indicate that communicating long-range notice of sales could help keep participation robust and increase bid amounts. One purchaser told us that it was eager for FHA to restart DASP sales. However, purchasers would like to receive additional notice of sales. One purchaser told us that additional notice of FHA sales would allow it the time to plan or raise additional capital needed to participate in a DASP sale. Another purchaser said that, without knowledge of when another sale will occur, it will invest elsewhere. Losing bidders to other entities’ sales could affect bid amounts in DASP sales. According to economic literature, increasing the number of bidders in an auction generally should increase bid amounts—a financial objective for the program. Federal internal control standards state that management should externally communicate the necessary quality information so that external parties can help the entity achieve its objectives and address related risks. For example, although Fannie Mae does not publish an annual schedule, market participants know when to expect Fannie Mae sales because it has held them multiple times a year. In contrast, FHA does not hold regular sales or signal to the market when it will hold its next sale through its outreach because DASP remains a pilot program. FHA officials said they change program parameters with each sale, so it is difficult to schedule sales in advance. We previously noted that, even implementing DASP as a pilot program, FHA could use performance data to establish criteria for the timing of sales and to help optimize the use of the program to achieve its objectives. Similarly, by communicating long- range notice of upcoming sales to market participants, FHA could encourage bidder participation and potentially help meet its objective of maximizing recoveries to the MMI Fund. As discussed in appendix VII, characteristics of successful auctions include attracting sufficient interest in the auction and in designing the auction to meet its objectives. Without communicating long-range notice, FHA may be recovering less than it could for the MMI Fund. Reserve Pricing FHA sets reserve prices—a minimum amount that it is willing to accept as the winning bid—to help ensure that the MMI Fund is minimally affected by the sale. FHA generates a reserve price for each loan and adds those prices together to generate a pool reserve price. If FHA does not receive a bid on a pool that is at or above its reserve price, FHA may choose not to sell the pool. Any amount of the bid above the reserve price represents additional potential proceeds to the MMI Fund. FHA officials stated that they expect that all DASP loans would be foreclosed and the properties placed in its REO inventory had they not been sold. FHA officials stated that they establish each loan’s reserve price considering the percentage of the unpaid balance FHA expects to recover through foreclosure and REO disposition. A recent HUD OIG report found that for loans sold in 2015 and 2016, FHA experienced a 3 percent lower loss rate compared with similar loans that were foreclosed and the associated property placed into FHA’s REO inventory. 
Loss estimates have varied over time and by location of the property associated with the loan, but generally an REO disposition results in the greatest loss to the MMI Fund. For example, FHA's Office of Risk estimated that from fiscal year 2013 through the first quarter of 2017, FHA lost 61 percent (recovering about 39 percent) of the unpaid balance on REO dispositions compared to about 46 percent (recovering 54 percent) of the unpaid balance on other nonloan sale dispositions. FHA officials stated that unsold defaulted loans would likely end in foreclosure, with the properties placed in the REO inventory. However, our analysis of outcomes showed that comparable unsold loans resulted in a range of outcomes, not just foreclosure and REO disposition. Specifically, our analysis of outcomes in sales between 2013 and 2016 showed that about 66 percent of unsold loans with characteristics similar to sold loans resulted in foreclosure or remained unresolved. The remaining 34 percent of these unsold loans resulted in a range of nonforeclosure outcomes (including returning the loan to performing status), all of which could produce smaller losses to the MMI Fund compared with REO disposition. Further, our analysis found that about 14 percent of the loans returned to performing status or were terminated as paid in full, thereby generating very little to no loss to the MMI Fund. FHA may be setting its reserve prices too low in some cases. FHA sets a loan's reserve price considering the percentage of the unpaid balance it expects to recover through an REO disposition to guarantee the minimum recovery proceeds to the MMI Fund. However, when the expected losses to the MMI Fund for some loans are smaller—such as in the case of a different disposition method or a terminated loan—the reserve price would need to be higher to guarantee the minimum recovery proceeds to the MMI Fund. If FHA could recover more of the unpaid loan balance through a non-REO disposition method, setting the reserve price at the expected recovery of the unpaid balance from an REO disposition would be too low. See figure 13 for an illustrative example of how reserve prices could be affected based on different expectations of loan dispositions. The extent to which the MMI Fund could be negatively affected depends on how reserve prices compare to the actual winning bids. In figure 13, if FHA set the reserve price of pool A at $3,900,000, FHA would sell the pool to the highest bidder that bid at least $3,900,000. If the highest bid was less than $3,900,000, FHA may not sell the pool. If the highest bid for the pool was at least $3,900,000 but less than $5,054,000, the MMI Fund would be negatively affected because FHA could have recovered more by not selling the pool. If the highest bid was at least $5,054,000, the MMI Fund may not be negatively affected by the sale. Using a simplified method to calculate reserve prices that does not consider differences in local housing markets, we estimate that 31 percent of the loan pools FHA sold in its 2013–2016 sales had winning bids greater than FHA reserve prices but less than our calculated reserve prices. For about 14 percent of the pools, our calculated reserve price was 10 percent or more above the winning bid, and for 7 percent of the pools, our calculated reserve price was 25 percent or more above the winning bid. Federal internal control standards state that management should use quality information to achieve the entity's objectives.
These standards include designing a process that uses the entity's objectives and related risks to identify the information requirements needed to achieve the objectives and address the risks. However, FHA is not considering information on the range of potential outcomes for loans in setting its reserve pricing because it expects all sold loans to result in foreclosure and REO disposition. Without considering other disposition methods in its reserve pricing, FHA risks recovering less for the MMI Fund in loan sales than if the loans had not been sold and risks not meeting its objective.

FHA Does Not Analyze Key Information before Setting Eligibility Criteria
FHA's eligibility criteria specify the characteristics of the loans that can be selected for a loan sale, but FHA does not analyze its portfolio to identify loan characteristics for which DASP would be the lowest-cost disposition method or consider market information before setting the criteria. FHA has analyzed bid amounts from previous sales and made changes to eligibility criteria related to length of delinquency and LTV ratio, intended in part to increase MMI Fund recoveries. For example, using analysis of its 2014 sales, FHA determined the LTV ratios that produced the highest loan-level recoveries relative to REO dispositions and changed the loan eligibility criteria for the minimum LTV ratios by state for its 2015 sale. According to FHA, this change was intended to make more loans eligible for disposition through DASP sales in certain states that had long foreclosure processes. However, FHA does not analyze its portfolio of defaulted loans to identify characteristics of loans that, if sold, would minimize the loss to the MMI Fund relative to all other disposition methods to inform eligibility criteria for sales.

FHA may have missed an opportunity to evaluate when loan sales would be the most effective disposition method to maximize recoveries to the MMI Fund—a financial objective of the program. FHA contracted with CoreLogic in 2016 to develop a tool to determine the lowest-cost disposition for defaulted loans in FHA's portfolio but did not include loan sales as a potential disposition method. The tool is intended to generate estimates of property values and holding costs and determine the lowest-cost disposition method for a given loan at a given time. Used broadly, this information could help FHA identify loan criteria for which DASP sales would be the most effective disposition method and set loan eligibility criteria for DASP loans. However, FHA excluded DASP because, according to the contractor, the data on DASP had been too inconsistent to be reliably included in the CoreLogic tool. Therefore, FHA cannot use the tool to identify loan characteristics for which DASP could be the lowest-cost disposition method or to inform its decisions in setting loan eligibility criteria.

Further, FHA determines eligibility criteria before considering current market information. FHA's transaction specialist gathers market information before the sale, but FHA does not consider it before setting eligibility criteria and soliciting eligible loans from servicers. The transaction specialist analyzes the market and develops a sales strategy report using the loans submitted by the servicers. The report contains information on available capital for key purchasers, the number and type of loans purchasers are interested in buying, other entities' upcoming sales, and potential pooling strategies for the loans submitted.
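Returning to the disposition-cost tool discussed above: at its core, such a tool reduces to comparing expected losses across disposition methods for a given loan. The sketch below is a simplified illustration under assumed figures; the method names and loss rates are hypothetical, and note sales are included only to show how DASP could be weighed alongside the other methods that the actual tool covers.

```python
# Hypothetical expected losses (share of UPB) by disposition method for one
# loan. A tool like the one described above would estimate these from
# property values and holding costs; these figures are illustrative only.
expected_loss = {
    "reo_disposition": 0.61,
    "short_sale": 0.46,
    "note_sale_dasp": 0.50,  # excluded from the actual tool, per the contractor
}

def lowest_cost_disposition(losses):
    """Return the disposition method with the smallest expected loss."""
    return min(losses, key=losses.get)

print(lowest_cost_disposition(expected_loss))  # -> "short_sale"
```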
FHA uses the sales strategy information to develop pools intended to maximize the sale proceeds, but not to identify characteristics of loans meeting purchasers' preferences or to inform decisions in setting eligibility criteria. FHA's current approach risks setting criteria that may not maximize recovery to the MMI Fund because it may be selling loans that could result in a smaller loss to the MMI Fund than if they had remained under FHA insurance. FHA generally analyzes how to maximize sales proceeds after setting loan eligibility criteria and reviewing the servicers' submitted loans because servicers select the loans, voluntarily participate, and may not submit all eligible loans. Further, setting loan eligibility criteria that increase servicers' cost to identify loans may reduce servicer participation. In addition, FHA does not use current market information because, according to officials, they use data from past sales to determine market preferences and their primary concern is the effect on the MMI Fund. However, FHA has not held a sale since 2016, so market preferences may have changed. Additionally, purchaser participation may decline if loans do not match purchasers' preferences. Generally, fewer bidders indicate less interest in the pools and could result in decreased prices, which would reduce returns to the MMI Fund.

By implementing DASP, HUD intended to maximize recoveries to the MMI Fund. Without analyzing its loan portfolio to identify when loan sales would be the most cost-effective disposition method and considering market information before setting loan eligibility criteria, FHA cannot appropriately calibrate its loan eligibility criteria to maximize recovery to the MMI Fund.

The Effects on the MMI Fund of Changes to Auction Structure and Pooling Strategies Are Unclear

Auction Structure
Based on our analysis of comparable mortgage industry auctions, FHA's auction structure mirrors the industry standard of pooled, highest-bidder, sealed-bid auctions. Other auction structures we examined, such as single-loan sales and adding a winner-take-all option, would involve tradeoffs. For example, an analysis by DebtX, a loan sale advisor, showed that FHA would have earned higher proceeds in a prior DASP sale if it had awarded based on single-loan bids rather than the pool-level bids. However, our interviews with FHA officials and purchasers revealed uncertainty in how proceeds from single-loan bids would compare to bids for pooled loans. For example, FHA officials said they benefit from economies of scale when offering larger pools and that administrative costs associated with servicing transfers would be higher if FHA sold loans individually. Furthermore, purchasers may decline to bid on individual loans. Purchasers we interviewed expressed interest in sets of loans rather than individual loans, in part to manage risk. When asked about smaller pools, FHA officials stated that they have used small pools to attract nonprofit bidders, but we found that these pools had a low number of bidders and many were not traded.

The effect on the MMI Fund of adding a winner-take-all option to FHA's auction structure is uncertain. Such a structure could result in increased bid amounts. Under a winner-take-all option, each bidder would choose to participate either at the pool level or at the sale level (the winner-take-all option). In either case, the bidder would place loan-level bids that would be rolled up to the pool or sale level, as illustrated in the sketch below.
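The following sketch illustrates the roll-up and the award rule discussed in this section, using hypothetical bidders and bid amounts; it is a simplified model of the comparison, not FHA's actual bid-evaluation process.

```python
# Hypothetical pool-level bids: bidder -> {pool -> rolled-up loan-level bids}.
pool_bids = {
    "bidder_A": {"pool_1": 4.0e6, "pool_2": 5.1e6},
    "bidder_B": {"pool_1": 4.3e6, "pool_2": 4.8e6},
}
# A winner-take-all (sale-level) bidder rolls loan-level bids up across pools.
wta_bids = {"bidder_C": 9.5e6}

# Best bid for each pool if pools are awarded individually.
best_by_pool = {p: max(b[p] for b in pool_bids.values())
                for p in ["pool_1", "pool_2"]}
pool_total = sum(best_by_pool.values())  # 4.3e6 + 5.1e6 = 9.4e6
best_wta = max(wta_bids.values())        # 9.5e6

# Assumed award rule, as described in this section: the winner-take-all bid
# prevails only if it exceeds the aggregate of the highest pool-level bids.
if best_wta > pool_total:
    print(f"All pools to winner-take-all bidder for ${best_wta:,.0f}")
else:
    print(f"Pools awarded individually for ${pool_total:,.0f} in total")
```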
If a winner-take-all bid exceeds the aggregate of the highest pool-level bid for each pool, all pools are awarded to the winner-take-all bidder. By definition, if a winner-take-all bidder won the auction, the resulting bid would increase FHA's overall sale proceeds. However, a winner-take-all structure could discourage bidder participation, which could lead to reduced bid amounts. Smaller entities and larger nonwinning bidders may be less likely to participate in future sales because of the costs associated with participating. According to auction theory, the higher the cost of performing due diligence and qualifying for and participating in the auction, the more bidder participation will be discouraged. Although the extent of purchasers' due diligence checks differed, all the purchasers we interviewed told us that they expend funds to purchase property valuations on at least a sample of loans to check whether the valuations listed in the servicer data were reasonable. Some purchasers also expend funds to examine servicing records or perform legal searches related to the loans. Additionally, bidders are required to submit deposits with their bids that FHA will return if the bidder is not awarded the pool or pools. One purchaser told us it was reluctant to spend the money on due diligence if it did not have a reasonable chance at winning the pool or pools. According to economic literature, having fewer bidders in an auction generally results in decreased prices and an increased opportunity for bidders to form strategic partnerships that would decrease competition. See appendix VII for more information on auction structures.

Pooling Strategy
It is unclear whether changes to FHA's pooling strategy—that is, its approach for selecting loans to include in its loan sale pools—would result in more bidders or higher bid amounts. We compared the pooling practices and pool-level data of FHA with those of Freddie Mac and Fannie Mae (the enterprises) to determine whether pooling strategy affected the number of bids. The enterprises started selling defaulted loans in 2015—much later than FHA—and have continued to do so, with Freddie Mac and Fannie Mae both holding sales in October 2018. FHA held three DASP sales in fiscal years 2015 and 2016 that overlapped with the time frame of the enterprises' sales. FHA and enterprise pools had different financial characteristics—loans in FHA pools were less delinquent, the properties were more likely to be occupied, and the loans had lower underlying property values compared to loans in enterprise pools (see fig. 14). Nonetheless, FHA received similar numbers of bids and bid amounts relative to the estimated property values as the enterprises. Generally, the number of bidders for FHA and the enterprises was between three and six, and bid amounts were typically between 58 and 71 percent of the underlying estimated property value. Many of the purchasers of FHA's DASP loan pools also purchased the enterprises' pools of defaulted loans.

It is unclear whether adjusting the pooling strategy to focus on specific loan characteristics would increase the number of bidders for FHA. Enterprise officials told us that they pool by geography, occupancy, and LTV ratio and also try to create loan pools such that all loans have the same servicer. Unlike the enterprises, FHA does not pool loans by similar characteristics, and pools frequently have loans from more than one servicer. FHA officials told us they primarily use geography and pool size to pool loans.
However, FHA officials also told us they try to include loans to make the pools attractive to different types of purchasers. Loans may be valued differently by bidders with unique strengths—such as strong default servicing infrastructures or experience rehabilitating properties—that would make the loans more profitable to them compared to other bidders. FHA officials stated that they encourage higher, outlier bids by structuring pools to attract different types of bidders.

We found differences in the extent to which loan-pool characteristics were associated with bidder participation for FHA's and the enterprises' defaulted loan sales. Our multiple variable regression analyses of how loan-pool characteristics predict the number of bidders showed the following:

Pools with a higher percentage of occupied properties were associated with an increase in the number of bidders in FHA pools but a decrease in the number of bidders in enterprise pools.

Average LTV ratio was not associated with the number of bidders for FHA or the enterprises.

National pools were associated with more bidders for FHA. This result may be due to fewer FHA postsale requirements for national pools.

For FHA pools, a higher number of servicers was associated with fewer bidders, possibly due to higher transaction costs. Although 86 percent of FHA pools had fewer than five servicers, the number of servicers for FHA pools ranged from one to 21. In contrast, all enterprise pools were single-servicer pools, except for four out of 101 pools (about 4 percent) that each had two servicers.

See appendix I for a detailed description of these analyses.

Setting aside pools for nonprofit organizations has not significantly expanded bidder participation in FHA loan sales. FHA performs market outreach to educate potential purchasers about the DASP process, but barriers to entry exist in terms of qualifications and the underlying capital required. In its 2015 sale, FHA began offering nonprofit-only pools. In 2016, FHA established a goal of selling 10 percent of assets to nonprofits and local governments. In 2015–2016, FHA offered nine pools exclusively to nonprofits, of which five (about 56 percent) received bids at or above FHA's reserve price and were traded. Each pool received between one and three bids. Despite heavy marketing, all traded pools were awarded to two organizations, including one first-time purchaser. In comparison, from 2010–2016, FHA offered 191 national and NSO pools, and 185 (about 97 percent) received bids at or above FHA's reserve price and were traded. Several stakeholders told us that most nonprofit organizations do not have the capacity to service delinquent loans, but they may be able to participate in the program in a different capacity. For example, two purchasers partnered with nonprofit organizations to perform outreach to borrowers.

Conclusions
Since 2002, FHA has used loan sales intermittently to reduce its backlog of defaulted mortgages and preserve the financial health of the MMI Fund. In addition, some homeowners have received additional opportunities to modify their loans and retain their homes through the program. Yet, our review found several areas where FHA can improve its management of DASP through more formalized procedures and analyses, as follows.

Improving controls. By evaluating eligibility at various points throughout the 3-month period prior to the sales, including after the servicer update, FHA could better prevent the sale of ineligible loans.
Additionally, as FHA finalizes its comprehensive procedures, it can better ensure that it is considering the effects of previous changes on the program by including procedures for reviewing and documenting program changes in a timely manner.

Using performance data. FHA has not developed key performance measures for DASP. Without measurable targets related to clear program objectives, FHA is not well-positioned to assess the effectiveness of DASP—which is still considered a pilot program—in achieving its objectives. Furthermore, by using performance data to determine the optimal timing of DASP sales, FHA could help the program achieve higher recoveries.

Evaluating outcomes. FHA has not conducted an analysis that compares the extent to which sold loans help avoid foreclosure, as compared to similar, unsold loans. Such an analysis would help assess DASP's effectiveness in meeting a program objective.

Monitoring and evaluating purchasers' modifications. FHA does not monitor purchasers of defaulted loans to ensure they are complying with FHA's requirement to offer payment-lowering modifications to eligible borrowers. Additionally, FHA may not collect the data it needs to evaluate whether modifications offered by purchasers remain sustainable. With better monitoring, FHA could determine whether individual purchasers are meeting these requirements.

Maximizing benefits of loan sales. FHA has opportunities to make changes in how loan sales are held and structured that could enhance bidder participation and better meet the DASP objective of maximizing recoveries to the MMI Fund—which are two characteristics of successful auctions. Providing better advance notice to prospective bidders, setting reserve prices based on realistic expectations, and setting loan eligibility requirements that encourage more bidding could improve the results of DASP sales and thereby reduce losses to the MMI Fund.

Recommendations for Executive Action
We are making the following nine recommendations to FHA:

The Commissioner of FHA should ensure that its eligibility checks are conducted throughout the DASP sale process, such as by establishing a schedule to check for eligibility at certain milestones. (Recommendation 1)

In formalizing procedures for DASP, the Commissioner of FHA should document processes for timely consideration and review of program changes. (Recommendation 2)

The Commissioner of FHA should clearly define DASP objectives and develop measurable targets for all program objectives. (Recommendation 3)

The Commissioner of FHA should use performance data to develop criteria for when to hold DASP sales. (Recommendation 4)

The Commissioner of FHA should evaluate loan outcomes under DASP compared to outcomes for similar, unsold loans. (Recommendation 5)

The Commissioner of FHA should monitor individual purchasers' compliance with FHA's modification requirements and ensure the purchasers submit the data needed to evaluate the sustainability of modifications. (Recommendation 6)

The Commissioner of FHA should communicate long-range notice to prospective bidders of upcoming DASP sales. (Recommendation 7)

The Commissioner of FHA should develop a methodology to assess the range of possible outcomes for loans when setting DASP reserve prices. (Recommendation 8)

The Commissioner of FHA should analyze FHA's loan portfolio and market information before setting loan eligibility criteria. (Recommendation 9)

Agency Comments and Our Evaluation
We provided a draft of this report for review and comment to HUD and FHFA.
HUD provided written comments, which have been reproduced in appendix VIII, that communicate FHA's response to the report. Both HUD and FHFA provided technical comments, which we have incorporated, as appropriate. In its written response, FHA's management generally agreed that opportunities exist for improvement to single-family loan sales through more formalized procedures and analyses, as the defaulted loan disposition option transitions to a permanent disposition alternative. FHA generally agreed with seven recommendations and did not explicitly agree or disagree with two recommendations.

FHA neither agreed nor disagreed with our recommendation that FHA should ensure that its eligibility checks are conducted throughout the DASP sale process, such as by establishing a schedule to check for eligibility at certain milestones. FHA stated that it works with the servicers and relies on them to determine eligibility throughout the DASP sale process. FHA also stated that its management agrees to include a schedule of eligibility checks in its procedures. We acknowledge that servicers check loan eligibility throughout the process, as stated in the report. However, we maintain that FHA and its contractors should also space their own checks throughout the process, specifically scheduling some closer to the bid date, and not rely exclusively on the servicers for this function at the end of the sale process.

FHA neither agreed nor disagreed with our recommendation that FHA should clearly define DASP objectives and develop measurable targets for all program objectives. FHA management stated that it believes it already has clear objectives and performance management in place for its DASP objective to maximize recoveries to the MMI Fund and that it measures whether it is meeting this objective. We acknowledge that FHA's objective to maximize recoveries to the MMI Fund is clear and that it has a measurable target. However, as stated in the report, agency documents and program changes reflect additional program objectives related to preserving homeownership, helping to stabilize neighborhoods, and offering borrowers a second chance at avoiding foreclosure that do not have measurable targets. We maintain that FHA should clarify its program's objectives in agency documents, whether that be one objective or several, and ensure that each objective has a measurable target.

FHA also took issue with aspects of our comparison of sold and unsold loans in its written response and technical comments. In its written response, FHA noted that the unsold loans in our analysis are invalid for comparison to sold loans because these unsold loans had not been deemed by servicers as having completed all applicable loss mitigation activities prior to being included in the analysis the way sold loans had. We attempted to minimize differences between the sold and unsold loans by matching loans across several variables that could affect the likelihood of foreclosure or foreclosure avoidance. We found a high rate of similarity between the two groups and indirectly controlled for any differences in the extent of loss mitigation by including length of delinquency as one of the matching variables. According to the FHA servicing handbook, servicers are generally required to either use a loss mitigation option for which a borrower qualifies or initiate foreclosure within 6 months of the default date. In its technical comments, FHA also noted that our matching of comparison loans omitted important variables.
In particular, FHA noted that the analysis did not hold constant several factors related to the risk of foreclosure, including default risk as measured by FICO scores, debt-to-income ratios, home price appreciation, and loan amount and term. However, we indirectly controlled for loan term and home prices by matching loans by origination years and indirectly controlled for loan amount and home prices by matching on categories of LTV ratios. We did not control for debt-to-income ratios or FICO scores, but FHA's data systems did not contain them for unsold loans, and FHA does not include them as criteria for DASP eligibility. Further, these variables may not be substantially different between the sold and unsold loans because the loans in both groups are severely delinquent. We revised the report to clarify that we estimated the LTV ratio at the time of the DASP sale. We calculated the LTV ratio using the outstanding loan amount and estimating current property values by adjusting the original sale values for regional changes in home prices over time.

In addition, FHA stated in technical comments that our comparison group is invalid because 100 percent of loans in DASP sales would end in foreclosure if they were not included in a sale. FHA stated that the only loans eligible for sale are those for which the only alternative remaining to the borrower is foreclosure. However, we disagree that all sold loans would have ended in foreclosure had they not been sold. As discussed in the report, unsold loans with characteristics similar to sold loans experienced a range of outcomes following the sales, with up to 34 percent experiencing outcomes other than foreclosure. In addition, the status of delinquent loans can be very fluid throughout the sale process, even after purchasers place bids on them, and borrowers who previously did not qualify for a loss mitigation option could become eligible to be evaluated again (and their loan could become ineligible for sale) if their circumstances change. For example, our analysis of FHA data found that from 2010 through 2016 about 23 percent of loans were removed from sales between the bid and claims dates due to, among other things, loans entering into loss mitigation. Furthermore, we found that for five individual loan pools, more than half of the loans were removed from the sales between the bid and claims dates. These results argue against the validity of FHA's presumption that all loans selected for sales would have ended in foreclosure.

Although our matching process does not capture all potential foreclosure risk characteristics and our results should be interpreted accordingly, our analysis supports our assumption that the pools of sold and unsold loans are generally comparable and describes relationships between DASP participation and loan outcomes. We maintain that our approach is reasonable using the available data and forms a sound basis for the findings and recommendations in the report. As FHA considers actions in response to our recommendations about evaluating loan outcomes and assessing its methodology for setting reserve prices, we encourage it to further enhance the robustness of these analytical methods.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of HUD, the Director of FHFA, and other interested parties.
In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX.

Appendix I: Objectives, Scope, and Methodology
The objectives of our report were to examine (1) the changes the Federal Housing Administration (FHA) has made to the Distressed Asset Stabilization Program (DASP) over time; (2) certain DASP procedures, including those associated with loan eligibility, and documentation; (3) FHA's evaluation of the identified outcomes for loans that have been sold through DASP and how these compare with similar, unsold loans; and (4) the potential effects that changes to DASP might have on the Mutual Mortgage Insurance Fund (MMI Fund).

Databases Used in Analyses throughout the Report
To conduct the data analyses discussed in the sections below, we used the FHA data sets listed in table 2. (We discuss the use and reliability of these data sets in the sections that follow this table.)

Document Review and Interviews
To address all the objectives, we reviewed relevant laws, agency documents, and agreements. We reviewed the National Housing Act, Department of Housing and Urban Development (HUD) program evaluation policy and sale notices in the Federal Register, and Office of Management and Budget (OMB) Circular A-11. We reviewed HUD's contractual agreements with servicers and purchasers for each DASP sale from 2010 to 2016, which are called, respectively, Participating Servicer Agreements (servicer agreements) and Conveyance, Assignment, and Assumption Agreements (purchaser agreements). We also reviewed other agency documents, including HUD's Fiscal Year 2017 Annual Performance Report, FHA's DASP sale results, FHA's Actuarial Reports, HUD's Reports to the Commissioner on Post Sale Reporting, and the Federal Housing Finance Agency's (FHFA) Enterprise Non-performing Loan Sales Reports. We also reviewed prior GAO work on related topics.

We interviewed officials from multiple offices within HUD, including the Offices of Asset Sales, Single Family Asset Management, Risk Management and Assessment, Finance and Budget, and the National Servicing Center. We also interviewed HUD's three primary contractors for DASP at the time of our review—transaction specialist: Verdi Consulting; compliance analytics: SP Group; and program financial adviser: NOVAD Management Consulting. We interviewed officials from FHFA and the government-sponsored enterprises (enterprises)—Freddie Mac and Fannie Mae—as they also auction defaulted loans. We interviewed and reviewed reports from selected consumer advocacy organizations and industry stakeholders that included five servicers, seven purchasers, and two loan-sale advisory firms. In interviews, we generally discussed with participants the following topics: changes to DASP over time; what works well and what could be improved in DASP; foreclosure avoidance options that purchasers offer; the effectiveness of FHA's 2015 and 2016 DASP reforms; communication to borrowers whose loans are selected for a DASP sale; and the auction process and effect of alternative auction structures on the MMI Fund. To select servicers and purchasers to interview, we analyzed the bid day pool-level data and postsale data, respectively.
We selected and interviewed five servicers from a universe of 56 servicers based on high and low participation in terms of number of sales, loans sold, and the unpaid balance of the loans, as well as type of institution (bank and nonbank). We selected and interviewed seven purchasers from a universe of 29 purchasers based on participation, postsale foreclosure rate, and type of institution (for-profit and nonprofit). The views and practices of the servicers and purchasers we selected may not represent those of the servicers or purchasers not selected.

Identifying and Mapping Loans Sold through DASP
To identify a complete list of the loans sold through DASP (sold loans), as described in the background section of the report and used in analyses throughout, we obtained and analyzed postsale reporting data. Per the purchaser agreements, purchasers are required to report the outcome status of sold loans on a quarterly basis for 4 years following the transfer of loan servicing responsibilities. The quarterly postsale reports did not always include data for every purchased loan. We therefore compared the number of loans included in each quarterly postsale report for each pool and used the quarterly reports with the highest loan count to develop a complete list of the loans sold through DASP. To develop the map showing the concentration of sold loans by state, we used data from the Single Family Default Monitoring System (default monitoring system) to calculate the ratio of loans sold through DASP to FHA-insured, defaulted loans with six or more missed payments in July of each year. We then categorized states into five ratio categories based on the distribution of ratios across states. We limited our review of participants and characteristics to the loans included in our comparison analysis of outcomes to provide descriptive context for this analysis.

To assess the reliability of the data sources above, we interviewed FHA officials about how the data were collected, processed, and accessed. We also identified the sold loans that were not reported in the default monitoring system at the time servicers submitted the loans to FHA for sale. We found that less than 0.1 percent of the sold loans in our scope were not reported as delinquent by servicers and determined that, due to their small percentage, excluding these loans would not bias our results. Based on our interviews and review of unreported loans, we concluded that servicers generally reported sold loans in the default monitoring system, and we found the data to be sufficiently reliable for the purpose of identifying and describing sold loans.

Examining DASP's Current Process and How It Changed over Time
To describe the DASP process and changes to the program over time, we reviewed FHA documentation, legislation, and other reports. To describe how DASP currently works, we analyzed the 2016 servicer and purchaser agreements and interviewed FHA officials and servicers. To describe how the program changed over time and the type of changes that FHA made, we reviewed HUD's authorizing legislation to accept assignment and sell loans, program requirements under OMB Circular No. A-11, and HUD press releases that announced the program's initiation and changes. To identify changes in servicer agreements and purchaser agreements since 2010, we performed a content analysis identifying differences from sale to sale (one servicer agreement and one purchaser agreement for each sale between 2010 and 2016).
One analyst performed the review, and a second analyst verified the selected content. To gather additional background information on the program and the changes over time, we reviewed reports issued by the HUD Office of Inspector General (OIG) and consumer advocacy and other research organizations such as the National Consumer Law Center, Center for American Progress, and Urban Institute. To corroborate our information on the program and changes, we asked FHA to provide us a list of changes to the program between 2010 and 2016, and we interviewed FHA officials in HUD headquarters and at the National Servicing Center. We further corroborated our understanding of DASP through interviews with the servicers and purchasers.

Evaluation of Certain DASP Procedures and Documentation
To identify FHA's procedures for monitoring loan eligibility, we examined procedures identified in the servicer agreements and contracts and statements of work for entities assisting in oversight of DASP sales. We assessed the extent to which these procedures existed and were working effectively by reviewing status codes from FHA's default monitoring system and examining relevant findings from HUD OIG audit reports. We found limited information in agency documentation on steps conducted to verify loan eligibility and had to rely on discussions with FHA staff and contractors on monitoring processes. We also interviewed servicers on their process for selecting loans and certifying loan eligibility for DASP. We further corroborated this information by providing a combined list of steps to FHA officials to verify accuracy.

To assess whether FHA's procedures for assessing loan eligibility were working, we determined the extent to which FHA's sold loans appeared to be ineligible in its 2016 sales. To identify the ineligible loans, we compared the eligibility criteria listed in the 2016 servicer agreements to the data in the default monitoring system. We obtained the default information for sold loans for the period 2 months prior to the bid date—the period when servicers generally submit loans for sale—and at the bid date. We limited our analysis to loans sold in 2016 because FHA's loan eligibility criteria changed from sale to sale and 2016 was the most recent year a sale occurred. We selected a nongeneralizable sample of 10 loans with ineligible default codes in the default monitoring system as of the bid date. To determine why FHA sold loans that appeared to be ineligible, we provided a list of sold loans with ineligible codes to FHA staff for them to research and provide their explanations. We followed up in interviews with officials from FHA's Office of Asset Sales to further clarify their responses. We also interviewed FHA officials regarding data reliability and to ensure that our understanding of the default codes and their corresponding eligibility or ineligibility for sale was accurate. We also performed electronic checks for consistency and validity and found the data to be sufficiently reliable for the purpose of determining default status, length of delinquency, and the extent to which loans that FHA sold in 2016 appeared to be ineligible.

Analysis of Loan Modifications
To assess whether DASP purchasers offered borrowers payment-lowering modifications, we evaluated the loan modifications offered by individual purchasers by comparing borrowers' monthly mortgage payments prior to being modified to their monthly payments after being modified, as sketched below.
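The sketch below is a minimal illustration of that comparison; the records and field names are hypothetical stand-ins for the postsale report and submitted loan database fields.

```python
# Hypothetical records joining premodification payments (from the submitted
# loan database) with postmodification payments (from the most recent
# postsale report for each modified loan).
modified_loans = [
    {"case": "001", "pre_payment": 1_450.00, "post_payment": 1_210.00},
    {"case": "002", "pre_payment": 980.00, "post_payment": 1_015.00},
]

for loan in modified_loans:
    change = (loan["post_payment"] - loan["pre_payment"]) / loan["pre_payment"]
    lowered = change < 0  # did the modification lower the monthly payment?
    print(f"{loan['case']}: payment change {change:+.1%}, lowered={lowered}")
```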
We obtained postmodification payment data from the postsale reports and premodification payment data from the submitted loan database. Using the most recent postsale record for each modified loan, we calculated the change in payment resulting from the modifications offered by DASP purchasers. To confirm that we used the appropriate data sources and variables for our analysis, we contacted FHA's Program Financial Advisor, who collects postsale reporting data and reports some information on modifications. Our analysis included all loans sold in DASP sales that occurred between 2013 and 2016, with some exceptions, in line with the scope of our comparison analysis of outcomes. We selected this scope because it represented the period for which FHA was generally able to provide consistent postsale quarterly reports.

In addition, to assess the sustainability of the modifications offered by DASP purchasers, we used data on modification type from the postsale reports to calculate the number of modifications that included a deferment. We identified loan modification characteristics from prior GAO work. We also reviewed the purchaser agreements and postsale reports to examine the information available on modified interest rates. Our analysis was limited to modifications that were reported using the more expansive list of characteristic codes introduced in 2016, which accounted for about 95 percent of the modifications in our scope. To assess the reliability of the modification data, we checked for missing or invalid data entries across different modification fields, including modification date, modification type, and monthly payment before and after a modification. We found that purchasers generally reported consistent information on modifications for loans sold in DASP sales that occurred between 2013 and 2016 and determined the data to be sufficiently reliable for the purpose of calculating payment change and assessing the sustainability of modifications.

Comparison Analysis of Outcomes for Sold Loans and Unsold Loans

Scope of the Data
We used multiple FHA data sources to match sold loans to similar unsold loans and compare outcomes across the groups. We used data from FHA's default monitoring system and integrated database to obtain information on loan-level characteristics for both sold and unsold loans, such as length of delinquency. However, FHA data did not contain loans' current property value or current loan-to-value (LTV) ratio. To calculate the current property value, we generated property values for sold and unsold loans based on data in the integrated database, including property value at origination, date of origination, and location information. We then aged the property values to the matching month and year using FHFA's House Price Index data, which considers geography. We calculated the LTV ratio for sold and unsold loans by dividing the current unpaid principal balance obtained from the default monitoring system by the calculated current property value (sketched below). To identify the loans sold through DASP and to determine their outcomes, we used postsale reporting data reported by DASP purchasers. To determine monthly outcome statuses for unsold loans, we used FHA's default monitoring system and integrated database. Our analysis generally included loans sold in DASP sales that occurred between 2013 and 2016, but we excluded some sales and pools for various reasons, described below.
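Before turning to those exclusions, the following sketch illustrates the property-value aging and LTV calculation just described; the dollar amounts and index levels are hypothetical, not actual FHFA House Price Index values.

```python
# Hypothetical inputs: a property's value at origination and the regional
# House Price Index (HPI) levels at origination and at the matching month.
value_at_origination = 200_000.00
hpi_at_origination = 180.0
hpi_at_matching = 207.0
unpaid_principal_balance = 195_000.00

# Age the origination value to the matching month using the regional HPI.
current_value = value_at_origination * (hpi_at_matching / hpi_at_origination)

# LTV ratio: outstanding balance relative to the aged property value.
ltv = unpaid_principal_balance / current_value
print(f"Estimated current value: ${current_value:,.0f}; LTV ratio: {ltv:.0%}")
```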
We excluded loans sold in the DASP sales that occurred from 2010 through 2012 because FHA could not provide semiannual or quarterly postsale reports for these loans. We excluded loans sold in Neighborhood Stabilization Outcome (NSO) pools in the first sale in 2013 because FHA had not yet implemented reporting requirements for more detailed information on loan status for NSO pools. We excluded Direct Sales, through which FHA directly transfers loans to government entities, as well as Aged Delinquent Portfolio Loan Sales, because these sales do not follow normal DASP procedures. Lastly, we excluded loans in pools that were offered for sale but not traded and loans that dropped out before transfer and were never sold. FHA was generally able to provide quarterly reports for the remaining sales and pools within the required reporting time frame.

Data Preparation and Reliability
We took a number of steps to prepare and ensure the reliability of the data used to match sold loans to similar, unsold loans and compare outcomes. We generated seven datasets corresponding to the seven DASP sales in our scope. Each dataset was made up of the records in the default monitoring system 2 months prior to the bid date for the corresponding DASP sale—the time servicers submit eligible loans for sale to FHA, according to FHA officials. We eliminated duplicate case numbers as well as erroneous submissions, and we added sale and pool variables to identify sold loans based on the master list of sold loans. We also excluded unsold loans that were ineligible for sale at the time of matching. Specifically, we reviewed FHA's servicer agreements and developed criteria for excluding unsold loans from matching based on sale eligibility requirements outlined in these agreements. We interviewed FHA officials to ensure that our understanding of the default status codes and their corresponding ineligibility for sale was accurate. We then used this information to identify and exclude ineligible loans.

We performed a variety of electronic checks to test the completeness, consistency, and logic of outcome statuses for sold and unsold loans as reported by servicers. We excluded or corrected, where possible, a small percentage of sold and unsold loans (2 percent excluded and about 11 percent corrected) that had invalid or illogical reported statuses. We also excluded loans with invalid case numbers, loans erroneously reported as sold by purchasers, and other problem records. These exclusions accounted for less than 1 percent of the sold loans in our scope. We found that three pools were missing more than half of the expected number of postsale reports. Because these pools accounted for less than 2 percent of the sold loans in our scope, we decided to keep these pools in our analysis as they provided additional data points for estimating outcome probabilities, and including them would not significantly bias our results.

Finally, when assessing data reliability, we consulted relevant documentation on the default monitoring system, integrated database, and postsale reporting systems and the specific fields used from these systems. We also interviewed officials knowledgeable about how data from these systems were collected, maintained, and accessed. Based on these steps, we determined that the data were sufficiently reliable for the purpose of matching sold loans to similar, unsold loans and comparing outcomes.
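The preparation steps described above amount to a sequence of filters. The sketch below illustrates them with hypothetical records and status codes; FHA's actual field names and default status codes differ.

```python
import pandas as pd

# Hypothetical extract of default monitoring system records 2 months before
# a sale; the column names and status codes are illustrative only.
records = pd.DataFrame({
    "case_number": ["001", "001", "002", "003", ""],
    "status_code": ["D1", "D1", "X9", "D1", "D1"],
})

INELIGIBLE_CODES = {"X9"}  # e.g., statuses barred by the servicer agreements

prepared = (
    records[records["case_number"] != ""]   # drop invalid case numbers
    .drop_duplicates(subset="case_number")  # eliminate duplicate submissions
    .loc[lambda df: ~df["status_code"].isin(INELIGIBLE_CODES)]  # drop ineligible
)
print(prepared)  # loans remaining available for matching
```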
Matching Analysis
We used statistical matching methods to identify a comparison group of unsold loans that closely resembled sold loans on loan characteristics that could affect the likelihood of foreclosure. Unsold loans were matched to sold loans for each sale, resulting in seven groups of unsold loans corresponding to loans sold in the seven DASP sales that occurred in 2013–2016. We matched unsold to sold loans 2 months prior to the bid date across the following characteristics:

Length of delinquency. Number of missed payments at matching.
Occupancy status. Whether the property was occupied or vacant at matching.
Location. Location of the property, based on latitude and longitude.
Servicer. FHA-approved mortgage servicer.
Loan-to-Value (LTV) ratio category. Value of the property relative to the outstanding unpaid balance on the loan at matching.
Loan origination. Year of the loan's origination.

We excluded modification status from the matching criteria. While there is some indication that loans that have been modified once are more likely to redefault in the future, this is largely dependent on the quality of the modification. However, modification quality could not be determined based on the FHA data we received. Our analysis did not seek to conduct a definitive evaluation of the causal effects on outcomes of being sold through DASP. Instead, we sought to improve on simple comparisons of outcomes between sold and unsold loans by constructing a comparison group of unsold loans that were similar to sold loans on loan-level characteristics known to affect the likelihood of foreclosure. For example, matching sold and unsold loans by location minimized variation in neighborhood characteristics and local housing markets that could be associated with a higher or lower likelihood of foreclosure. We selected these factors based on our previous work on foreclosure mitigation and on consultations with subject-matter experts within GAO. See appendix V for more information on our statistical matching analysis.

Outcome Analysis
To compare outcomes for sold and unsold loans, we identified outcomes using postsale reporting data dictionaries in FHA's purchaser agreements as well as FHA's status codes used in its default monitoring system and integrated database data dictionaries. We grouped the outcomes into six outcome categories. To assign sold loans to a category, we used FHA's postsale reporting data, and to assign unsold loans to a category, we used FHA's default monitoring system and integrated database data. The outcome categories were as follows:

Foreclosure. Loans terminated with foreclosure.
Reperforming. Loans restored to performing status either under the original mortgage terms or through a permanent modification. In this outcome, the borrower retains ownership of the home.
Temporary Action. Loans with temporary action that allows the borrower to retain ownership of the home—for example, an agreement for paying the loan balance or restoring it to performing status has been reached but has not met FHA's time requirement to meet FHA's definition of performing. This category may also include other interventions that have the intent of keeping the borrower in the home, such as forbearance.
Short sale/deed-in-lieu of foreclosure. Loans that avoid foreclosure through short sales and deeds-in-lieu of foreclosure. In this outcome, the borrower loses ownership of the home.
Unresolved. Loans remaining in default status and whose outcomes were unresolved.
Other.
Loans whose outcomes do not fit into these other categories. A number of sold loans were reported by purchasers as resold, with no further outcome updates, and we decided to categorize these separately. Purchasers had the option of reporting on loans as resold until 2015, when FHA introduced a reporting requirement that purchasers continue reporting the outcome status of loans even after selling them to new buyers. For the loans in our scope, the percentage of postsale reports that included a status of resold ranged from 7 to 35 percent for the 2013 and 2014 sales, before dropping to less than 1 percent beginning with the 2015 sale. Purchasers may have returned resold loans to performing status before selling them because performing loans are more profitable, and, by categorizing these loans separately, we may have undercounted performing loans for the earlier DASP sales. While we considered classifying loans reported as resold as performing, our review of status sequences for loans with at least one resold status showed that purchasers reported a range of nonperforming outcomes before and after the resold status, indicating that not all resold loans were performing. We therefore determined that categorizing resold loans separately would result in more reliable estimates for sold loans. Using data from the default monitoring system to classify outcomes for the matched, unsold loans in our analysis required us to make some assumptions that may have resulted in overcounting performing, unsold loans. Because the default monitoring system only contains data on delinquent loans and does not include status information on performing loans, our classification of performing, unsold loans was based on whether or not a servicer reported the loan in the default monitoring system in a given month. As a result, we assumed that unreported loans were performing. However, a missing report could also be the result of a reporting omission by the servicer, rather than an indication of a performing status. To mitigate the risk of overcounting performing, unsold loans, we used a variable indicating the length of a loan’s current default episode to help us distinguish between performing loans and servicer omissions. Specifically, we counted unsold loans as performing only if the default episode length in the most recent default monitoring system report was less than the reported episode length in the default monitoring system report preceding the period of no reporting. We assumed that a lower default episode length in the most recent default monitoring system report meant that the borrower was making payments during the period of no reporting. Otherwise, we classified periods of no reporting as missing. We compared monthly outcomes for sold loans and unsold loans after servicing transferred to the purchaser. We set the origin of the observation period to the latest servicing transfer date in each pool of sold loans and their associated matched unsold loans. Because the latest servicing transfer date varied across these groups, the number of observations and the associated dates varied across pools and sales. We measured outcomes for up to a maximum of 48 months, from January 2013 through December 2017, the most recent full quarter of postsale reporting data available at the time of our review. The follow-up periods ranged from the full, 4-year reporting period required by FHA for loans sold in the 2013 sales to 1 year for loans sold in the second sale in 2016. 
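The reporting-gap rule described above can be stated compactly in code. This is a simplified sketch of the logic only, with hypothetical episode lengths; it is not drawn from FHA's systems.

```python
def classify_reporting_gap(episode_len_before, episode_len_after):
    """
    Classify a period of no servicer reporting for an unsold loan using the
    rule described above: if the default episode length reported after the
    gap is shorter than the one reported before it, the borrower presumably
    made payments during the gap (a new, shorter episode began), so the gap
    is counted as performing; otherwise the gap is treated as missing data.
    """
    return "performing" if episode_len_after < episode_len_before else "missing"

print(classify_reporting_gap(9, 3))   # -> "performing" (episode reset)
print(classify_reporting_gap(9, 14))  # -> "missing" (same episode continued)
```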
See appendix V for more information on our statistical analysis of outcomes for sold loans and unsold loans.

Potential Effects of Changes to DASP on the MMI Fund

Association of Pool-Level Characteristics with Bidder Participation
To examine the extent to which loan-pool characteristics were associated with bidder participation for FHA loan sales and the enterprises' nonperforming loan sales, we built regression models. We identified from interviews key characteristics (independent variables) that may make loan pools attractive to certain bidders, such as having a single servicer or low vacancy. We obtained bid-day data from FHA and the enterprises that included the number of bidders (dependent variable) and the winning bid amounts, as well as the timing of the sale. We generated FHA pool characteristics from the loan-level data in FHA's submitted loan database and supplemented it with FHA default status data (see table 2 above for further information about FHA's data sets). For the enterprises, we obtained pool characteristics from a published FHFA report. This report provided a range of characteristics to compare to those of FHA's pools. See table 3 for our regression estimates of the relationship between pool characteristics and the number of bidders in FHA's DASP sales and the enterprises' sales. The associated p-values are presented in parentheses, and *, **, and *** denote significance at 10 percent, 5 percent, and 1 percent or better, respectively. In the report body we use the 95 percent confidence level as indicating significance of the regression estimates.

To assess the reliability of the FHA data, we performed reasonableness checks that resulted in the removal of FHA's 2010 sale, due to a large number of invalid case numbers, and of two additional pools from later sales; we also removed pools with missing or invalid data—in total, 4 percent of FHA's pools. We did not independently verify the data in the FHFA reports, but we interviewed the FHFA staff that generated the report about the reliability of the data. Some limitations stem from the differences between FHA's and the enterprises' pools and the underlying loans as well as the data available on the pools. For example, we use data from FHA sales from 2011–2016 and from sales in 2015–2017 for the enterprises. We use the time variables to control for housing market differences as well as the defaulted loan sale market. Additionally, we included FHA's nontraded pools but not the enterprises' nontraded pools because the FHFA reports did not present data on these pools. We showed the differences and similarities across the entities in figure 14 in the report. We determined that data for the remaining pools were sufficiently reliable for examining the association of pool characteristics with bidder participation and for comparison between the enterprises' and FHA's sales.

To calculate pool reserve prices, we obtained FHA data on quarterly loss severity by disposition method for 2013–2016. Using our results from the outcomes comparison analysis, we calculated pool-level reserve prices and compared them to winning pool-level bids.

Auction Structure Analysis
To assess the effect that changing FHA's auction design could have on the MMI Fund and to identify elements of a successful auction structure, we reviewed economic literature on auction structures and auction descriptions in business and commercial literature.
To compare the DASP auction structure with those of the enterprises and of mortgage auctions in the private market, we analyzed agency and enterprise documents and interviewed market participants. We developed a detailed description of FHA's and the enterprises' current auction structures, including information about the nature of the loan pools being auctioned; about sellers, purchasers, and other auction stakeholders; and about the benefits and drawbacks of the auction design. In interviews, we received suggestions about aspects of FHA auctions that, if changed, may increase bidder participation. To examine these aspects, we interviewed purchasers on their potential interest in these changes and examined FHA sale data following an instance of a single purchaser winning all the pools in a sale. To assess the DASP auction structure, we compared it to selected characteristics of successful auctions and determined the extent to which the characteristics were used by FHA.

We conducted this performance audit from January 2017 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: DASP Servicers, Purchasers, and Characteristics of Sold Loans
This appendix includes descriptive information about the Federal Housing Administration's (FHA) Distressed Asset Stabilization Program (DASP) servicers, purchasers, and sold loans. The information presented is generally based on loans sold in DASP sales that occurred between 2013 and 2016.

DASP Servicers and Purchasers
Thirty-two servicers participated in DASP sales between fiscal years 2013 and 2016, with the largest participating servicer offering 48 percent (more than 44,000) of the loans sold. As seen in figure 15, the number of servicers increased from nine in the first sale in 2013 to 22 in the second sale in 2016. During this same period, 26 purchasers participated in the DASP sales, with the largest participating purchaser buying 27 percent (about 25,000) of the loans sold. The share of loans offered by individual servicers also varied over time and by sale. One or two servicers offered the majority of sold loans in earlier sales, but more servicers offered a greater share of the loans sold in later sales (see fig. 16). For example, one servicer offered 89 percent of the loans sold in the first sale in 2013, about 51 percent of the loans sold in the second sale in 2014, and about 8 percent of the loans sold in the second sale in 2016. During this time, new servicers began offering loans for sale, and servicers that had offered a smaller share of the loans sold in earlier sales began offering a larger share of loans for sale.

Characteristics of Loans Sold through DASP
Occupancy status. The majority of properties sold through DASP were occupied by the borrower, with a smaller portion having been vacated (see fig. 17). DASP purchasers told us that their ability to contact and engage borrowers is one determinant in whether they are able to offer loss mitigation options to avoid foreclosure. One purchaser noted that in cases where it is unable to contact the borrower, which may indicate that the property is vacant, it tries to foreclose as quickly as possible.

Delinquency.
The majority of loans sold through DASP had missed 12 or more payments (see fig. 18). As discussed earlier, a loan becomes delinquent after a borrower misses a single payment and goes into default after it is two payments past due. Generally, servicers must utilize a loss-mitigation option or initiate foreclosure within 6 months of default. As we previously reported, serious delinquency is among the factors associated with an increased likelihood of foreclosure.

Loan-to-Value (LTV) ratio. About 18 percent of sold loans had an LTV ratio of 110 percent or greater (see fig. 19). The LTV ratio represents the unpaid principal balance of a loan as a percentage of the current property value. As we previously reported, negative equity or a high LTV ratio is among the factors associated with an increased likelihood of foreclosure.

Origination. As figure 20 shows, sold loans were more likely to have originated at the peak of the housing crisis in 2008 and 2009. FHA officials told us that they used DASP to reduce the significant backlog of defaulted loans they were faced with following the housing crisis.

Appendix III: Federal Housing Administration Documents Guiding the Distressed Asset Stabilization Program
In this appendix, we describe the documents the Federal Housing Administration (FHA) uses to guide the Distressed Asset Stabilization Program. The documents listed in table 4 represent the current written procedures and guidance that FHA planned, as of May 2018, to incorporate into a single document—the Asset Sales Handbook—to centralize the information for internal and external stakeholders.

Appendix IV: Reported Postsale Modification Actions
We examined the different types of actions purchasers have used to modify loans they purchased through the Distressed Asset Stabilization Program (DASP) and the expected effect of each type of action on borrowers' payments. Table 5 summarizes our findings on postsale modification actions. Our analysis was limited to modifications reported using reporting codes introduced in the purchaser agreement for the first sale in 2016, and included loans sold between fiscal years 2013 and 2016. We found that the Federal Housing Administration (FHA) may not have the data it needs to determine whether payment-lowering modifications offered by purchasers were sustainable—for example, a modification in which a low payment was later adjusted to higher than what it was prior to modification. Therefore, we could not determine the long-term effect on payment for many modifications offered by purchasers, as noted by "unclear" in the last column of table 5.

Appendix V: Additional Information on Matching and Outcomes Analysis
This appendix provides additional methodological details on our analysis to compare outcomes between loans sold through the Distressed Asset Stabilization Program (DASP) and a comparison group of similar unsold loans. The analysis consisted of two parts: (1) applying statistical methods for constructing matched comparison groups and (2) estimating a statistical model of loan outcomes using the matched sample of loans.

Additional Information on Matching Analysis
We matched one unsold loan to each sold loan, using exact and Mahalanobis distance matching methods. We matched exactly on occupancy status, state, and loan servicer, and we matched the distributions of loan delinquency period, loan-to-value ratio (divided into five categories), geographic coordinate, and origination year. A simplified sketch of this matching approach follows.
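The sketch below is an illustrative rendering of combined exact and Mahalanobis distance matching, under assumed data structures (lists of loan dictionaries); the actual analysis used the "Matching" package in R, not this code.

```python
import numpy as np

def mahalanobis_match(sold, unsold, exact_keys, cont_keys):
    """
    One-to-one matching with replacement: for each sold loan, pick the unsold
    loan that matches exactly on exact_keys (e.g., occupancy, state, servicer)
    and is nearest in Mahalanobis distance on cont_keys (e.g., delinquency
    period, latitude, longitude, origination year).
    """
    X = np.array([[u[k] for k in cont_keys] for u in unsold], dtype=float)
    VI = np.linalg.pinv(np.cov(X, rowvar=False))  # inverse covariance matrix
    matches = []
    for s in sold:
        # Candidates that agree exactly on the categorical variables.
        idx = [i for i, u in enumerate(unsold)
               if all(u[k] == s[k] for k in exact_keys)]
        if not idx:
            matches.append(None)  # no exact-stratum candidate available
            continue
        v = np.array([s[k] for k in cont_keys], dtype=float)
        d = [float((X[i] - v) @ VI @ (X[i] - v)) for i in idx]
        matches.append(idx[int(np.argmin(d))])  # with replacement
    return matches
```

Because the nearest candidate is reused whenever it is the closest match, the same unsold loan can be selected for multiple sold loans, which is the "with replacement" behavior the next paragraph describes.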
Unsold loans could be matched multiple times in order to maximize the degree of similarity between the sold and unsold loans, given constrained sample sizes of potential comparison loans. (That is, we used one-to-one matching with replacement.) Matching occurred separately for each loan sale in order to measure the matching variables 2 months before each sale occurred. We assessed the quality of candidate matched samples by consulting univariate empirical-QQ plots, descriptive statistics, and multivariate Kolmogorov-Smirnov tests of equal distributions for each of the matching variables, as implemented in the "Matching" package for the statistical software R, version 3.5.1. We attempted to match exactly within the smallest geographical area that sample sizes allowed. Location is important for the outcomes of Federal Housing Administration (FHA) loans and is potentially correlated with many unobserved variables, such as local housing market conditions. After experimenting with multiple geographic areas, such as the census tract and county, we chose a strategy of matching exactly on state and matching in distribution on latitude, longitude, and product. This ensured that the comparison loans were in the same states as the sold loans, which held constant state-level differences in foreclosure processes and other political and legal factors. Although the matched loans were potentially in different counties or municipalities than the sold loans, generally they were still close to each other, as measured by the geographic coordinates. We obtained a similar matched sample of comparison loans for each loan sale, as summarized in table 6 and figures 21 and 22. Although we conducted the matching separately for each sale (in effect, matching exactly on sale), we combined the sales and their matched comparison loans for the purpose of summarizing their similarity across the matching variables.

Additional Information on Outcome Analysis

We used statistical modeling methods designed for longitudinal time-to-event or "duration" data to compare outcomes for sold and matched unsold loans. Conventional duration methods, such as "competing risks" models, would estimate the probability that a loan experienced one or more terminal outcomes by a certain follow-up time. These methods assume that event times are observed exactly and that no outcome can occur more than once. These assumptions were not realistic for our analysis. Loans could transition among several nonterminal outcomes over time, such as reperforming or temporary action, before experiencing a terminal outcome, such as foreclosure. Our data sources measured the status of unsold loans monthly and sold loans quarterly. However, events could occur on any date, in continuous time, so the status of each loan was unknown between pairs of reporting times (that is, interval-censored). We used multi-state Markov models to account for these features of the data. Our models assumed a directed graphical structure for how loans could transition among events between observed follow-up times, as described in figure 23. We developed our model of possible transitions based on FHA's typical process for managing unsold delinquent loans and DASP program rules for managing sold delinquent loans. To simplify the model, we did not allow paths for transitions that were infrequently observed, illogical, or inconsistent with prior knowledge about loan management. These unusual transitions in the data may reflect misclassified outcomes or transitions through unobserved outcomes between observation times.
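To make the modeling approach concrete, the sketch below shows one way such a restricted transition structure could be encoded and fit with the "msm" package for R, which this appendix cites as the estimation software. The state labels, allowed paths, initial intensity values, and data fields are simplified, hypothetical stand-ins rather than the exact specification in figure 23.

```r
# Illustrative multi-state Markov model for interval-censored loan outcomes.
# Hypothetical state coding: 1 = unresolved, 2 = temporary action,
# 3 = reperforming, 4 = short sale / deed-in-lieu, 5 = foreclosure.
library(msm)

# Transition-intensity structure: zero entries disallow a transition;
# nonzero entries are initial values for the allowed intensities.
# Rows 4 and 5 are all zero, making those outcomes terminal (absorbing).
Q <- rbind(
  c(0,    0.05, 0.05, 0.05, 0.05),  # unresolved
  c(0.05, 0,    0.05, 0.05, 0.05),  # temporary action
  c(0.05, 0.05, 0,    0.05, 0.05),  # reperforming
  c(0,    0,    0,    0,    0),     # short sale / deed-in-lieu (terminal)
  c(0,    0,    0,    0,    0)      # foreclosure (terminal)
)

# panel: one row per loan per reporting time, with numeric state codes;
# months is follow-up time, and sold flags a loan sold through DASP
fit <- msm(state ~ months, subject = loan_id, data = panel,
           qmatrix = Q, covariates = ~ sold)

# Transition probabilities P(t) = exp(tQ) at 12 months for sold loans,
# with 95 percent confidence intervals from 1,000 draws of the fitted
# multivariate normal distribution of the parameters
pmatrix.msm(fit, t = 12, covariates = list(sold = 1),
            ci = "normal", B = 1000)
```

A piecewise-constant relaxation of the time-homogeneity assumption, similar to the one described later in this appendix, could be obtained by adding a change point, for example msm(..., pci = 12).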
Table 7 gives the sample counts of the transitions in the matched sample of loan-month observations. The graphical version of our model implied a matrix of modeled transitions among outcomes, with transition probabilities set to 0 for paths between outcomes not shown in the graph. Specifically, we defined the loan outcome at time $t$, $Y_t$, as a stochastic process, taking values according to an underlying model of transition probabilities from time 0 to $t$:

$$P_{rs}(t) = \Pr(Y_t = s \mid Y_0 = r),$$

where $r$ and $s$ denoted two outcomes from the set of outcomes above in table 7, such as unresolved and reperforming. Consistent with existing literature, we assumed that the outcome process was a time-homogeneous Markov chain. This assumption made the model mathematically tractable, but required the transition probability at any follow-up time to be independent of prior outcomes and constant over the observation period. (We estimated versions of the model that relaxed this assumption, as described below.) Under this assumption, we modeled the transition hazard rate from outcome $r$ to $s$ as

$$q_{rs}(x) = q_{rs}^{(0)} \exp(x'\beta_{rs}),$$

where $x$ and $\beta_{rs}$ were vectors of covariates and transition-specific parameters (excluding an intercept) and $q_{rs}^{(0)}$ was an unspecified proportional baseline hazard. All covariates were time-invariant characteristics of the loans measured at baseline, 2 months prior to the loan's bid date, used to create the matched sample. We estimated $\beta_{rs}$ using maximum likelihood estimation methods, as implemented by the "msm" package in R 3.5.1. The body of this report provides estimated transition probabilities for various groups of loans, including loans that were sold or unsold. We estimated the probability of a loan's transitioning from unresolved at $t = 0$ to some other outcome at $t$ using the estimated parameters and the matrix exponential $P(t) = \exp(tQ)$, where $P$ and $Q$ are the matrices of transition probabilities and hazards, respectively, for all outcomes $r$ and $s$. We used Monte Carlo simulation from the fitted multivariate normal distribution of the parameters to estimate 95 percent confidence intervals for the transition probabilities, using 1,000 draws. In appendix VI, we provide more detailed estimates of these transition probabilities and their confidence intervals for key findings discussed in the body of this report. Our models estimated the difference in transition probabilities between sold and unsold loans in the matched sample by including an indicator for sold status in $x$. We estimated transition probabilities for certain subpopulations of loans, like specific purchasers or loan sales, by estimating separate models for each subpopulation. This approach allowed the models to be fully stratified and reduced the computational burden associated with estimating many parameters using a sample of 1 million or more observed transitions, as a fully interactive specification between sold status and the subpopulation variables would have required. However, this approach prevented us from estimating the partial interactions between sold loan status and the subpopulation variables. We conducted several validation and robustness checks of the analyses reported in the body of this report. These included the following:

Predictive fit. We did not design our models to predict future outcomes but rather to make inferences about the difference in transition probabilities between sold and matched unsold loans. However, to identify substantial problems with model fit, we compared the observed prevalence of each outcome to the estimated prevalence expected under our models.
Figure 24 shows the predictive fit for models with a covariate in $x$ for sold status and a piecewise-constant indicator for the period after month 12. The estimated and observed prevalence are generally close for most outcomes before month 40. After that month, the model underestimates the prevalence of foreclosure and overestimates the prevalence of unresolved. This lack of fit late in the observation period may reflect the substantial effect of sales cohort, which we modeled through separate models stratified by sale rather than as a covariate. In any case, the model fit was acceptable, given our nonpredictive use of the model and the limitations of using observed outcome prevalence rates to validate predictions of a process with interval censoring.

Time-inhomogeneous models. We relaxed our assumption that the transition intensities were constant throughout the observation period by including indicators in $x$ for whether the observation fell before or after 12 months. FHA changed DASP rules before the 2015 sales to extend the moratorium on foreclosures from 6 months to 12 months. Outcome transition estimates from a model including these time indicators plus a sold indicator appear in table 8, along with our base estimates from a time-homogeneous model with only the sold indicator. Although Akaike Information Criterion values showed that the piecewise model improved the fit, the estimated transition probabilities generally supported the same substantive conclusions. The piecewise model estimated that sold loans were somewhat more likely to transition to a short sale or deed-in-lieu outcome, and somewhat less likely to transition to reperforming, but the direction of the association was the same as in the time-homogeneous model. We used the time-homogeneous model to provide results in the body of this report and in appendix VI, due to the considerable computing time required to estimate models with piecewise-constant covariates.

Appendix VI: Data for Selected Outcome Figures and Additional Outcome Estimates

In this appendix, we provide data for selected borrower outcome figures presented in this report. We also provide additional outcome figures and data, as well as outcome data for sold loans and unsold loans by some loan-level characteristics. These figures and data are based on the statistical matching and modeling analysis of loans sold through the Distressed Asset Stabilization Program (DASP) and similar, unsold loans described in appendix I and appendix V of this report.

Data for Outcome Figures

Tables 9–12 present data for selected outcome figures shown in the report. Table 9 presents estimates of foreclosure and foreclosure avoidance outcome rates for sold loans and similar, unsold loans, based on statistical models (fig. 7). Table 10 presents these estimates for out-of-home and in-home outcomes (fig. 8). Figure 9 in the body of this report shows the foreclosure and foreclosure avoidance outcomes by DASP sale, and tables 11 and 12 present these estimates for all outcomes by DASP sale, 12 and 24 months following the servicing transfer date, respectively.

Additional Outcomes

As discussed in appendix I, to compare foreclosure and foreclosure avoidance outcomes for sold and unsold loans, we assigned loans to one of six outcome categories. Figure 25 and table 13 present the outcome figures and associated data for sold and unsold loans across all six categories.

Loan-Level Characteristics

Figures 26–29 compare specific outcomes for sold and unsold loans across different loan characteristics.
We selected characteristics and outcomes that showed clear patterns or differences between sold and unsold loans. Our analysis showed that the loan-to-value (LTV) ratio was less strongly associated with reperforming rates for sold loans than for similar, unsold loans (see fig. 26). For example, while the probability of reperforming varied across different LTV ratio categories for unsold loans, the probability varied less for sold loans. Our analysis of outcomes by different delinquency categories showed that length of delinquency was less strongly associated with reperforming rates for sold loans compared to similar, unsold loans (see fig. 27). For example, while the probability of reperforming 12 months after the servicing transfer date ranged from 8 to 29 percent across different delinquency lengths for unsold loans, this range was smaller for sold loans—about 9 to about 16 percent. Our analysis of outcomes by year of loan origination showed that origination year was less strongly associated with reperforming rates for sold loans compared to similar, unsold loans (see fig. 28). For example, the year of loan origination did not affect the probability of reperforming for sold loans. However, for unsold loans the probability of reperforming was lowest for loans originating in 2007–2008 at the beginning of the housing crisis. Our analysis of outcomes by occupancy showed that, for occupied properties, sold loans were more likely to experience foreclosure compared with similar, unsold loans (see fig. 29). However, for vacant properties, sold loans experienced foreclosure at equal or smaller rates compared to similar, unsold loans.

Appendix VII: Additional Auction Structure Information and Evaluation

The Federal Housing Administration (FHA) uses a pooled, highest-bidder, sealed-bid auction structure to sell its single-family defaulted residential mortgages through the Distressed Asset Stabilization Program (DASP). This auction structure is consistent with industry standards and private market practices for selling these mortgages and includes many characteristics of a successful auction. We identified characteristics of successful auctions by reviewing economics literature on auction structures and auction descriptions in business and commercial literature, and we obtained information about the nature of the loans being auctioned; about sellers, purchasers, and other auction stakeholders; and about the benefits and drawbacks of various aspects of the auction design. Table 14 shows some auction characteristics and an evaluation of FHA's DASP design.

Appendix VIII: Comments from the Department of Housing and Urban Development

Appendix IX: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Jill Naamane (Assistant Director), Rhonda Rose (Analyst in Charge), Abigail Brown, Stephen Brown, Karen Jarzynka-Hernandez, John Karikari, May Lee, Ned Malone, Paulina Maqueda-Escamilla, John McGrail, Samuel Portnow, Tovah Rom, Jena Sinkfield, Anne Stevens, Jeff Tessin, Jim Vitarello, Sarah Wilson, and Elisa Yoshiara made key contributions to this report. Also contributing to this report were DuEwa Kumara and Jason Rodriguez.
Why GAO Did This Study

HUD insures single-family mortgage loans and is authorized to sell defaulted loans under the National Housing Act. In fiscal years 2010–2016, FHA auctioned off approximately 111,000 loans to private purchasers under DASP. DASP helped reduce a backlog of federally insured defaulted loans stemming from the 2007–2011 financial crisis and was intended to protect the MMI Fund by paying insurance claims before the costly foreclosure process. GAO was asked to evaluate DASP. This report examines, among other things, certain DASP procedures, including verifying loan eligibility criteria, and documentation; FHA's evaluation of the identified outcomes of sold loans and how these compare with similar, unsold loans; and the potential effects that changes to DASP might have on the MMI Fund. GAO reviewed FHA policies, contracts, and reports, and interviewed FHA officials, selected servicers and purchasers based on sales participation, and other stakeholders. GAO also conducted a statistical analysis comparing outcome data for sold loans and similar loans that remained FHA-insured and analyzed the effect of loan pool characteristics on bidder participation.

What GAO Found

The Department of Housing and Urban Development's (HUD) Federal Housing Administration (FHA) uses multiple entities to check loan eligibility for the Distressed Asset Stabilization Program (DASP)—in which FHA accepts assignment of eligible, defaulted single-family loans from servicers in exchange for claim payments and sells the loans in competitive auctions. After servicers submit loans for sale, FHA and its contractors concurrently check loan data for completeness, validity, and eligibility. FHA relies on servicers to check eligibility a few weeks before and again after the bid date. The status of delinquent loans can be fluid, and a change in eligibility status close to this date may not be detected. GAO's analysis of fiscal year 2016 default data indicates about 2.67 percent of loans that FHA sold were ineligible based on length of delinquency or loss mitigation status. Without checking loan eligibility closer to bidding, FHA risks selling ineligible loans, and borrowers could lose access to benefits. FHA does not evaluate outcomes for sold loans against similar unsold loans. GAO found that, in aggregate, sold defaulted loans were more likely to experience foreclosure than comparable unsold defaulted loans (see figure). However, GAO's analysis identified varying outcomes by purchasers and sales. For example, some purchasers' loans had higher probabilities of avoiding foreclosure, with borrowers making regular payments again by 24 months after the transfer of loans. Also, loans sold in 2016 sales were less likely to experience foreclosure compared to unsold loans. HUD policy states that the agency's evaluations isolate program effects from other influences. Evaluating outcomes for sold loans against similar unsold loans could help FHA determine whether DASP is meeting its objective of maximizing recoveries to the Mutual Mortgage Insurance Fund (MMI Fund) and understand the extent to which DASP helps borrowers. Changing some of FHA's auction processes may help the MMI Fund. FHA could increase participation and MMI Fund recoveries in its auctions by communicating upcoming sales earlier. One purchaser said that additional notice would allow it time to plan for the capital needed to bid.
Also, FHA set reserve prices (minimum acceptable price) based on the amount it expected to recover after loans completed foreclosure—yet GAO estimates that some of these loans will avoid foreclosure (see figure). As a result, FHA risks recovering less for the MMI Fund in loan sales than if the loans had not been sold.

What GAO Recommends

GAO is making nine recommendations to FHA, including establishing specific time frames to check loan eligibility, evaluating loan outcome data, and changing auction processes to help protect the MMI Fund. FHA generally agreed with seven recommendations, and neither agreed nor disagreed with two. GAO maintains that all the recommendations are valid.
Background

Coast Guard Organizational Structure for TAP

Coast Guard staffing for the TAP program reflects the organizational structure of its Health, Safety, and Work-Life Directorate, which oversees TAP policy. The Coast Guard's TAP managers are assigned to 13 installations where Health, Safety, and Work-Life offices are located. One or two TAP managers are assigned to each of the Coast Guard's nine districts, which often span multiple states and territories, and these TAP managers oversee operations both for the installation where they work and for units stationed throughout the region (see fig. 1). For example, the TAP manager assigned to Coast Guard Base Cleveland oversees TAP implementation both for that installation and for Coast Guard units serving in Coast Guard District 9—a region that encompasses portions of eight states and the Great Lakes area. The program manager in Coast Guard Headquarters manages the Coast Guard's Transition Assistance Program. The Coast Guard protects and defends over 100,000 miles of U.S. coastline and inland waterways, and consequently, TAP-eligible Coast Guard servicemembers sometimes work in small, widely dispersed units assigned to remote locations, including on Coast Guard vessels. One aspect of the Coast Guard's mission—serving as a first responder for maritime search and rescue in United States waters—can require Coast Guard servicemembers to respond to emergency situations at a moment's notice.

TAP Process and Timing

The Coast Guard, which is overseen by DHS, not DOD, generally oversees TAP implementation for its servicemembers. Federal law requires DOD and DHS to require eligible servicemembers under their respective command to participate in TAP, with some exceptions. In response to this statutory requirement, DOD has promulgated regulations and developed issuances which require that servicemembers complete the component parts of the TAP program, and that commanding officers ensure that servicemembers under their command complete these parts, with some exceptions. In contrast, according to Coast Guard officials, the Coast Guard has not promulgated any regulations to implement TAP. Further, the Coast Guard issued its most recent Commandant Instruction in 2003, approximately 8 years prior to the TAP redesign in 2011. However, the Coast Guard issued policy guidance in 2014 that made some limited updates to the Commandant Instruction. Coast Guard officials also said the Coast Guard plans to issue a new TAP Commandant Instruction in May 2018. Under the redesigned TAP, Coast Guard servicemembers—like their DOD counterparts—begin TAP by attending pre-separation or transition counseling where they are briefed on TAP requirements and available transition resources. Pre-separation or transition counseling can be delivered by TAP managers, uniformed career counselors, or online (see fig. 2). Coast Guard servicemembers are able to participate in TAP either through the Coast Guard or at a DOD installation, if space is available. During or at the end of pre-separation or transition counseling, participants register for and attend TAP courses. The core curriculum includes three required courses—the Department of Labor (DOL) Employment Workshop, unless exempt, and Department of Veterans Affairs (VA) Benefits Briefings I and II—and other courses that focus on aspects such as translating military skills and experiences into credentialing for civilian jobs and preparing a financial plan.
Participants may also elect to attend additional 2-day classes either at a Coast Guard or DOD installation or online through the Joint Knowledge Online platform, according to agency officials. These additional 2-day classes include Accessing Higher Education, Career Technical Training, and Entrepreneurship. Federal law requires the Coast Guard to permit servicemembers who elect to take these additional 2-day classes to receive them. Federal law also establishes time frames within which servicemembers with anticipated separation or retirement dates should begin the program. According to federal law, retiring servicemembers with anticipated separation dates are expected to begin TAP as soon as possible during the 24-month period preceding that date, but not later than 90 days before separation. Similarly, servicemembers with anticipated separation dates who are not retiring are expected to begin as soon as possible during the 12-month period preceding that date, but not later than 90 days before separation. Servicemembers who learn that they will separate or retire from the military fewer than 90 days before their anticipated separation or retirement date are expected to begin TAP as soon as possible within their remaining period of service.

Interagency Collaboration

As we previously reported, officials from multiple federal agencies collaborate to deliver and assess TAP. The TAP interagency governance structure includes senior officials from DOD, VA, DOL, DHS, the Department of Education, the U.S. Office of Personnel Management, and the Small Business Administration (SBA), who participate in TAP Senior Steering Group meetings at least every month and TAP Executive Council meetings each quarter. Further, officials tasked to particular interagency working groups focus on specific elements of TAP (e.g., curriculum or performance measures), meet more frequently (typically at least once a month), and generally communicate weekly, according to agency officials. The TAP program manager for the Coast Guard told us that he participates in several of the working groups. One such working group is the performance management working group that oversees the interagency TAP evaluation plan, which includes monitoring performance measures related to TAP requirements, indicators of post-program outcomes, and formal evaluations sponsored by interagency partners. While DOD tracks TAP-specific performance measures, other interagency partners track indicators of how well veterans fare after leaving military service. For example, DOD tracks performance measures prior to servicemembers' separation, such as TAP participation and credential attainment rates, while other agencies track post-separation indicators, such as unemployment rates among veterans ages 18 to 24. The performance management working group also reviews the formal evaluation efforts led by individual agencies and provides feedback to help shape their efforts in accordance with the TAP Evaluation Plan.

Coast Guard Lacks Reliable Data and Cites Several Factors that Affect Participation

Coast Guard Lacks Reliable Data on Servicemembers' Participation in TAP

The Coast Guard does not have complete or reliable data on participation levels in TAP. According to Coast Guard officials, a major reason why the data are not reliable is that the Coast Guard lacks an up-to-date Commandant Instruction that specifies when to record TAP participation data. Consequently, the data are updated on an ad hoc basis, according to agency officials, and may not be timely or complete.
For example, one TAP manager said she updates the list of TAP participants for her installation only once every few months because of her other duties. According to federal internal control standards, management should use quality information—including current and timely information—to achieve the entity's objectives and to communicate quality information to external parties. Given the lack of timely and complete data, we determined the Coast Guard's TAP data were not sufficiently reliable for an analysis of participation in TAP classes. Because it lacks policies and procedures governing reliable data collection, including when data should be entered and by whom, the Coast Guard cannot determine to what extent its servicemembers attend TAP, although federal law mandates that DHS ensure all TAP-eligible servicemembers participate in the program. In addition, the data collection system currently used to track TAP participation is not sufficient to ensure reliable data. For example, according to Coast Guard staff, TAP staff enter TAP participation data into a shared spreadsheet that all TAP managers can edit. Specifically, staff record the names of servicemembers they identify as TAP-eligible and whether these individuals completed required portions of TAP. Coast Guard officials said they are in the process of adopting a new data system—DOD's TAP-IT Enterprise System—in October 2018 to more reliably track TAP participation, and that they expect to fully adopt this system after a new Commandant Instruction is finalized in May 2018. In November 2016, DOD launched the new system to collect TAP-related data for servicemembers in the Army, Navy, Air Force, and Marine Corps. In addition to standardizing data collection and improving data completeness and accuracy, the TAP-IT Enterprise System is expected to track information related to the time frames of servicemembers' participation. According to a senior DOD official, the military services will not be able to use the system to generate unit-level or installation-level reports until October 2018.

Serving at a Remote Installation and Rapid Separations Hindered TAP Participation, As Did Limited Staff Capacity and Competing Priorities

According to our survey, the most common factor affecting TAP participation, cited at 11 of the 12 Coast Guard installations we surveyed, pertained to servicemembers assigned to geographically remote locations. The next three most commonly cited factors—each cited by 7 of the 12 installations surveyed—relate to the timing of TAP participation: rapid separation from the military, not being sufficiently aware of the need to attend TAP, and starting the transition process too late to attend. (See fig. 3.) Headquarters-based TAP officials identified additional factors that may affect servicemember participation, such as separating from the Coast Guard Reserves or retiring with no plans to work after leaving the military. However, the Coast Guard lacks participation data to verify whether participation rates for these groups are in fact lower than for other Coast Guard servicemembers. Coast Guard installations we surveyed did not indicate that unit commanders or direct supervisors affected participation in TAP's required courses or additional 2-day classes. However, Coast Guard servicemembers and TAP officials we spoke with said unit commanders or direct supervisors sometimes prevented participation.
All three TAP managers we spoke with (of 12 nationwide) told us that while commanders generally allowed servicemembers to register for TAP courses, they occasionally required them to return to their duties before completing the courses. We observed this during a TAP class at a Coast Guard installation we visited, when a servicemember's commander ordered her to return to the unit during TAP training and she missed a briefing she wanted to attend. Two of three TAP managers we interviewed also said commanders sometimes required servicemembers under their command to wait to take TAP classes until close to their separation date because of mission priorities. Two of three TAP managers interviewed said that commanders in the Coast Guard face unique challenges in ensuring TAP participation. They said commanders in all branches of the military must balance competing demands, including their primary mission and the training needs of the personnel they oversee. They said it can be particularly difficult for Coast Guard commanders to juggle these priorities because Coast Guard servicemembers are sometimes assigned to very small units or called to return to duty for emergency situations during scheduled TAP classes. One TAP manager said that a commander in a remote location had collaborated with her to provide a classroom-based TAP class for transitioning Coast Guard servicemembers within the commander's unit, but rescue efforts occurred during the class, which resulted in most of those servicemembers returning to their vessel to respond to the emergency. In addition, all three TAP managers we spoke with said there are limited resources for holding TAP in a classroom setting. Consequently, classroom-based TAP may not be offered frequently in remote locations, making rescheduling difficult. One TAP manager said that her installation typically offers three or four TAP classes a year and, because classes are so infrequent, servicemembers are encouraged to start TAP as soon as possible prior to separation. Coast Guard staff we interviewed said that juggling competing priorities affected the Coast Guard's ability to implement TAP. Both the frontline and headquarters staff who oversee TAP implementation said they oversee at least three other programs in addition to TAP at their installation and throughout their regions, including the Coast Guard's relocation and spousal employment programs.

Coast Guard Relies on Online Delivery of TAP for Several Categories of Servicemembers

The Coast Guard relies on online delivery of TAP information and classes for servicemembers who are rapidly separating and assigned to remote and geographically dispersed units, according to our survey results and several Coast Guard staff we interviewed. For example, all 12 installations we surveyed cited servicemembers facing rapid separations as a reason for accessing TAP training online, and 11 cited servicemembers being remotely stationed as a reason. Coast Guard staff also said online TAP was used for servicemembers interested in attending additional 2-day classes. The three TAP managers we interviewed also identified several reasons why installations had to rely on online TAP classes. For example, one manager corroborated our survey results, saying that many Coast Guard servicemembers worked in small units assigned to remote and geographically dispersed locations, making it difficult to convene a sufficient number of transitioning Coast Guard servicemembers to meet minimum class size requirements.
In addition, all three managers said they used the online version of TAP for remotely stationed Coast Guard servicemembers because the Coast Guard lacked the resources for them to attend classes in person. Although they preferred that servicemembers participate in live, classroom-based TAP classes, all of the managers acknowledged that the online version of TAP played an integral role in ensuring that more servicemembers could participate in the program. However, two of them noted that while classroom delivery of TAP classes provided an interactive learning environment that allowed participants to ask questions and learn from their peers, online participants generally clicked quickly through the slides and had difficulty understanding the information being presented. Two managers told us that they regularly used the online version to deliver parts of the TAP curriculum. For example, one TAP manager said she required participants to complete the crosswalk of military and civilian occupations class online before attending required classes in person. Two managers noted that additional 2-day classes were available online, and one noted that some servicemembers attended these classes in a classroom setting either on a Coast Guard base or a DOD installation. Finally, all three TAP managers said that many participants in online TAP classes would benefit from participating in a real-time virtual version of TAP led by live facilitators. Two managers told us that having a remote facilitator delivering TAP in real time would give participants more opportunity to ask questions and better understand and absorb class content.

Feedback About TAP Was Generally Positive

Despite these challenges, TAP managers and separating Coast Guard servicemembers we interviewed provided generally positive feedback about the TAP program. All of the 25 Coast Guard servicemembers we spoke with said that the information they received during the courses was useful and they liked the instructors. One Coast Guard servicemember praised the classroom courses for being interactive, and several Coast Guard servicemembers said they wanted the opportunity to retake TAP before or shortly after they separated from the Coast Guard. However, many said the volume of information presented in a short period of time could be overwhelming and was like "trying to drink from a firehose."

Coast Guard Cannot Effectively Measure Performance or Monitor Implementation to Ensure Key TAP Requirements Are Met

Coast Guard Has Not Set a Formal Performance Goal for TAP Participation and Cannot Effectively Measure Program Performance Because It Lacks Reliable Data

The Coast Guard has not set a formal performance goal for TAP participation, according to a Coast Guard official, and as previously discussed, does not have complete, reliable data. Without reliable information, the Coast Guard cannot effectively monitor TAP implementation or measure program performance. DHS is mandated to ensure that all TAP-eligible servicemembers of the Coast Guard participate in TAP before leaving military service. However, without effective monitoring of program participation, the Coast Guard cannot know to what extent its servicemembers receive the required training they need to prepare for civilian life. According to federal internal control standards, management should consider external requirements—such as the laws with which the entity is required to comply—to clearly define objectives in specific and measurable terms.
In addition, establishing goals can help agencies define expected performance and articulate results. A Coast Guard official said the Coast Guard's long-term goal is full compliance with TAP requirements, but in the interim, the Coast Guard uses DOD's 85 percent VOW compliance goal as an informal benchmark against which to gauge the Coast Guard's TAP performance. However, the Coast Guard has not communicated a specific, measurable goal to TAP staff implementing the program, or to Coast Guard commanders who oversee separating and retiring Coast Guard servicemembers, according to a Coast Guard official. Establishing and communicating a formal goal could help the Coast Guard define expected performance. The official also told us that, like DOD, the Coast Guard tracks the elements of TAP mandated under the VOW Act—transition or pre-separation counseling, VA Benefits I and II, and the DOL Employment Workshop.

Coast Guard Does Not Monitor Compliance with Additional TAP Requirements

The Coast Guard does not monitor (1) the timeliness of participation in TAP and (2) access to additional 2-day classes. A Coast Guard official said the Coast Guard does not currently monitor TAP beyond tracking whether separating servicemembers participate in the required courses, and currently lacks the capacity to undertake additional monitoring efforts. However, he said additional monitoring would be possible once the Coast Guard completed the move to the DOD TAP-IT Enterprise data system.

Timeliness of TAP Participation

According to a Coast Guard official, the Coast Guard does not currently monitor the timeliness of TAP participation, although federal law prescribes time frames for servicemembers to begin TAP participation. Generally, separating servicemembers who are not retiring are to begin TAP participation no later than 90 days before their separation date. Without a systematic method for monitoring timeliness, the Coast Guard cannot know whether its servicemembers begin the program on time or account for the timeliness of TAP participation. As a result, the Coast Guard cannot know whether its servicemembers are starting TAP early enough to complete the training they need to adequately prepare for their transition to civilian life.

Access to Additional 2-Day Classes

The Coast Guard does not track which of its servicemembers participate in the additional 2-day classes, according to a Coast Guard official we interviewed, even though federal law requires that DHS ensure those who elect to participate are able to receive the training. By not tracking which Coast Guard servicemembers participate in 2-day classes or requiring transition staff to document when servicemembers ask to attend, the Coast Guard cannot determine the extent to which servicemembers who wished to attend these courses were able to do so, as required by law.

Roles and Responsibilities Are Not Clearly Defined

Coast Guard commanders and TAP managers do not have clearly defined roles and responsibilities in implementing TAP because of the lack of an up-to-date Commandant Instruction, according to TAP staff we interviewed. As previously discussed, the Coast Guard's last Commandant Instruction on TAP was issued in 2003, approximately 8 years prior to TAP's redesign. According to federal internal control standards, to achieve the entity's objectives, management should assign responsibility and delegate authority to key roles throughout the entity.
Without an up-to-date Commandant Instruction, TAP managers and commanders may be unclear on who is ultimately responsible for ensuring servicemembers attend TAP. Moreover, two TAP managers also told us that an up-to-date Commandant Instruction might lead some commanders to place higher priority on ensuring TAP participation. Coast Guard officials said the Coast Guard was in the process of revising the TAP Commandant Instruction and anticipated issuing the new instruction in May 2018.

Coast Guard Does Not Share Participation or Performance Data with Commanders or TAP Interagency Partners, Limiting Monitoring and Evaluation

The Coast Guard cannot share data with commanders, limiting its ability to monitor TAP participation and ensure servicemembers attend the program. According to a Coast Guard official, the Coast Guard's current data collection system also cannot generate installation or unit-level participation rates to share with commanders who oversee transitioning and retiring servicemembers. Federal internal control standards state that management should share quality information throughout an organization to enable personnel to perform key roles, and we have previously reported that by regularly sharing useful performance information with leaders at multiple levels of an organization, agencies can help leaders make informed decisions. Without this information, individual unit commanders or the commanders' supervisors cannot determine whether Coast Guard servicemembers under their command completed TAP or identify whether there is a need for corrective actions to ensure they do so. As we mentioned earlier in this report, the Coast Guard plans to adopt DOD's TAP-IT Enterprise System, which, according to officials, could help the Coast Guard ensure eligible servicemembers participate in the program. According to a Coast Guard official, once the system is fully implemented by the Coast Guard, commanders will be required to verify and document whether Coast Guard servicemembers under their command completed TAP, potentially making commanders more vested in the process. We previously reported that a senior DOD official said that the TAP-IT Enterprise System may be able to generate unit- and installation-level reports for the four DOD-led military services by October 2018, and a Coast Guard official said he would work with DOD to identify whether this capability could also extend to the Coast Guard. Once data reliability improves, sharing installation and unit-level TAP performance information with Coast Guard commanders could support monitoring efforts. The performance measures tracked by the TAP interagency working group do not reflect TAP implementation broadly across all five military services, according to a Coast Guard official we interviewed. The Coast Guard does not currently share TAP data it collects with DOD or other members of the interagency performance working group. While the benefits of interagency data sharing cannot be realized without the Coast Guard first improving the quality and completeness of its TAP data, we have identified leading practices for interagency collaboration, including that members of interagency working groups identify and share relevant agency performance data. Moreover, federal internal control standards call for management to communicate quality information to external parties.
Because the Coast Guard does not share TAP data, the performance measures tracked by the interagency group do not reflect Coast Guard servicemembers' experiences and thus do not provide a complete picture of TAP implementation across the five military services. More specifically for the Coast Guard, without such data sharing, future TAP evaluations may not be able to assess the effectiveness of TAP delivery, hindering the Coast Guard's ability to make program adjustments to better prepare its servicemembers to successfully transition to life after military service. Coast Guard officials said migrating to DOD's TAP-IT Enterprise System will facilitate information sharing with interagency partners and that improving data completeness and reliability is a top priority for 2018.

Conclusions

Given the sacrifices servicemembers have made to serve their country, it is imperative they are afforded every chance to adequately prepare for civilian life before leaving military service. In order to make a successful transition, servicemembers need to be well-positioned to get a job or make an informed decision about whether to pursue additional education or start a small business. As such, the Transition Assistance Program (TAP) serves a critically important function—to give servicemembers the tools and information they need to successfully transition to life outside the military. Federal law requires that the Coast Guard ensure all eligible servicemembers participate in the program, but thousands of Coast Guard servicemembers may have transitioned without the support provided by TAP. Reliably tracking participation has proven to be a challenge for the Coast Guard, in part because it lacks a current Commandant Instruction that defines the roles and responsibilities of staff responsible for implementing TAP and ensuring complete and reliable data are collected. In preparing to issue an updated Commandant Instruction, the Coast Guard has taken a positive step toward addressing the limitations of its current TAP data, and will be better positioned to ensure compliance with VOW Act requirements using reliable data. In addition to collecting reliable data, the Coast Guard could further demonstrate its commitment to meeting TAP requirements by establishing formal performance goals that measure the extent to which Coast Guard servicemembers participate in TAP. By establishing interim performance goals, the agency would be able to show its progress toward achieving full compliance. Moreover, communicating performance goals to unit and installation commanders could enhance accountability and might spur progress toward meeting federal program requirements. By expanding its monitoring efforts beyond tracking participation in TAP's required classes, the Coast Guard could enhance its ability to ensure other TAP requirements are met and that its servicemembers are able to access additional transition resources. Monitoring the timeliness of participation would help ensure Coast Guard servicemembers have adequate time to complete TAP before leaving the military. Further, by monitoring requests to participate in additional 2-day classes and 2-day class attendance, the Coast Guard would be in a better position to identify whether servicemembers who wish to attend the classes are able to do so, to determine whether more classes are needed, and to communicate this information to the interagency partners responsible for delivering these classes. Commanders can also play a key role in bolstering TAP participation.
Having an up-to-date written Commandant Instruction that explicitly describes commanders' roles and responsibilities could enhance commanders' ability to ensure TAP's proper implementation and compliance with VOW Act requirements. Moreover, once data quality improves, providing commanders a mechanism to readily determine whether servicemembers under their command have completed TAP could help them monitor the program to ensure that all TAP-eligible servicemembers receive the resources they need to successfully transition to civilian life. Finally, once more reliable data on Coast Guard servicemember participation are available, sharing this information with interagency partners could improve TAP implementation on a broader scale. Sharing reliable data, such as participation figures for the Coast Guard, would give TAP interagency partners a more complete picture of implementation across all five military services. Sharing such information would also enhance the interagency group's ability to evaluate how well TAP serves the entire population of servicemembers. Improving the reliability of the Coast Guard's TAP data will be essential for the benefits of data sharing to be realized.

Recommendations for Executive Action

To ensure that all eligible Coast Guard servicemembers are provided the opportunity to complete the Transition Assistance Program (TAP), we recommend the Commandant of the Coast Guard take the following seven actions:

Issue an updated Commandant Instruction that establishes policies and procedures to improve the reliability and completeness of TAP data by including when and by whom data should be recorded and updated. (Recommendation 1)

Establish a formal performance goal with a measurable target for participation rates in VOW Act-mandated portions of TAP. (Recommendation 2)

Monitor the extent to which Coast Guard servicemembers participate in TAP within prescribed time frames. (Recommendation 3)

Monitor the extent to which Coast Guard servicemembers who elect to participate in additional 2-day classes are afforded the opportunity to attend. (Recommendation 4)

Issue an updated Commandant Instruction that defines the roles and responsibilities of the personnel who administer the program and ensure servicemembers' participation. (Recommendation 5)

Once reliable data are available by installation or unit, enable unit commanders and the higher-level commanders to whom they report to access TAP performance information specifically for the units they oversee so that they can monitor compliance with all TAP requirements. (Recommendation 6)

Once reliable data are available, share TAP information with DOD and other interagency partners, such as data on participation in required TAP courses and additional 2-day classes. (Recommendation 7)

Agency Comments and Our Evaluation

We provided a draft of this report to the Departments of Homeland Security, Defense, Education, Labor, and Veterans Affairs, the Office of Personnel Management, and the Small Business Administration for their review and comment. The formal written response of the Department of Homeland Security (DHS) is reproduced in appendix II. In addition, DHS provided technical comments from Coast Guard officials that we incorporated into the report as appropriate. The other agencies did not provide any comments. In its written comments, DHS agreed with all seven of our recommendations. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownbarnesc@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objective, Scope, and Methodology

Overview

This report examines (1) what is known about the reliability of Transition Assistance Program (TAP) data on participation levels and the factors that affect Coast Guard servicemembers' participation, and (2) the extent to which the Coast Guard measures TAP performance and monitors key areas of TAP implementation. To address these questions, we surveyed Coast Guard installations with full-time TAP operations; reviewed Coast Guard data on TAP participation for fiscal years 2012 to 2017; visited one Coast Guard installation and interviewed TAP managers from two additional Coast Guard installations selected for diversity in location, among other reasons; and interviewed Coast Guard officials responsible for overseeing TAP implementation for the Coast Guard. We also reviewed relevant federal laws, regulations, policies, documents, and publications. Information in this report is current as of the date GAO received formal agency comments from DHS.

Survey

Our survey of Coast Guard installations with full-time TAP operations asked about how TAP was being implemented. The survey included questions about the accessibility of TAP components, challenges Coast Guard servicemembers faced in attending the components, and the level of commander support for participation. Our survey targeted front-line TAP managers, who could draw on the expertise of TAP course facilitators, transition counselors, career counselors, and other key TAP staff as necessary. After drafting the survey questions, we pretested them with a TAP manager to ensure (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the survey did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We revised the content and format of the survey based on the feedback we received. We initially sent the survey to TAP managers at all 13 Coast Guard installations at which TAP staff were located. We removed one installation when we later found that the TAP manager position was vacant and revised the total to 12 Coast Guard installations. The survey was accessible online from October 31, 2016, through January 18, 2017, through a secure server that recipients were able to access using unique usernames and passwords. We sent an email announcement to TAP staff at all 13 Coast Guard installations at which TAP staff were located on October 24, 2016. We sent a second email on October 31, 2016, to notify participants the survey was available online, and provided their unique passwords and usernames. We sent two follow-up emails (November 14, 2016, and November 28, 2016) to those who had not responded. Finally, we contacted all remaining nonrespondents by telephone starting December 5, 2016. The survey was available online until we reached a 100 percent response rate.

Interviews with Coast Guard Installation TAP Staff and Servicemembers

To deepen our understanding of how TAP was being implemented at installations and to supplement our survey findings, we visited one Coast Guard installation and interviewed TAP managers from two additional installations.
We selected the installations based on several factors, including the size of the installation, proximity to Department of Defense (DOD) installations, and diverse locations in the United States. (See table 1.) At Coast Guard Base Elizabeth City in North Carolina, the installation we visited, we interviewed the TAP manager, uniformed career counselors, and senior installation leadership. During our interviews with TAP managers at all three installations, we asked about the extent to which Coast Guard servicemembers participate in TAP's required and additional 2-day classes, including whether the servicemembers attended classes online or in a classroom setting, challenges to ensuring Coast Guard servicemembers participate in TAP, and the extent to which they monitor Coast Guard servicemembers' participation in TAP. At Coast Guard Base Elizabeth City, we also interviewed 25 Coast Guard servicemembers (both officers and enlisted personnel) to get their perspective on how well TAP worked and any challenges they had participating. To help guide the interviews with the Coast Guard servicemembers, we asked them to complete a short questionnaire that asked about their experiences with the TAP program.

Interviews with Agency Personnel

We also interviewed TAP staff at Coast Guard headquarters to learn about TAP policy, monitoring efforts, and performance measures for the service overall. For example, we asked what policies and procedures guide installations' TAP implementation; what performance measures the Coast Guard uses to monitor TAP; how performance results are reported and shared with different levels of Coast Guard leadership; and to what extent the Coast Guard uses results from TAP participant satisfaction assessments. We also asked whether the Coast Guard plans to shift to DOD's new TAP-IT Enterprise System and how using the new system could affect its monitoring efforts in the future. In evaluating the Coast Guard's performance measures, we focused on measures related to servicemembers' transition experiences before leaving the military. We did not gather information on post-program evaluations and outcomes because they were determined to be outside the scope of this review.

Data Reliability Assessment

We reviewed DHS data on TAP participation for fiscal years 2012 to 2017. To assess the reliability of the Coast Guard's TAP participation data, we interviewed agency officials knowledgeable about the data. We determined these data were not sufficiently reliable due to limitations with the Coast Guard's data collection system. Specifically, the system lacks adequate controls to ensure TAP data are complete and accurate. We conducted this performance audit from February 2016 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Meeta Engle (Assistant Director), Amy MacDonald (Analyst-in-Charge), James Bennett, Holly Dye, David Forgosh, Ying Long, Jonathan McMurray, Jean McSween, Andrew Sherrill, Benjamin Sinoff, and Timothy Young made significant contributions to this report.
Also contributing to this report were Susan Aschoff, Jessie Battle, Ramona Burton, Melinda Cordero, Elizabeth Curda, Dawn Hoff, Ben Licht, Serena Lo, Sheila McCoy, Almeta Spencer, Christopher Schmitt, James Whitcomb, and Jill Yost.

Appendix IV: Related Products

Transitioning Veterans: DOD Needs to Improve Performance Reporting and Monitoring for the Transition Assistance Program, GAO-18-23. Washington, D.C.: November 8, 2017.

Transitioning Veterans: Improvements Needed in DOD's Performance Reporting and Monitoring of the Transition Assistance Program, GAO-18-225T. Washington, D.C.: November 8, 2017.

Department of Defense: Transition Assistance Program (TAP) for Military Personnel, GAO-16-302R. Washington, D.C.: December 17, 2015.

Veterans' Employment: Need for Further Workshops Should Be Considered before Making Decisions on Their Future, GAO-15-518. Washington, D.C.: July 16, 2015.

Military and Veteran Support: DOD and VA Programs That Address the Effects of Combat and Transition to Civilian Life, GAO-15-24. Washington, D.C.: November 7, 2014.

Veterans Affairs: Better Understanding Needed to Enhance Services to Veterans Readjusting to Civilian Life, GAO-14-676. Washington, D.C.: September 10, 2014.

Transitioning Veterans: Improved Oversight Needed to Enhance Implementation of Transition Assistance Program, GAO-14-144. Washington, D.C.: March 5, 2014.

Military and Veterans' Benefits: Enhanced Services Could Improve Transition Assistance for Reserves and National Guard, GAO-05-544. Washington, D.C.: May 20, 2005.

Military and Veterans' Benefits: Observations on the Transition Assistance Program, GAO-02-914T. Washington, D.C.: July 18, 2002.
Why GAO Did This Study

Thousands of Coast Guard servicemembers have left the military and transitioned into civilian life, and some of these new veterans may face significant challenges, such as finding and maintaining employment. To help them prepare, federal law mandated that DHS provide separating Coast Guard servicemembers with counseling, employment assistance, and information on veterans' benefits through TAP. GAO was asked to examine TAP implementation. This review analyzes (1) the reliability of TAP data on participation levels for Coast Guard servicemembers and the factors that affect participation, and (2) the Coast Guard's performance measures and monitoring efforts related to TAP. GAO interviewed Coast Guard headquarters staff; surveyed 12 Coast Guard installations that conduct TAP (100 percent response rate); collected and reviewed participation data for reliability; and interviewed TAP managers from three installations selected for size and location, and 25 Coast Guard servicemembers at one location. (For a companion report on TAP implementation for separating and retiring servicemembers in other military services, see GAO-18-23.)

What GAO Found

The United States Coast Guard (Coast Guard), which is overseen by the Department of Homeland Security (DHS), lacks complete or reliable data on participation in the Transition Assistance Program (TAP), designed to assist servicemembers returning to civilian life. According to senior Coast Guard officials, a major reason why data are not reliable is the lack of an up-to-date Commandant Instruction that specifies when to record TAP participation data. Consequently, the data are updated on an ad hoc basis and may not be timely or complete, according to officials. Federal internal control standards call for management to use quality information to achieve the entity's objectives. Until the Coast Guard issues an up-to-date Commandant Instruction that establishes policies and procedures to improve the reliability and completeness of TAP data, it will lack quality information to gauge the extent to which it is meeting TAP participation requirements in the VOW to Hire Heroes Act of 2011. According to GAO's survey of Coast Guard installations, various factors affected participation, such as servicemembers serving at geographically remote locations or separating from the Coast Guard rapidly. TAP officials and Coast Guard servicemembers GAO interviewed said commanders and direct supervisors sometimes pulled servicemembers out of TAP class or postponed participation because of mission priorities. TAP managers also said they rely on delivering TAP online because many Coast Guard servicemembers are stationed remotely.

The Coast Guard cannot effectively measure performance to ensure key TAP requirements are met because it lacks reliable data and does not monitor compliance with several TAP requirements. Further, the Coast Guard has not established a formal performance goal against which it can measure progress, although federal internal control standards stipulate that management should consider external requirements—such as the laws with which the entity is required to comply—to clearly define objectives in specific and measurable terms. Establishing a goal could help the Coast Guard define expected performance. In addition, the Coast Guard does not monitor TAP requirements regarding the timeliness of servicemembers' TAP participation or their access to additional 2-day classes.
Consequently, it cannot know whether servicemembers are starting TAP early enough to complete the program or whether those who elected to attend additional 2-day classes were able to do so before separation or retirement, as required by the Act. Finally, the Coast Guard lacks an up-to-date Commandant Instruction that establishes the roles and responsibilities of Coast Guard staff in implementing TAP. Federal internal control standards stipulate that management should assign responsibility and delegate authority to key roles throughout the entity. Issuing an up-to-date Commandant Instruction that defines roles and responsibilities would clarify who is ultimately responsible for ensuring Coast Guard servicemembers attend TAP, thereby facilitating accountability.

What GAO Recommends

GAO is making seven recommendations, including that the Coast Guard issue a new Commandant Instruction establishing data collection policies, set TAP performance goals, monitor timeliness and access, and define roles and responsibilities. DHS agreed with all of GAO's recommendations.
Background

Contracted Services Data Collection and Inventory Requirements and Process

In part to improve the information available and management of DOD's acquisition of services, Congress enacted section 2330a of title 10 of the U.S. Code in 2001, which required the Secretary of Defense to establish a data collection system to provide management information on each purchase of services by a military department or defense agency. Congress amended section 2330a in 2008 to add a requirement for the Secretary of Defense to submit an annual inventory of the activities performed pursuant to contracts for services on behalf of DOD during the preceding fiscal year. The inventory is to include a number of specific data elements for each identified activity, including:

- the function and missions performed by the contractor;
- the contracting organization, the military department or defense agency administering the contract, and the organization whose requirements are being met through contractor performance of the function;
- the funding source for the contract by appropriation and operating agency;
- the fiscal year the activity first appeared on an inventory;
- the number of contractor employees (expressed as full-time equivalents (FTEs)) for direct labor hours and associated cost data collected from contractors;
- a determination of whether the contract pursuant to which the activity is performed is a personal services contract; and
- a summary of the contracted services data required to be collected in subsection 2330a(a) of title 10 of the U.S. Code.

The secretaries of the military departments and heads of the defense agencies are required to review the contracts and activities in the inventory for which they are responsible to ensure that personal services contracts were performed appropriately and that the activities listed do not include inherently governmental functions, among other factors. In addition, in 2011 Congress amended section 2330a to add a requirement that the secretaries of the military departments and heads of the defense agencies develop a plan, including an enforcement mechanism and approval process, to:

- provide for the use of the inventory by the military department or defense agency to implement requirements of section 129a of title 10, U.S. Code (section 129a requires policies and procedures for determining the appropriate mix of military, civilian, and contractor personnel to perform DOD's mission);
- facilitate the use of the inventory for compliance with section 235 of title 10, U.S. Code (section 235 requires budget justification materials to include the amount requested for procurement of contract services and the number of full-time contractor employees projected);
- provide for appropriate consideration of the conversion of activities identified under section 2463 of title 10, U.S. Code (section 2463 requires procedures to ensure civilian employees are considered for performing critical functions); and
- ensure that the inventory is used to inform strategic workforce planning.

In section 812 of the National Defense Authorization Act for Fiscal Year 2017, enacted in December 2016, Congress further amended section 2330a by reducing the scope of the required data collection, specifying the type of contracted services to be included in an inventory summary submitted to Congress, and calling for particular attention to the military departments' review of certain high-risk contracts (see table 1). To address the requirements of section 2330a of title 10, U.S. Code, DOD is to conduct several key steps for each fiscal year (see table 2).
DOD has submitted to Congress annual, department-wide inventories for fiscal years 2008 through 2015. As shown in table 2, each inventory is required to be submitted to Congress by June 30, and is to reflect activities performed during the preceding fiscal year. DOD has not always submitted the inventory to Congress on time. For example, DOD was required to submit the fiscal year 2015 inventory to Congress on June 30, 2016, but did not do so until September 20, 2016. For the inventory of fiscal year 2016 contracted services, the department submitted its summary of the inventory to Congress in February 2018.

Prior GAO Work

Over the past 8 years, we have issued several reports on DOD's efforts to compile and review its inventory of contracted services. We have made 18 recommendations, 7 of which are still open, on a variety of issues related to the inventory. Key findings and recommendations in our prior work that pertain to this review are included below.

In November 2014, we found the military departments generally had not developed plans to use the inventory to facilitate DOD's workforce planning, workforce mix, and budget decision-making processes, and that numerous offices were responsible for the various decision-making processes at the military departments. This, in turn, left the department at risk of not complying with legislative requirements. We recommended that the secretaries of the military departments identify an accountable official within their departments with responsibility for leading and coordinating efforts across their manpower, budgeting, and acquisition functional communities, and, as appropriate, revise guidance, develop plans and enforcement mechanisms, and establish processes. DOD concurred with the recommendation, but as of January 2018, the Army and Navy still had not identified accountable officials. The Air Force has identified an interim accountable official in its Program Executive Office for Combat and Mission Support, according to an Air Force official.

In November 2015, we found that DOD's effort to establish an office to implement and support a common, enterprise-wide contractor manpower data system had encountered a number of challenges and lacked clearly defined roles and responsibilities for the office. DOD had not outlined the relationships between the support office, military departments, and other stakeholders in exploring the longer-term solution to collect contractor manpower data and integrate inventory data within the military departments' decision-making processes. We recommended DOD clearly identify the longer-term relationships between the support office, military departments, and other stakeholders. DOD concurred and has since stood up the support office (now called the Total Force Management Support Division) and implemented the Enterprise-wide Contractor Manpower Reporting Application (ECMRA) department-wide. However, DOD has not yet fully identified longer-term relationships. By doing so, DOD would help ensure that efforts to integrate contracted services data into decision-making processes will meet user needs and expectations.

Most recently, in October 2016, we found that DOD components (which include the military departments) continued to improve their reviews of the inventory compared to prior years, but that they may continue to underreport contractors providing services that are closely associated with inherently governmental functions.
Specifically, our analysis found that in fiscal year 2014 DOD obligated about $28 billion for contracts in the product service codes that the Office of Federal Procurement Policy and GAO identified as more likely to include work closely associated with inherently governmental functions. In comparison, the components identified a total of $10.8 billion in obligations or dollars invoiced for contracts that included such work. We also found that the military departments had not yet developed plans to use the inventory to inform workforce mix, strategic workforce planning, and budget decision-making. We did not make new recommendations in that report.

DOD Collected Data for the Inventory of Fiscal Year 2016 Contracted Services Using the Same Sources as in Prior Years

To facilitate DOD's submission of an inventory summary to Congress, OSD's inventory guidance required each military department to submit to the offices of the USD(AT&L) and USD(P&R) a list of all services provided under contract consistent with the guidance and within the scope of section 2330a of title 10, U.S. Code, as amended by section 812 of the fiscal year 2017 NDAA. The military departments collected data for the fiscal year 2016 inventory using the same data sources—FPDS-NG and ECMRA—as they had in prior years, though each department used slightly different processes from one another. OSD's inventory guidance provided for flexibility in how the military departments compiled and submitted data. For example, the guidance required that the inventory submissions include, at a minimum, all purchases of services with a total contract value of $3 million or more and in the following service acquisition portfolio groups: logistics management services; equipment-related services; knowledge-based services; and electronics and communications services. It did not, however, preclude the military departments from submitting additional information beyond the minimum threshold. In addition, under the guidance, military departments were encouraged to augment FPDS-NG data with data from ECMRA, as has been the process in the past. We analyzed the effect of the recent statutory changes, as implemented in OSD's inventory guidance, on fiscal year 2016 contracted services data reported in FPDS-NG and compiled by USD(AT&L). We found that the number of service purchases reported under the inventories across the department would be reduced to about 2 percent of the total service purchases if the components reported only the minimum information required under OSD's guidance. This approach would capture about 30 percent of the total service contract dollars.

Officials responsible for overseeing the data collection effort within each of the three military departments stated that for fiscal year 2016 they collected data captured in FPDS-NG and ECMRA, as they have done for previous inventories. The military departments varied somewhat in how they collected and reported their data, which is permitted under OSD's guidance. The following is a description of the military departments' processes for collecting data and key aspects of their inventories:

Army officials stated that they extracted their inventory data for fiscal year 2016 primarily from ECMRA and used FPDS-NG data to fill gaps in data not collected in ECMRA, such as data on aspects of contract competition (e.g., number of offers and small business considerations).
Army officials estimated that the total invoices in ECMRA represented approximately 80 percent of contracted services obligations for fiscal year 2016. In its inventory, submitted to OSD in January 2018, the Army reported services purchased under contract actions with fiscal year 2016 invoiced amounts both above and below $3 million. The Army reported that its fiscal year 2016 inventory accounts for $31 billion in invoiced amounts and 157,000 contractor FTEs.

Navy officials stated that they captured nearly all of their inventory data for fiscal year 2016 from FPDS-NG and combined it with ECMRA data. Navy officials estimated that approximately 75 percent of the Navy services contracts that it believed should have been reported in ECMRA were reported during fiscal year 2016. The Navy submitted summary data, including fiscal year 2016 obligations and contractor FTEs by command and in total, to OSD in December 2017. The Navy did not provide a list of its fiscal year 2016 service purchases in time to be included in the inventory summary for Congress, but a USD(AT&L) official said the information provided was sufficient to allow OSD to prepare the summary. The Navy subsequently submitted its full inventory of fiscal year 2016 contracted services to OSD in March 2018 and reported over $6.5 billion in obligations and over 45,000 contractor FTEs.

Air Force officials stated that they drew approximately 75 percent of the data elements required for the inventory for fiscal year 2016 from FPDS-NG. Air Force officials stated that they also extracted data from the Air Force financial management system, such as total contracted dollar amounts, and manpower data from ECMRA. Air Force officials did not have an estimate of the percentage of service contracts that were reported in ECMRA in fiscal year 2016. The Air Force submitted its inventory to OSD in December 2017 and included services purchased under contract actions with fiscal year 2016 invoiced amounts or obligations both above and below $3 million. In addition, the Air Force specifically identified purchases within each of the four service acquisition portfolio groups specified in OSD's inventory guidance. The Air Force reported approximately $14.6 billion in obligations with an estimated 73,400 contractor FTEs in its fiscal year 2016 inventory.

A USD(AT&L) official stated that he used the information provided by the military departments and defense components to help create the inventory summary required by section 812 of the fiscal year 2017 NDAA. OSD submitted this inventory summary to Congress in February 2018. This official added that OSD will discuss whether changes in its guidance for the next inventory are needed to clarify what information the military departments and defense components should submit.

Military Departments Have Not Developed Statutorily Required Plans and Continue to Make Limited Use of the Inventory to Inform Management Decisions

The military departments generally have not developed plans to use the inventory to inform management decisions as required by subsection 2330a(e) of title 10 of the U.S. Code and OSD's inventory guidance. Further, manpower and budget officials said they make limited use of the inventory to inform strategic workforce planning, workforce mix, and budget decisions. This situation is similar to what we have found in our past work.
Manpower and budget officials we spoke with stated the inventory is often too outdated to inform their decision-making, though the inventory provides a single source of certain types of information that are not readily available elsewhere. This limited use may also reflect, in part, the lack of accountable officials responsible for developing plans and enforcement mechanisms to use the inventory, as we recommended in November 2014.

Military Departments Generally Have Not Developed Plans to Use the Inventory for Decision-Making

Subsection 2330a(e) of title 10 of the U.S. Code, DOD Instruction 5000.74, and OSD's inventory guidance direct the military departments and defense agencies to use the inventory to inform workforce and budget decisions. When we last reported on this issue in October 2016, we identified 12 guidance documents from the military departments related to strategic workforce planning, workforce mix, and budget decisions. Our current work found that 14 documents, some of which are the same as what we reported in October 2016, make up the current set of military departments' guidance in these areas. Further, we found the degree to which these guidance documents require the use of the inventory in these areas is still minimal—3 of the 14 documents include requirements related to the inventory (see table 3). Two documents, the Army's July 2009 memorandum on civilian workforce management and the Army's March 2010 concept plan guidance, require the use of the inventory for insourcing plans to convert contracted activities to performance by government personnel. Air Force Instruction 38-201 on management of manpower requirements directs the Air Force manpower division to support the review of the inventory, but does not require its use for workforce mix decisions.

As noted previously, in November 2014 we found that no single office or individual at the military departments was responsible for leading or coordinating efforts between the various functional areas to develop a plan to use the inventory to inform management decisions. As a result, we recommended that the secretaries of the military departments identify accountable officials to do so. As of January 2018, the Army and Navy still had not named accountable officials responsible for developing plans and enforcement mechanisms to use the inventory for workforce and budget decisions, according to officials at those departments. Navy officials said they have not reached agreement on the appropriate managerial level of an accountable official. According to an Air Force official, the Air Force has named an official from the Program Executive Office for Combat and Mission Support to serve on an interim basis. We continue to believe this recommendation is valid and should be fully implemented.

Military Departments Make Limited Use of the Inventory for Decision-Making

Army manpower officials we interviewed stated that the inventory provided information that was not readily available elsewhere and the information collected in the inventory process may be useful for making workforce mix decisions. For example, Army manpower officials said the inventory provides a single source for information like the number of contractor FTEs, contractor labor hours and costs, the location of work performance, and the functions performed. Army officials said they can use this information to analyze cost factors and contract expenditures and compare them to in-house costs.
In addition, Army officials noted the inventory provides information to address questions from Congress, DOD, and Army leadership about the number and cost of contractors, and that it is the only source of detailed data that supports the statutorily required analysis of the contractor workforce mix. Comptroller, Navy, and Air Force officials added that they use information from the inventory to estimate the average number of contractor FTEs that are reported in DOD's annual budget request. However, representatives from the workforce and budgeting offices within the military departments we interviewed also noted that the inventory has limitations that hinder its use. These officials noted that the data reflected in the inventory are often too outdated to help inform strategic decisions that are usually made at the local level—such as a specific military installation—based on real-time data. For example, Air Force officials said that under the program objective memorandum (POM) process, the Air Force identifies future budget requests and workforce needs 2 years before the beginning of a fiscal year, whereas the most recent inventory data available may already be 2 years old when that process starts. To illustrate the issue, the officials noted that they were already planning for the 2020 POM in early fiscal year 2018, although the fiscal year 2016 inventory was not yet available. As a result, if the Air Force were to use inventory data to plan for the 2020 POM, they would have to rely on fiscal year 2015 inventory data. Air Force officials also said certain types of information that are useful for strategic planning, such as planned contracts for services and the scope and duration of the existing contracts, are not captured in the inventory process. Army officials had a similar perspective and said they do not use the inventory to plan for the POM because collecting data on past contracted services is not as relevant to estimating future requirements and funding needs.

Congress enacted the inventory legislation in part to inform DOD's management of its acquisition of contracted services. We concluded in January 2011 that the real benefit of the inventory process would ultimately be measured by its ability to inform management's decision-making. As noted above, we have made recommendations to help improve this decision-making, which we continue to believe should be fully implemented. DOD officials have also identified ways in which the inventory can be useful. Recent legislation and our prior work in other related areas have identified additional means through which DOD can manage its acquisitions of contracted services.

In December 2017, the National Defense Authorization Act for Fiscal Year 2018 was enacted. Section 851 requires DOD to regularly analyze past spending patterns and anticipated future requirements for its procurement of services and use these analyses to inform decisions on the award of and funding for such service contracts. In August 2017, we found DOD had not fully implemented three key leadership positions that were intended to enable DOD to more strategically manage service acquisitions. We recommended the USD(AT&L) reassess the roles, responsibilities, authorities, and organizational placement of key leadership positions to help foster strategic decision-making and improvements in the acquisition of services. DOD concurred with our recommendation.
In December 2017, the Deputy Secretary of Defense appointed a reform leader for service contracts and category management—an approach intended to manage entire categories of spending across government for commonly purchased goods and services—and established related reform teams to help ensure department-wide efficiency in contract spending.

In February 2016, we found that DOD's and Congress's insight into future spending on contracted services was limited because DOD did not identify service contract spending needs beyond the current budget year. Although program offices generally kept track of their future service contract needs and estimated costs for 5 years out, they were not required to identify planned service contract spending beyond the budget year. We recommended that the military departments revise their programming guidance to collect information on how contracted services will be used to meet requirements beyond the budget year. DOD partially concurred with our recommendation, but noted that the volatility of requirements and each budget cycle constrain the department's ability to accurately quantify service contract requirements beyond the budget year. We agreed that requirements and budgets change over time, but our work showed that the needed data already exist but are not captured in a way that informs senior leadership on future service contract spending. We continue to believe that implementing this recommendation will assist the department in gaining better insight into contracted service requirements and enable more strategic decisions about the volume and type of services it plans to acquire.

Agency Comments

We are not making new recommendations in this report. We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix I, DOD stated that it remains committed to improving its processes for collecting, analyzing, and reporting contracted services data. DOD also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Under Secretary of Defense for Personnel and Readiness; the Under Secretary of Defense for Acquisition and Sustainment; and the Under Secretary of Defense (Comptroller). In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Defense

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Katherine Trimble (Assistant Director); Brenna Derritt (Analyst-in-Charge); Pete Anderson; Dennis Antonio; Vincent Balloon; Lorraine Ettaro; Gina Flacco; Kristine Hassinger; and Julia Kennon made significant contributions to this review.
Why GAO Did This Study

DOD obligated about $150 billion on contracted services—such as information technology support and maintenance of defense facilities—in fiscal year 2016. DOD has faced long-standing challenges in effectively managing its service acquisitions. The National Defense Authorization Act for Fiscal Year 2017 amended existing requirements for DOD to annually collect data on contracted services and to compile and review an inventory of the functions performed by contractor personnel. The Act also contained a provision for GAO to report on the status of this data collection and to assess DOD's use of the inventory. This report addresses how DOD (1) collected data to create an inventory of fiscal year 2016 contracted services and (2) used the inventory to inform workforce planning, workforce mix, and budget decisions. GAO has reported on DOD's inventory of contracted services since 2010. GAO reviewed OSD and the military departments' guidance, as well as the military departments' inventory submissions to OSD. GAO also analyzed contracted services data and interviewed OSD and military department officials.

What GAO Found

GAO found that the Department of Defense (DOD) used the same sources as it did in prior years to collect data and create an inventory of fiscal year 2016 contracted services, which is intended, in part, to help DOD make more strategic workforce decisions and better align resources. Office of the Secretary of Defense (OSD) guidance, issued in September 2017 to implement congressional direction, required the military departments to include in their submissions, at a minimum, purchases of services with a total contract value of $3 million or more, and in four service acquisition portfolio groups—logistics management, equipment-related, knowledge-based, and electronics and communications. As permitted under OSD's inventory guidance, the military departments varied somewhat in how they reported their contracted services data to OSD. For example, the Army and Air Force included purchases both over and under $3 million, and the Air Force also identified purchases by the four portfolio groups. The Navy submitted summary data of contracted services but did not provide a list of purchases in time to be included in an inventory summary for Congress. An OSD official said, however, that the information provided was sufficient to prepare the inventory summary, which OSD submitted to Congress in February 2018. The Navy subsequently provided a list of its fiscal year 2016 service purchases to OSD in March 2018.

Military departments generally have not developed plans to use the inventory for workforce and budget decisions, as statutorily required. This is consistent with what GAO found in November 2014 and October 2016. GAO's analysis found that the military departments' guidance generally does not require using the inventory in workforce and budget decisions (see table). Army manpower officials told GAO that inventory information such as the number of contractor full-time equivalents and the functions performed can be used to inform workforce mix decisions. However, workforce and budget officials at the Army, Navy, and Air Force stated they make limited use of the inventory to inform decision-making, in part because by the time the inventory is available, the data reflected are often too outdated to inform strategic decisions. GAO has previously recommended ways to improve use of the inventory.
In November 2014, for example, GAO found that a lack of officials at the military departments who are accountable for integrating the use of the inventory leaves the department at continued risk of not complying with the legislative requirement to use the inventory to support management decisions. This issue persists, as the military departments have not made final designations for accountable officials responsible for developing plans and enforcement mechanisms to use the inventory.

What GAO Recommends

GAO is not making new recommendations in this report. Seven of 18 prior GAO recommendations related to the inventory remain open, including a recommendation for DOD to identify officials at the military departments responsible for developing plans and enforcement mechanisms to use the inventory. In its comments, DOD stated it is committed to improving its inventory processes.
Background

To participate in federal student aid programs, postsecondary schools must be (1) certified by Education as eligible to participate in federal student aid programs, (2) accredited by a recognized accrediting agency—generally nongovernmental, nonprofit entities—and (3) authorized by the state in which the school is physically located. (See table 1.) FSA is responsible for ensuring that schools with access to federal student aid are eligible and capable of properly administering federal student aid funds, according to standards established by Education and authorized by the Higher Education Act. These standards include requirements for schools related to communication, personnel, policies, procedures and reporting, and adequate checks and balances in a system of internal controls, among others. FSA is also responsible for conducting ongoing financial oversight of schools that receive federal student aid. This includes reviewing annual financial statement audits to assess a school's financial responsibility and providing additional oversight to schools that do not meet financial responsibility standards outlined in the Higher Education Act.

Schools that participate in federal student aid programs generally are required to submit annual compliance audits. The compliance audit provides information that FSA can use to assess the school's administration of federal student aid programs and to identify schools that require additional oversight because they do not fully comply with federal student aid administrative requirements. The OIG is required to assess the quality of school compliance audits and selects a sample to review each year. The OIG reviews the audit documentation to ensure that it supports the auditor's opinions and that the audit results are reliable. According to agency guidance, FSA staff should refer compliance audits to the OIG for a quality review if they have any concerns about the quality of the audits. Both FSA and OIG officials stated that the OIG has primary responsibility for issues related to audit quality.

General Certification Process

When a school first applies to be certified to administer federal student aid, FSA will either approve the school for provisional certification—generally for 1 year—or deny certification (see fig. 1). Once a school is approved for initial certification and applies for recertification, FSA will provisionally or fully recertify the school, or deny certification. According to FSA procedures, FSA uses provisional certification for initial, or first time, applicants, as well as schools that are applying for recertification. Provisional certification is the only approval status available to new schools. In addition, FSA may decide to recertify a school provisionally if it determines that a school has not fully complied with federal student aid requirements. FSA prohibits provisionally certified schools from opening new campus locations or offering new programs without approval from FSA, and provisionally certified schools that are denied recertification have a less substantive appeals process than fully certified schools. Further, recertified schools in provisional status are subject to more FSA oversight than schools that are fully certified. FSA procedures allow for some discretion in determining for how long to certify a school. Provisional recertification generally lasts 1 to 3 years, while full recertification generally lasts 4 to 6 years.
Education Evaluates a Variety of Information during the Certification Process and Approves Most Schools

Education Reviews Information from Multiple Sources to Assess a School's Capability to Administer Federal Student Aid

Education's FSA regional staff draw information from a variety of sources during the certification process to assess a school's capability to administer federal student aid. According to FSA documents, regional staff are to review information collected from schools and third parties, such as annual compliance audits conducted by independent auditors, among other information sources. FSA staff responsible for different functional areas, such as financial and compliance audits, accreditation status, and student loan default rates, compile and review information on schools, according to FSA procedures. FSA officials told us that these staff meet to discuss any potential program eligibility issues and to ensure that all information relevant to a school is considered before making a certification decision. FSA's certification procedures outline some of the key information that regional staff should assess, some of which is relevant to both initial and recertification decisions, and some of which is specific to each type of certification process (see fig. 2).

Documents and policies provided by schools: FSA regional staff are directed to review documents submitted by schools, including school catalogs, and certain school policies—such as admissions and student refund policies—that are relevant to assessing administrative capability.

Proof of accreditation: School accreditors are responsible for applying and enforcing standards to help ensure that the education offered by schools is of sufficient quality to achieve program objectives. Accreditation of schools, which generally includes a site visit, takes place on a cycle that may range from every few years to as many as 10 years.

Proof of state authorization: States are responsible for authorizing schools to offer postsecondary education and respond to student complaints. The process for approving schools varies from state to state and may include on-site visits.

Audited financial statements: FSA regional staff are directed to review information in audited financial statements to assess schools' financial health. Schools are required to have annual audited financial statements issued by an independent certified public accountant or a government auditor.

Key Information Required for Initial Certification

Self-reported school data: FSA regional staff are instructed to review data on continual student enrollment in eligible academic programs and student withdrawal rates.

Pre-certification review and school outreach: FSA staff are responsible for contacting school personnel to verify the school's application information and discuss relevant policies, procedures, and other materials relevant to administering federal student aid.

FSA visits to newly certified schools: After schools first apply and are provisionally certified, Education requires FSA regional staff to contact them within 3 months and schedule an on-site school visit. Schools cannot administer federal student aid until they are certified, so FSA has limited information on how newly certified schools are administering federal student aid programs. School visits provide FSA with an opportunity to collect additional information about a provisionally certified school's ability to administer federal student aid.
Some FSA regional staff we interviewed told us that on-site visits to newly certified schools provide valuable first-hand information about whether these schools are administering federal student aid in accordance with program requirements. If FSA regional staff find that a school is having difficulties administering federal student aid, FSA procedures direct regional staff to assist schools by providing clarification and guidance on federal student aid policies, recommending additional training for school officials, and helping schools develop a plan to track and report on their corrective actions, among other things.

Key Information Required for Recertification

Compliance audits: FSA staff are directed to review information in compliance audits to determine if schools are complying with specific federal student aid requirements. Generally, compliance audits are required to be conducted annually by an independent auditor, and submitted with the school's audited financial statements.

Program reviews: FSA regional staff are also responsible for conducting program reviews, usually on site, which evaluate school compliance with federal requirements and can provide more in-depth information on schools than compliance audits, according to some FSA staff we interviewed. Generally, FSA selects schools for program reviews that it considers to be at risk for noncompliance, according to Education documents. FSA conducts approximately 250 to 300 program reviews per year, according to FSA documentation. FSA staff from all four of our selected regional offices told us they consider results from any recent program review in decisions about recertification and noted that such information, when available, is valuable for assessing schools' administrative capability.

Education data: FSA regional office staff are also directed to review data on student loan default rates.

Most Schools Are Provisionally or Fully Certified to Receive Federal Student Aid

From calendar years 2006 through 2017, FSA approved most schools applying for certification to receive federal student aid, according to Education data.

Initial Certification Applications

From 2006 through 2017, FSA approved 89 percent of schools new to administering federal student aid for provisional certification and denied 11 percent of schools overall (see fig. 3). Denial rates for initial certification were 11 percent for public and for-profit schools and 14 percent for nonprofit schools. For more information on 2006-2017 school certification outcomes by year, see appendix I. FSA regional staff responsible for reviewing school applications told us that schools are denied initial certification for issues such as a lack of accreditation, not offering eligible programs for federal student aid, or not meeting other statutory eligibility requirements. For example, FSA staff said that for-profit and vocational schools that apply for initial certification are required to provide an eligible program continuously for 2 years prior to their initial application. FSA staff may also advise schools that do not meet basic eligibility requirements not to apply, which could result in fewer initial certification denials overall. In addition, FSA staff said they often work with schools to address compliance problems, for example, by providing guidance on revising school policies that do not meet requirements, so that the schools are able to meet FSA's certification requirements.
Recertification Applications

From 2006 through 2017, 76 percent of schools applying for recertification were fully recertified, 21 percent were provisionally recertified, and 3 percent were denied recertification. Sixty-six percent of for-profit schools were fully recertified, 28 percent were provisionally recertified, and 6 percent were denied. In comparison, 86 percent of public schools were fully recertified, 14 percent were provisionally recertified, and fewer than 1 percent were denied. Nonprofit schools had rates similar to public schools, with 80 percent fully recertified, 18 percent provisionally recertified, and 2 percent denied (see fig. 4). FSA staff from all four of our selected regional offices told us that they typically deny recertification when a school no longer meets eligibility requirements, such as losing accreditation, or when there is significant evidence of serious issues or massive wrongdoing, such as fraud. For example, managers in one regional office told us they denied recertification for a school because they had evidence that the school was accepting students without valid high school diplomas and referring them to diploma mills to boost enrollment. Staff in two FSA regional offices told us that they can also choose to fully recertify a school for shorter periods of time if they uncover issues related to administrative capability. For example, one regional staff member told us that when they found a school's default rate for one federal student loan program had been high for the prior 3 years, the regional office decided to shorten the school's full recertification period from 6 to 4 years, to allow FSA staff to review the school again sooner.

Reasons for Provisional Certification

FSA staff from all four of our selected regional offices told us that they provisionally certify schools for a variety of reasons, including when a school submits a late compliance audit or when a recent compliance audit indicates that a school could potentially have significant problems. Generally, schools in provisional certification status are subject to additional monitoring by FSA compared to schools that have been fully certified. For example, Education officials said that if they have concerns about a provisionally certified school's student withdrawal rate, they can add provisional conditions requiring the school to submit monthly enrollment rosters for review. Staff in two FSA regional offices told us that in other cases, if they have concerns about how a school is administering federal student aid or suspected fraud, they can put a school on provisional status and conduct a program review to collect more detailed information on compliance with federal requirements. Education data also show that most schools remain in provisional status the first time they are recertified—62 percent from 2006 to 2017. In contrast, FSA staff fully recertified over three-quarters of schools that applied for recertification a second time during the same time period (see table 2). For more information on first and second recertification outcomes by school sector, see appendix II.

Compliance Audits Are Key to Certification Process and Education Has Taken Steps to Address Audit Quality

We found that FSA generally relies on compliance audits as the only annual on-site review to determine how schools applying for recertification administer federal student aid.
The audits provide direct information collected by independent auditors from school visits and file reviews examining how schools administer federal student aid and comply with program requirements. For example, OIG audit guidance directs auditors to check whether schools are distributing federal student aid to eligible students and accurately calculating student loan amounts. FSA officials and staff from all four of our selected regional offices said that compliance audits are a key source of information they use to assess a school's administrative capability.

Officials from Education's OIG said that the quality of information in compliance audits varies substantially and depends on the auditor. The OIG has found quality problems in some of the compliance audits it selects—based on auditor and school risk factors—for its annual quality control reviews. Because the OIG selects higher risk audits to review, its reviews are more likely to detect problems, and OIG officials said they cannot make any conclusions about the overall prevalence of quality problems in compliance audits. However, our analysis of OIG quality review data found that of the 739 compliance audits reviewed by the OIG from fiscal years 2006 through 2017, the OIG passed 23 percent (173) and failed 59 percent (436). An additional 18 percent (130) passed with deficiencies. For example, across the 41 compliance audits it reviewed in fiscal year 2016, the OIG identified 264 quality deficiencies with the auditor's work, according to our analysis of quality reviews provided by the OIG. The most frequently cited issues in these 41 audits were:

- reporting (24 audits), such as lack of evidence that the auditor tested whether the school correctly reported student enrollment status;
- student eligibility (20 audits), such as lack of evidence that the auditor verified student school attendance; and
- administrative capability (19 audits), such as lack of evidence that the auditor determined whether the accreditor had been notified about a change in school ownership within 10 days.

FSA officials also identified quality issues with the compliance audits of some schools. FSA headquarters officials and staff we interviewed in several regional offices said they have seen schools with significant program review findings that had not been identified in annual compliance audits. FSA staff said they have referred some compliance audits to the OIG for quality reviews when they have had questions about the thoroughness of an audit. We also found a couple of examples in our review of school certification documents in which the findings identified in a school's compliance audit were different from the findings identified by FSA in a program review of the same school covering the same time period. In one case, FSA staff said they probably would have fully recertified the school if they had relied solely on the compliance audit. Instead, they used the program review to determine that the school should be provisionally recertified. Compliance audits and program review findings are based on a sample of student records, and FSA staff said some differences in findings might be explained by differences in the records reviewed.

FSA and OIG officials cited several issues that can affect the quality of compliance audits. FSA and OIG officials we interviewed said that some auditors conducting compliance audits have insufficient training in federal student aid, which contributes to audit quality problems.
OIG staff also said that even if an auditor meets the general training hour requirements for auditors, the training content may not be relevant for federal student aid audits. In addition, FSA and OIG officials said some schools—particularly smaller schools—tend to hire less experienced auditors in order to save money, often resulting in poor quality audits. FSA officials in most selected regional offices said that additional training on federal student aid for auditors who are new to or unfamiliar with federal student aid could help improve audit quality.

FSA and the OIG recently have taken steps to address audit quality and the information available to FSA staff when making certification decisions. These efforts include:

Training for auditors: The OIG has taken steps to enhance training offered to auditors of schools' administration of federal student aid and is exploring opportunities to provide additional training. In December 2017, the OIG and the American Institute of Certified Public Accountants cosponsored training for auditors on the OIG's 2016 revised guide for audits of for-profit schools, and other topics related to auditing federal student aid. The training included discussion of common audit quality issues and areas of highest risk. According to an OIG official, about 200 auditors attended, and after the event, the American Institute of Certified Public Accountants and the OIG posted a recording of the training to their websites to make it available to additional auditors. In addition, OIG officials said they maintain an email account—listed on the OIG website—through which auditors can ask questions and receive responses. In March 2018, the OIG posted frequently asked questions and answers to the website.

Timeliness of OIG quality reviews: Both FSA and OIG officials said that the OIG has recently renewed efforts to issue compliance audit quality reviews more quickly, after several years in which staffing shortages and other issues led to some delayed quality reviews.

Guidance to schools on selecting an auditor: OIG officials said that at the 2017 FSA training conference for school financial aid staff, they presented to more than 400 participants about factors schools should consider when hiring an auditor. For example, they suggested that schools verify the licenses of certified public accountants, ask about the types of engagements an auditing firm has conducted, request and check references, check for any actions that may have been taken against a firm, and ask whether the auditor has been subject to a previous review by the OIG or another agency. FSA officials said they expected to invite the OIG to present at future FSA conferences, and OIG officials said they were seeking additional opportunities to share information on auditor selection with schools, including a planned presentation to an association of postsecondary schools.

FSA working group: FSA recently established a working group to update its guidance to FSA staff on how to coordinate with the OIG to address compliance audits with quality problems. Among other topics, the working group has consulted with the OIG about how schools are made aware of the OIG's findings regarding the quality of their audits. FSA officials said that OIG officials have provided input and feedback on FSA's proposed changes to the guidance.
Audit guide revisions: In addition, OIG and FSA staff told us they expected the OIG's 2016 revisions to the for-profit school audit guide to improve the quality of compliance audits for those schools. They said that because the revised guide clarified some issues that were confusing to auditors in the previous guide issued in 2000, auditors might be better able to implement the guidance. The audit guide revisions include more testing and reporting requirements, clarified procedures, and guidance on issues such as fraud reporting and coordinating financial and compliance audits. The 2016 revisions first applied to audits for fiscal years beginning after June 30, 2016, and FSA began receiving those audits at the end of 2017. In addition, although the OIG's 2016 revisions only apply to audits of for-profit schools, FSA officials said they planned to establish a working group to consider improvements to audit guidance for public and nonprofit schools.

FSA and OIG efforts to address audit quality could help ensure that compliance audits provide accurate and reliable information on school administrative capability for Education's recertification decisions.

Agency Comments and Our Evaluation

We provided a draft of this report to Education for review and comment. Education's Office of Inspector General provided technical comments, which we considered and incorporated as appropriate. Education did not provide other comments on the report.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Education; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: School Certification Outcomes, Calendar Years 2006-2017

Appendix II: Distribution of First and Second Recertification Outcomes by School Sector, Calendar Years 2006-2017

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Michelle St. Pierre (Assistant Director), Kristy Kennedy (Analyst-in-Charge), Edward Bodine, Marissa Jones, and Mark Ward made significant contributions. Also contributing to this report were Susan Aschoff, Deborah Bland, Nagla'a El-Hodiri, Monika Gomez, Sheila R. McCoy, Jessica Orr, Mimi Nguyen, John Mingus, Rhiannon Patterson, Monica Savoy, Benjamin Sinoff, and Rosemary Torres Lerma.
Why GAO Did This Study

Education provided over $122 billion in grants, loans, and work-study funds to help students pay for college at about 6,000 schools in fiscal year 2017. Education is responsible for certifying that these schools are eligible for and capable of properly administering federal student aid funds. Schools are required to submit an annual compliance audit that provides information on schools' administrative capability, which Education considers in its school certification decisions. GAO was asked to review Education's process for certifying schools to receive federal student aid. This report examines (1) how Education certifies schools to administer federal student aid and how frequently schools are approved and denied certification; and (2) the role of compliance audits in the certification process and what, if any, steps Education has taken to address the quality of the audit information. GAO analyzed data on school certification outcomes for calendar years 2006-2017 (when GAO determined data were most reliable); reviewed data and reports summarizing Education's reviews of compliance audit quality for fiscal years 2006-2017; reviewed a non-generalizable sample of 21 school certification decisions from fiscal years 2015 and 2016, selected for a mix of decisions, school characteristics, and geographic regions; examined relevant federal laws, regulations, policy manuals and guidance; and interviewed Education officials.

What GAO Found

The Department of Education (Education) is responsible for evaluating a variety of information to determine whether a postsecondary school should be certified to administer federal student aid programs, and agency data show that it approves most schools that apply. Education procedures instruct regional office staff to review school policies, financial statements, and compliance audits prepared by independent auditors, among other things. Education can certify schools to participate in federal student aid programs for up to 6 years, or it can provisionally certify them for less time if it determines that increased oversight is needed—for example, when a school applies for certification for the first time or when it has met some but not all requirements to be fully certified. In calendar years 2006 through 2017, Education fully or provisionally approved most schools applying for initial or recertification to receive federal student aid (see figure).

Note: Schools applying for certification for the first time that are approved are placed in provisional certification.

In deciding whether to certify schools, Education particularly relies on compliance audits for direct information about how well schools are administering federal student aid, and Education's offices of Federal Student Aid and Inspector General have taken steps to address audit quality. The Inspector General annually selects a sample of compliance audits for quality reviews based on risk factors, such as auditors previously cited for errors. In fiscal years 2006 through 2017, 59 percent of the 739 selected audits received failing scores. Audits that fail must be corrected; if not, the school generally must repay federal student aid covered by the audit. Because higher risk audits are selected for review, Inspector General officials said they cannot assess the overall prevalence of quality problems in compliance audits. These two Education offices have taken steps to improve audit quality.
For example, the Inspector General offered additional training to auditors on its revised 2016 audit guide and provided guidance to schools on hiring an auditor, while Federal Student Aid created a working group to strengthen its procedures for addressing poor quality compliance audits. Education's efforts to address audit quality could help ensure that these audits provide reliable information for school certification decisions.

What GAO Recommends

GAO is not making recommendations in this report.
Background

Social Security Disability Insurance (SSDI)

In recent decades, economic and demographic factors have contributed to an increase in the number of SSDI beneficiaries and increased program costs, which has reduced the size of the Disability Insurance Trust Fund reserves from a peak of $215.8 billion in 2008 to $46.3 billion in 2016. Over the past 26 years, the total number of SSDI beneficiaries more than doubled, from 4.2 million in calendar year 1990 to nearly 11 million in calendar year 2016. By contrast, during that same time, the number of workers covered by SSDI increased by less than a third—from 133 million to 171 million. In calendar year 2016, around 8.8 million workers with disabilities and 1.8 million dependents (spouses and children) received SSDI payments, totaling $142.7 billion, of which $133.6 billion was paid to the workers and $9.1 billion to their dependents.

Private Disability Insurance (PDI)

In most cases, long-term employer-sponsored private disability insurance (PDI) is paid for by the employer and provided as part of a package of benefits for employees, although sometimes employees are required to pay some or all of the PDI premium. According to one industry survey (the most recent available), in 2013, 19 PDI companies covering about 75 percent of PDI market policies provided PDI benefits to around 653,000 individuals, with annual payments totaling around $9.8 billion. Another industry survey estimated that the five largest insurers in the PDI market held about half of the market share of premiums paid. PDI policies can be offered either on an opt-in basis, in which employees who choose to pay for PDI receive it, or on an opt-out basis, in which employees are automatically enrolled in PDI but can decline (opt out of) the insurance.

Other Disability Protections for Workers

Beyond the insurance market for PDI, workers may be eligible for other types of disability protection through their employment. For example, according to BLS, 38 percent of workers have employer-sponsored short-term disability coverage, but unlike SSDI, this coverage typically lasts 6 months. In addition, state workers' compensation programs generally provide payments and assistance to individuals who are injured on the job, while both SSDI and PDI are designed to replace lost income from the onset of any disability, regardless of whether it was work-related. Also, some workers may be eligible for disability payments through defined benefit pension plans. These benefits, commonly known as disability retirement benefits, provide eligible workers with early retirement payments if they can no longer work because of the onset of a disability. According to BLS data, many workers covered by defined benefit pension plans are state and local workers, though these data do not show what portion of them are not covered by SSDI.

Proposals to Expand PDI

From our literature review, we identified three distinct proposals for expanding PDI in order to potentially alleviate financial challenges facing the SSDI program. These proposals were made in studies authored by: David Babbel and Mark Meyer (Babbel and Meyer) of Charles River Associates, Rachel Greszler (Greszler) of The Heritage Foundation, and David Autor and Mark Duggan (Autor and Duggan) for The Center for American Progress and the Hamilton Project.
While the proposals differed in how PDI expansion might be achieved, each proposal assumes or requires that PDI coverage would provide vocational assistance, workplace accommodations, and partial income replacement to employees with work-limiting disabilities. Each proposal assumed that PDI expansion would result in the provision of effective return-to-work assistance earlier than would occur under SSDI. According to the authors, their proposals would slow the growth of the SSDI program by increasing work attachment of potential applicants or beneficiaries of SSDI and reversing the decline in employment rates of work-capable adults with disabilities, thereby improving the long-term solvency of the Social Security system. Two of the proposals suggested piloting the approaches to assess potential savings and implementation issues.

SSDI Covers a Much Larger Portion of the Workforce than PDI, and Features of Coverage Differ

SSDI Covers Almost All Workers, Whereas PDI Covers About One-Third of Workers, Who Are Generally Higher-Paid

According to our analysis of SSA and BLS data, nearly all American workers pay Social Security taxes and are potentially covered by SSDI, while only a third of workers have PDI coverage. For SSDI, an estimated 96 percent of American workers, along with their employers, pay Social Security payroll taxes, a portion of which are used to fund SSDI. Of individuals aged 20 or older in 2016, 87 percent met the SSA work requirements to be eligible for benefits in the event of a disability. By contrast, as of March 2017, the Bureau of Labor Statistics (BLS) estimates that approximately 33 percent of the workforce is insured by employer-sponsored PDI where the employer pays at least some of the premium. Employees may also pay the entire premium of employer-sponsored PDI—and researchers we interviewed from three private sector organizations that survey the PDI market told us that these plans are a minority of the PDI market. However, neither BLS nor industry surveys comprehensively track the extent of PDI coverage where employees pay 100 percent of the premium cost. In addition, while SSDI coverage is higher across all industries and income levels, PDI coverage is much more prevalent at higher wage levels and in certain occupations and industries than others. In particular, as of March 2017, 60 percent of those in the highest 10 percent of wage earners had PDI, whereas 4 percent of those in the lowest 10 percent did (see fig. 1). Our analysis of BLS data found that differences in PDI coverage also exist by occupation and industry. Specifically, 60 percent of workers in business and financial operations occupations have PDI coverage, compared to 16 percent of workers in construction, extraction, farming, fishing, and forestry occupations (see fig. 2). Broad differences in PDI coverage also exist by industry; for example, 83 percent of workers in utilities have PDI, but only 5 percent of workers in leisure and hospitality have PDI (see fig. 3). According to researchers from one organization with whom we spoke, the higher rates of PDI coverage reflect areas where labor markets are more competitive, leading employers to offer PDI to attract employees.

SSDI and PDI Differ in Many Respects, Including Eligibility, Benefit Level, and Approach to Return to Work

Eligibility

Our review of SSDI program rules and PDI policies indicates that eligibility for PDI is similar in some ways to eligibility for SSDI.
For example, both allow individuals with many types of disabilities to receive benefits until retirement, recovery, or death. However, there are also some significant differences, as SSDI and PDI have different definitions of disability and different employment requirements. According to SSDI program rules, to meet SSDI's definition of disability, an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or to result in death and (2) prevents the individual from engaging in substantial gainful activity. SSA uses a list of medical conditions—established in regulations—that it considers severe enough to entirely prevent an individual from working. Benefits can also be provided for medical conditions that are not on the list if the medical condition or combination of medical conditions meets or equals the severity of those on the list. SSA also considers additional factors, such as an individual's residual functional capacity, relevant past work, age, education, and work experience. SSA can determine that the medical conditions combined with the applicable factors preclude the individual from performing his or her prior work or any other work in the national economy. Once granted, SSDI benefits continue until retirement or death, or until SSA deems that the underlying medical conditions have sufficiently improved or that the individual has become gainfully employed. In contrast, typical PDI policies have provisions related to inability to work that may compensate workers in a wider range of circumstances than SSDI does, although these provisions become stricter after 2 years. For the first 2 years of PDI benefits, policies generally define disability as the inability of an individual to work his or her own occupation. For disabilities that last for more than 2 years, a typical PDI policy changes how it defines disability from the inability to work in one's previous occupation to the inability to work in any occupation offering a reasonable income, which was 60 percent of pre-disability earnings in the three sample policies we examined. Similar to SSDI, PDI benefit payments generally continue for the length of the disability or until retirement; however, unlike SSDI, benefits paid for certain conditions, such as mental health conditions, are generally limited to 2 years. Also unlike SSDI, PDI policies typically include a pre-existing condition provision, whereby benefits are not paid if the applicant received treatment, services, or consultation or took medication for the condition in the 3 months prior to being insured. The requisite time period between the onset of a disability and when benefits can begin is comparable between SSDI and PDI, according to our review of SSDI program rules and PDI policies; however, the time it takes to process and make decisions on claims may run longer for SSDI. For both SSDI and PDI, benefits do not usually start immediately upon disability onset. SSDI and PDI applicants must apply for benefits and usually wait for a period of time—known as a waiting period for SSDI and as an elimination period for PDI—for payments to begin. The waiting period for SSDI benefits is 5 months after disability onset. For the PDI policies we examined, the elimination period ranged from 3 to 6 months after onset. Another factor affecting the time to receipt of benefits is the time it takes to award the benefit.
For PDI, federal regulations under the Employee Retirement Income Security Act of 1974 (ERISA) that govern claims in ERISA-covered plans, including disability claims, generally require initial claims to be decided within 45 days after receipt of the claim by the plan, with some ability to extend for two 30-day periods based on reasons beyond the plan's control. PDI claimants may also appeal the initial decision. On the other hand, based on our review of SSDI program rules, claims for SSDI are not subject to timing requirements established by law. Further, denied claimants may appeal the initial decisions. The average decision time for appeals before Administrative Law Judges in fiscal year 2017 was 605 days, according to SSA's fiscal year 2017 performance report. Our review of SSDI program rules and PDI policies also indicates that individuals are required to have longer employment periods for SSDI eligibility than for PDI, but PDI is generally not portable if the individual leaves the employer offering PDI. According to SSA guidance, individuals become eligible to receive SSDI payments after they have paid the Social Security payroll tax long enough—about 10 years for many—and recently enough to accumulate the required number of credits. In contrast, the three sample PDI policies we reviewed contained 30-day waiting periods for coverage to begin. Further, part-time status and job changes affect PDI eligibility more than SSDI eligibility. SSDI's program rules generally allow the work credits that an individual has accumulated to continue to count toward SSDI eligibility even if the individual is working part-time, changes jobs, or becomes unemployed or otherwise leaves the workforce. In comparison, PDI coverage, which is offered at the discretion of employers, is generally not portable, and may exclude part-time workers altogether—as was the case for the three "typical" PDI policies we reviewed. Nationally, PDI coverage is much more prevalent among full-time workers than part-time workers. According to BLS data, 42 percent of full-time workers and 5 percent of part-time workers have PDI where the employer pays for all or part of the premiums. See Table 1 for a comparison of SSDI and PDI eligibility features.

Our review of SSDI program rules and PDI policies indicates that SSDI benefit levels for individuals are generally lower than PDI but are designed to provide more income replacement for low-income workers than higher-income workers. Under federal law, SSA determines benefit amounts using a progressive formula, whereby low-income beneficiaries receive relatively higher benefit payments based on their average monthly earnings over the course of their career. For calendar year 2018, the formula pays 90 percent of the first $896 of the individual's average monthly earnings, plus 32 percent of the earnings between $896 and $5,399, plus 15 percent of earnings over $5,399. (See fig. 4 for the amount of benefits SSA paid in 2018 according to prior income levels.) Using the formula, we calculated that at average indexed annual earnings of $44,000, the monthly benefit would be $1,693, which is 46 percent of prior average monthly earnings. Under the formula, workers earning less would receive a higher proportion of their prior average, while workers earning the taxable maximum (set at $128,400 for 2018) or more would be eligible to receive $3,042 per month, which is at most 28 percent of their earnings.
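For readers who want to verify the arithmetic, the 2018 formula lends itself to a short worked computation. The sketch below, in Python, uses only the bend points and percentages stated above; the function name and the truncation to whole dollars are our own illustrative simplifications, since SSA's actual rounding rules are more detailed.

```python
def ssdi_monthly_benefit_2018(aime):
    """2018 SSDI benefit formula as described above: 90% of the first
    $896 of average indexed monthly earnings (AIME), plus 32% of
    earnings between $896 and $5,399, plus 15% of earnings over $5,399."""
    benefit = 0.90 * min(aime, 896)
    benefit += 0.32 * max(min(aime, 5399) - 896, 0)
    benefit += 0.15 * max(aime - 5399, 0)
    return int(benefit)  # truncated to whole dollars (simplified rounding)

# Worker with average indexed annual earnings of $44,000 (about $3,667/month):
aime = 44000 / 12
print(ssdi_monthly_benefit_2018(aime))                   # 1693
print(round(ssdi_monthly_benefit_2018(aime) / aime, 2))  # 0.46 replacement rate

# Worker at the 2018 taxable maximum of $128,400 per year ($10,700/month):
print(ssdi_monthly_benefit_2018(128400 / 12))            # 3042
```

The computation reproduces the $1,693 and $3,042 figures cited above and makes the progressivity of the formula visible: the marginal replacement rate falls from 90 percent to 15 percent as earnings rise.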
Under federal law, disabled workers with qualifying dependents may receive additional SSDI payments, up to 50 percent of their individual benefit amount. Therefore, according to our calculations, the maximum family benefit for average annual indexed earnings of $44,000 would be $2,540, which is 69 percent of prior average monthly earnings. By contrast, up to certain income levels, PDI policies typically replace 60 percent of an employee's current salary if the employee is unable to continue working his or her prior job. Therefore, a worker earning $44,000 annually in their prior job would receive $2,200 per month. For high-income workers, PDI policies typically have a monthly maximum payment. In one PDI policy we reviewed, this monthly maximum was $5,000. Employers and employees pay for SSDI through payroll taxes on employees' wages and salaries, so the cost to employers and employees varies based only on employees' wages and salaries. Federal law also determines the part of the payroll tax that is allocated for SSDI, half of which is contributed by the employee and half by the employer. For PDI, either employers, employees, or a combination of the two make premium payments, depending on the policy negotiated between the insurer and the employer. According to industry representatives with whom we spoke, premiums may vary based on many factors, such as wages and salaries, the length of the elimination period, the rate of income replacement, the type of industry, and a company's prior claim experience. Another difference between SSDI and PDI benefit levels is their treatment of partial benefits or partial disability determinations. SSDI program rules do not provide partial payments to individuals who have lost some but not all of their ability to earn income in the national economy. In contrast, some PDI policies may pay benefits for a partial disability. For example, in one of three policies we examined, workers could qualify for partial benefits, at lower levels, if they were partially unable to achieve their previous earnings because of a disability. PDI policies generally require beneficiaries to apply for SSDI and, if found eligible, PDI payments are typically adjusted downward (offset) by the amount of SSDI payments. There is no similar requirement or payment adjustment for SSDI beneficiaries. In cases where PDI beneficiaries are not required to or do not apply for SSDI, the PDI policies we reviewed would still reduce the PDI payment by the SSDI amount that the beneficiaries may have been entitled to receive. The PDI payments would be reduced by the full amount of the SSDI payments, including any SSDI payments for the worker's spouse and dependents, but would typically maintain a minimum PDI benefit. The three PDI policies we reviewed provide a minimum $100 monthly benefit when the SSDI offset would otherwise totally eliminate the PDI benefit or reduce it below $100 a month. According to insurers we interviewed and PDI policies we reviewed, insurers will assist PDI beneficiaries with their SSDI applications and, if necessary, provide legal assistance for SSDI appeals processes. According to one industry survey, 72 percent of PDI beneficiaries also qualified for SSDI. PDI benefits may also be reduced by the amount of income from other sources, such as workers' compensation payments, sick leave, or severance pay from an employer. Additionally, under federal law, SSDI confers Medicare eligibility after 2 years.
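The interaction among the 60-percent replacement rate, the monthly maximum, and the dollar-for-dollar SSDI offset with a $100 floor can likewise be sketched. The terms below are a hypothetical composite of the policy features cited in this report (60 percent replacement, a $5,000 monthly cap, a $100 minimum after offset), not the terms of any actual policy.

```python
def pdi_monthly_payment(monthly_salary, ssdi_family_amount,
                        replacement_rate=0.60, monthly_cap=5000.0,
                        minimum_benefit=100.0):
    """Hypothetical PDI payment combining terms cited in this report:
    60% salary replacement up to a monthly cap, offset dollar-for-dollar
    by SSDI payments (including any dependent benefits), with a $100
    monthly floor. Actual policy terms vary by insurer and contract."""
    gross = min(replacement_rate * monthly_salary, monthly_cap)
    return max(gross - ssdi_family_amount, minimum_benefit)

salary = 44000 / 12  # worker earning $44,000 per year

print(pdi_monthly_payment(salary, 0))     # 2200.0, before any SSDI offset
print(pdi_monthly_payment(salary, 1693))  # 507.0, after a $1,693 SSDI award
print(pdi_monthly_payment(salary, 2540))  # 100.0, floor applies when SSDI
                                          # (with dependents) exceeds gross PDI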
In contrast, insurance associations and our review of PDI policies indicate that PDI policies do not typically provide health care benefits. Table 2 provides a summary of SSDI and PDI benefit features.

Both SSDI and PDI policies include incentives to return to work, such as allowing beneficiaries to retain some earnings when they return to work, but PDI policies may provide return-to-work services sooner than SSDI. As long as they continue to meet SSDI's eligibility criteria, beneficiaries can earn up to the substantial gainful activity amount each month without any impact on their SSDI benefit, according to SSDI program rules. SSDI program rules also provide work incentives in the form of a trial work period, which allows the beneficiary to receive full disability benefits while potentially earning more than the substantial gainful activity amount, for up to 9 months. SSDI beneficiaries who earn above the substantial gainful activity threshold after 9 months of a trial work period will no longer receive SSDI cash benefits, but will continue to receive Medicare coverage, if enrolled, for up to 7 years and 9 months. After the trial work period ends, the 36-month extended period of eligibility begins, during which SSDI beneficiaries are entitled to receive benefits so long as they continue to meet the definition of disability and their earnings are below the substantial gainful activity monthly earnings limit. Moreover, individuals whose benefits stopped due to work may have their benefits reinstated under an expedited reinstatement if, for medical reasons, they become unable to work again at some point within 5 years. Under this expedited reinstatement, beneficiaries receive up to 6 months of temporary cash benefits while SSA conducts a medical review. Despite these SSDI provisions, participants of a 2013 Social Security Advisory Board Forum have criticized the SSDI program for having poorly structured work incentives, and we have previously reported that complex SSDI rules related to these work incentives may result in overpayments to beneficiaries. The PDI policies we reviewed also provide for continued payments while beneficiaries participate in the insurer's return-to-work program or find other employment. However, in contrast to what is referred to as SSDI's "cash cliff," PDI payments are typically reduced gradually to account for the beneficiaries' earnings. For example, in one policy we reviewed, if the beneficiary participates in the insurer's return-to-work program, the beneficiary may continue receiving benefit payments in addition to any employment earnings. However, unlike SSDI, the combination of the employment earnings plus the PDI payment would be capped at 110 percent of the beneficiary's pre-disability earnings. Under this same policy, after the first 12 months that the beneficiary is disabled and working at a reduced capacity, the partial PDI payment decreases proportionally as the employment earnings increase until the beneficiary earns 80 percent of their pre-disability earnings, at which point they are no longer considered to be disabled. The other two policies we reviewed provided pro-rated PDI payments as soon as a beneficiary had some work earnings, until those earnings reached a threshold, such as 80 or 100 percent of their pre-disability earnings.
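To make the contrast concrete, the stylized sketch below compares a post-trial-work-period SSDI "cash cliff" with a pro-rated PDI payment. Both sides are simplifications: the SSDI side ignores the trial work period and extended period of eligibility details described above, the proportional-loss formula on the PDI side is our assumption chosen to be consistent with the pro-rated policies just described, and the $1,180 figure is the 2018 substantial gainful activity amount for non-blind beneficiaries.

```python
def ssdi_payment_after_twp(benefit, monthly_earnings, sga_limit=1180.0):
    """Stylized SSDI 'cash cliff' after the trial work period: full
    benefit if earnings stay at or below the substantial gainful
    activity (SGA) limit, nothing above it."""
    return benefit if monthly_earnings <= sga_limit else 0.0

def pdi_prorated_payment(benefit, monthly_earnings, pre_disability_earnings,
                         cutoff_share=0.80):
    """Stylized pro-rated PDI payment: the benefit falls in proportion
    to earnings recovered and stops once earnings reach a threshold
    share (such as 80%) of pre-disability earnings. The proportional
    formula is our assumption, not a quoted policy term."""
    if monthly_earnings >= cutoff_share * pre_disability_earnings:
        return 0.0
    return benefit * (1 - monthly_earnings / pre_disability_earnings)

pre = 44000 / 12  # about $3,667/month before disability
for earnings in (0, 1000, 1500, 2000, 3000):
    print(earnings,
          ssdi_payment_after_twp(1693.0, earnings),
          round(pdi_prorated_payment(2200.0, earnings, pre), 2))
# The SSDI payment drops from $1,693 to $0 once earnings cross the SGA
# limit, while the PDI payment declines gradually and ends near $2,933
# of earnings (80 percent of $3,667).
```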
Both SSDI and PDI policies offer services and supports to beneficiaries to help them return to work, but PDI policies may focus more on early provision of services and, depending on the policy, earlier intervention and case management. SSDI program rules allow beneficiaries access to return-to-work services and supports through the Ticket to Work (TTW) program, which helps interested beneficiaries transition to self-sufficiency through work. When individuals become eligible for SSDI, SSA guidance calls for sending them information about public or private employment networks or state vocational rehabilitation agencies. According to SSA's guidance, beneficiaries can choose to work with one of these service providers and develop a plan for work goals that may involve services such as training, career counseling, vocational rehabilitation, and job placement. The TTW program then pays for those services and ensures that participating beneficiaries will not be subject to a review of their disability while they continue to work with the service provider. However, the SSA Office of the Inspector General reported that fewer than 3 percent of beneficiaries were participating in TTW in 2015. In addition, SSA-funded evaluations have found that TTW has had limited success in returning SSA beneficiaries to work and reducing their dependence on SSDI. In addition to return-to-work services through TTW, SSA officials told us that beneficiaries may use services provided through or by other federal, state, and local programs or provider networks, such as the Department of Labor's Stay-at-Work/Return-to-Work initiative. However, we have previously reported that the large number of federal agencies and programs providing employment supports to individuals with disabilities represents a fragmented system of services, and little is known about their effectiveness. In contrast, according to insurance representatives and the three PDI policies we reviewed, PDI policies may provide early interventions, funding for workplace accommodations, and case management to help beneficiaries return to work. For example, one policy we reviewed explicitly offered an early intervention program to covered employees even when the PDI insurer was not also the short-term disability insurer, to identify workers who might benefit from vocational analyses and rehabilitation services before they are eligible for long-term disability benefits. Separately, this policy also had a return-to-work program with case managers who coordinate services and refer beneficiaries to clinical specialists, such as nurse consultants, psychiatric clinical specialists, or vocational rehabilitation consultants. According to this policy, if the insurer determined that beneficiaries were capable of participating in the return-to-work program but did not, their benefits could cease. Information on how many PDI beneficiaries receive work assistance, such as worksite modifications, and on insurers' aggregate expenditures for such assistance is also generally unknown. While participation in and the impact of SSA's TTW program have been extensively evaluated, the insurance representatives and researchers with whom we spoke could not provide us with data or studies showing the extent or cost of work assistance provided by PDI insurers, so the impact of these investments is not publicly known. See Table 3 for a comparison of SSDI and PDI policies' work incentives and assistance.
Implications of Proposals to Expand Private Disability Insurance Cannot Be Assessed Due to Incomplete Information

Our literature review identified three distinct proposals to expand PDI—through some type of federal action—as a way to provide savings for SSDI; however, we were unable to assess the implications of these proposals on SSDI. Based on our review, there is an array of complex factors that could influence PDI expansion and SSDI cost savings—factors for which the data, methods, and assumptions for projecting SSDI savings are either unreliable and unsupported, or unavailable. In addition, insurer, employer, and employee stakeholders we spoke with identified other implications of expanding PDI, but these implications cannot be ascertained because the proposals are not sufficiently detailed.

PDI Expansion Proposals Foresee Savings

The three distinct PDI expansion proposals we identified include the following:

David Babbel and Mark Meyer (Babbel and Meyer) of Charles River Associates proposed that voluntary employer-sponsored PDI coverage could be extended to more working Americans through congressional action and the federal government facilitating education and outreach efforts. Specifically, they recommended the enactment of legislation to make it clear to employers that automatic enrollment with "opt-out" arrangements under employer-sponsored group disability plans is legal. The authors believe this will address confusion and uncertainty that is holding employers back from providing PDI.

Rachel Greszler (Greszler) of The Heritage Foundation proposed encouraging employers to voluntarily provide PDI in exchange for a payroll tax credit. Under this proposal, participating employers would qualify for the tax credit by covering the first 2 or 3 years of PDI benefits, at least equivalent to SSDI benefits, to employees. Workers awarded benefits under the employers' PDI would transfer to the SSDI program if their disability continued beyond the first 2 or 3 years and they qualified for SSDI. PDI would then cease to provide benefits, unless employers chose to extend the PDI policies. According to the author, if an individual is denied PDI benefits, the individual could apply for SSDI.

David Autor and Mark Duggan (Autor and Duggan) proposed extending coverage of PDI to all workers through a statutory mandate. Employers would be required to provide PDI benefits for 2 years to individuals with disabilities who are unable to work. At the end of this period, PDI benefits would cease and SSDI would provide benefits for individuals qualifying for SSDI. Under the proposal, individuals with extremely disabling conditions with very limited prospects of returning to work (e.g., stroke or late stages of certain cancers) would be eligible to apply for SSDI at the onset of their disability, in lieu of PDI.

Table 4 summarizes key features of the three proposals to expand PDI that we identified.

Many Unknowns Make an Assessment of the Potential for SSDI Savings Uncertain

Differences in Covered Populations

Existing differences in the SSDI and PDI covered populations may play a role in determining the potential impacts of expanding PDI. As previously noted, SSDI covers almost all workers, whereas PDI coverage tended to be for those with higher wages and was more prevalent in certain industries.
Based on our review of BLS data, in order to expand significantly, PDI would need to cover more lower-wage workers and more of the occupations and industries where it is currently less common, such as retail and construction. However, as indicated in the Autor and Duggan proposal, expanding PDI to workers currently not covered could affect PDI premiums, based on the type of industry and wage levels. According to various stakeholder groups we interviewed, changes in PDI premiums would, in turn, have implications for the attractiveness of PDI to employers and employees under voluntary proposals. The overlap of the PDI and SSDI beneficiary populations also plays a role in determining any potential impact of expanding PDI. As previously noted, one industry survey reported that 72 percent of PDI beneficiaries of its member companies also received SSDI. One insurer told us that the longer PDI beneficiaries remain on PDI, the more likely they are to also receive SSDI. In fact, for beneficiaries on PDI for 2 years, 58 percent also get SSDI benefits, and for beneficiaries on PDI for more than 5 years, more than 90 percent also receive SSDI. Our review of these data suggests that for those receiving both SSDI and PDI benefits, it may be difficult to attribute return to work and other changes in circumstances, such as changes in health, to either PDI or SSDI. For example, it is possible that any differences in return-to-work outcomes for SSDI beneficiaries who receive PDI versus those who do not may have more to do with the specific characteristics and circumstances of the beneficiaries than with having PDI coverage.

Types and Timing of Return-to-Work Assistance Offered

To achieve SSDI cost savings, the three proposals assume that insurers will provide or reimburse employers for providing vocational rehabilitation, workplace accommodation, and return-to-work services, but the proposals provide few, if any, details about how this would occur. For example, the two proposals that describe voluntary PDI enrollment do not explicitly require that such services be provided through PDI. The Autor and Duggan proposal, which includes a mandate for enrollment, requires that PDI provide workplace accommodations consistent with the Americans with Disabilities Act (ADA) and vocational rehabilitation services. The proposal includes a list of vocational rehabilitation services that insurers could provide, but the authors acknowledge that in practice it is not always "clear-cut" when a "reasonable accommodation" under the ADA is required and what the accommodation should be. As noted previously, we were unable to find public data on the extent to which PDI policies currently provide such services, and the insurance representatives and researchers we contacted that collect and report PDI data said that they do not collect such data from insurance companies. According to our review of PDI policy provisions that allow for rehabilitation and workplace accommodation services, the decision of what assistance, if any, will be provided through the PDI policy, and the extent of such assistance the insurer provides or helps the employer provide, is at the discretion of the insurer.
It is also possible that insurers would make less of an investment in return-to-work services for PDI beneficiaries under the two time-limited proposals because the insurers are only responsible for 2 to 3 years of disability payments, compared to traditional PDI policies, where the insurer may have financial responsibility to make payments to beneficiaries until they reach retirement age unless it can help them return to work. Several stakeholders said that additional uncertainty exists with respect to the effectiveness or attractiveness of PDI expansion proposals for populations currently not covered by PDI, such as low-wage workers and those with physically demanding jobs. BLS data show that PDI is currently less prevalent among these workers, and therefore less is known about the type and effectiveness of return-to-work services that would be offered to them under PDI expansion. For example, researchers report that lower-wage workers may have jobs that offer limited opportunities to adjust work schedules—a flexibility that one research group said could assist workers in the case of disability. In addition, researchers stated that lower paying jobs tend not to offer sick leave and other key benefits, and the absence of such benefits may present another potential obstacle to successful rehabilitation and workplace accommodation efforts. According to various stakeholder groups we interviewed, employers in low-paying industries, or who otherwise do not offer these benefits, would have less of an incentive to offer PDI or other supports to help retain their workers compared to employers who compete for skilled employees, who are also typically more difficult to replace. These factors—in combination with previously discussed unknowns related to the cost of PDI in non-traditional sectors—reflect complexity and uncertainty about the extent to which PDI may be expanded through a voluntary system. The proposals assert that expanded PDI would provide financial support, accommodations, and rehabilitation services much sooner than SSDI. However, based on our review, it is not clear if this would happen for two of the proposals. As previously noted, the SSDI waiting period is 5 months, after which SSDI beneficiaries become eligible for return-to-work assistance through the TTW program and financial incentives, but lengthy SSA decision times may significantly delay when individuals receive return-to-work supports. The Autor and Duggan mandatory proposal has PDI benefits commencing within 3 months of disability onset, which is sooner than SSDI and therefore, depending on the circumstances, may allow for the provision of return-to-work services sooner than under SSDI. The Babbel and Meyer and Greszler voluntary proposals do not specify the length of elimination periods. While the Babbel and Meyer and Greszler proposals indicate PDI will provide return-to-work services sooner than SSDI, it is unclear whether or how the timing of return-to-work services might evolve under the two voluntary proposals. Moreover, while data exist on SSDI initial and appeal decision times, we were unable to find current industry-wide data on the average decision period for PDI, or on the extent of appeals and how long on average these take to decide.
Other Factors That Could Affect PDI Enrollment and SSDI Cost Savings

Based on our review of the PDI expansion proposals and interviews with stakeholder groups, we identified several additional factors that could affect the extent to which the PDI proposals could increase PDI coverage and result in SSDI cost savings, especially under the two voluntary proposals (Babbel and Meyer and Greszler). Such factors include the likelihood that efforts to encourage PDI enrollment might be successful, the effect of policy premiums and tax credits on employers' willingness to offer PDI policies, and whether expanded PDI might lead to more people also going on to SSDI. Babbel and Meyer asserted that congressional action and federal outreach would clarify for employers that automatic enrollment with opt-out arrangements is legally permissible and would thereby result in voluntary PDI expansion. According to the authors, their approach was motivated by the success of similar automatic enrollment provisions in the Pension Protection Act of 2006 in raising the participation and savings rates in 401(k) defined contribution savings programs. However, since PDI automatic enrollment is already available, it is not clear how their proposal for congressional action and federal outreach would result in more employers adopting it and more employees participating. The Babbel and Meyer proposal is also based on requiring employees to pay part or all of the insurance premiums. According to employee advocacy groups, workers at the lowest end of the wage spectrum in particular may have little, if any, disposable income to pay for PDI, and also little incentive to participate when SSDI already replaces a relatively high proportion of their wages. Further, in an employer discussion session we heard that employees willing to pay part or all of the premium may also have a greater risk of needing PDI benefits, and the resulting adverse selection could lead to higher premiums, which, in turn, fewer workers may be willing to pay. The Greszler proposal anticipates potentially significant savings for SSDI, assuming that employers who had not previously offered PDI to their employees would opt to offer PDI in exchange for a payroll tax credit. According to an employer association, in making this choice, employers would need to compare the financial benefit of a payroll tax credit with the cost of PDI premiums, among other things—which may evolve under the proposal, according to insurers in a discussion group we held. According to insurance industry representatives, the direction of possible premium changes under the Greszler proposal is unclear because the proposal reduces employers' financial responsibility to 2 to 3 years of potential disability benefit costs. This shorter benefit period could reduce premiums relative to those typical for longer-term policies. However, since persons would not generally be able to receive SSDI benefits during this period under the proposal, there would be no offset of SSDI benefits against PDI benefits (as discussed earlier). According to an industry association and an insurer discussion group we held, the absence of the SSDI offset could increase premiums, possibly substantially. Another consideration raised by SSA officials is whether PDI expansion would increase SSDI applications and benefits paid, which would reduce potential SSDI savings from the proposals and could increase the cost of the SSDI program.
Typical PDI policies may effectively require PDI beneficiaries to apply for SSDI, and PDI insurers may assist beneficiaries with SSDI applications. Insurance association representatives told us that, in addition to helping keep PDI premiums attractively low, such practices benefit those who become eligible for SSDI benefits by providing health care benefits that they might not otherwise be able to access. One insurance association further noted that by helping PDI beneficiaries complete SSDI applications, SSA may receive well-supported applications that are more efficient to process. On the other hand, researchers and SSA officials indicated that such PDI practices may result in some individuals applying for and receiving SSDI who would not have otherwise done so.

Cost Saving Estimates Are Unreliable or Unsupported

Each proposal states that expanding PDI would reduce SSDI costs. The proposals indicate that this would be achieved mainly through PDI early intervention after employees' onset of disabilities and a resulting reduction in the number of SSDI claimants or the duration of SSDI beneficiaries' status. Only the Babbel and Meyer proposal developed an estimate of potential savings. In forecasting SSDI savings, the Babbel and Meyer proposal estimated cost savings by assuming that automatic enrollment would result in PDI coverage increasing from 33 percent to just over 50 percent of private sector employees. Comparing PDI disability termination rates from the Society of Actuaries with SSDI termination rates, Babbel and Meyer estimated that PDI expansion would save the federal government an additional $500 million to $700 million per year, with a 10-year cumulative savings of $5 billion to $7 billion. They said that because they were unable to conduct a rigorous and comprehensive study of disability, recovery, and reemployment, their proposal "quantifies the benefits of group disability insurance indirectly, using publicly available data that are sparse, aggregated, and often difficult to interpret." The Greszler proposal relied on the Babbel and Meyer analysis in concluding that there would be significant SSDI savings. According to Greszler, early intervention would keep individuals on the job and reduce the number of potential SSDI beneficiaries. Further, Greszler assumes that the loss of tax revenue from the proposed payroll tax credit would be made up by lower SSDI expenditures during the 2 to 3 years that employees are covered by PDI instead of SSDI. However, the Greszler proposal did not quantify the magnitude of the tax credit or the overall savings to SSDI. The Autor and Duggan proposal noted that SSDI expenditures would be lower because mandated PDI policies would pay the first 2 years of benefits, instead of SSDI. The authors also noted that, over the longer term, 2-year mandated PDI for employees has the potential to pay for itself and generate SSDI savings if the proposed mandate succeeds in allowing 1 in 11 would-be SSDI beneficiaries to remain gainfully employed. However, the proposal did not explain how the mandate would achieve this result, nor did it include data or evidence supporting it. Our analysis of the Babbel and Meyer proposal found that the available data used to develop the estimates of SSDI cost savings from PDI expansion were not comparable and therefore did not result in a reliable estimate of the financial impact of current or expanded PDI on SSDI.
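The comparability problem detailed in the next paragraph can be illustrated with a purely hypothetical computation. The cohort numbers below are invented for illustration only and are not estimates of actual SSDI or PDI experience; the point is that the two rates differ both in what they count (the numerator) and in what they divide by (the denominator).

```python
# Purely illustrative cohort -- not actual program data.
beneficiaries = 1000      # people receiving benefits during the year
benefit_months = 10500    # cumulative months of benefits paid in the year
work_terminations = 30    # left the rolls due to earnings above the SGA amount
other_terminations = 50   # medical improvement, paperwork lapses,
                          # own-occupation/any-occupation changes, etc.

# SSDI-style work termination rate: work terminations per beneficiary.
ssdi_style_rate = work_terminations / beneficiaries
print(f"SSDI-style rate: {ssdi_style_rate:.3f} per beneficiary-year")  # 0.030

# PDI-style recovery rate: all terminations per benefit-month.
pdi_style_rate = (work_terminations + other_terminations) / benefit_months
print(f"PDI-style rate:  {pdi_style_rate:.4f} per benefit-month")      # 0.0076
print(f"Annualized:      {pdi_style_rate * 12:.3f} per benefit-year")  # 0.091

# Even when computed from identical underlying experience, the two rates
# are not comparable: the numerators count different events and the
# denominators use different units, so neither supports a direct
# SSDI-versus-PDI recovery comparison.
```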
In their proposal, Babbel and Meyer estimated cost savings by comparing SSDI's and PDI's recovery rates. For SSDI, Babbel and Meyer used an SSDI work termination rate: the number of SSDI beneficiaries terminated during the year due to having earnings that exceeded the substantial gainful activity (SGA) amount, divided by the total number of SSDI beneficiaries during the year. For PDI, Babbel and Meyer used a PDI recovery rate: the number of PDI benefit awards terminated during the year for multiple reasons, divided by the cumulative number of months of PDI benefits received by all PDI beneficiaries during the year. However, we found that the numerators and denominators used to compare SSDI and PDI recovery rates are not comparable. For example, the PDI numerator reflects a much broader definition of recovery than the SSDI numerator, which may contribute to overestimating PDI's relative recovery rate. Specifically, the SSDI numerator is limited to those terminated from SSDI for earnings exceeding SGA, whereas the PDI numerator includes terminations for reasons besides return to work, such as medical improvement (even if an individual did not return to work), failure to submit required documents to continue receiving benefits, changes in coverage from inability to perform one's own occupation to any occupation, and other non-specified terminations. The denominators used in the comparison also differ. For SSDI, it is the number of people receiving SSDI benefits during the year. For PDI, it is the cumulative number of months of PDI benefits received by all PDI beneficiaries during the year. Because the denominators are different, we were unable to determine whether they contributed to an under- or overestimate of PDI's relative recovery rate. Regardless, we determined that the non-comparable rates in Babbel and Meyer's proposal undermine the reliability of its cost savings estimate. SSA's Office of the Chief Actuary also reviewed the proposal at our request and concluded that the SSDI and PDI termination rates shown in the proposal were comparable neither in concept nor in unit of analysis. Even with common units of analysis in SSDI and PDI termination rates, estimates of the impact of PDI on SSDI would also need to consider the other differences that we described above, such as differences in covered populations.

Authors Suggested Proposals Be Pilot Tested

The authors of two proposals we spoke to suggested that any proposal to expand PDI should be pilot tested before being implemented nationwide, due to the number and complexity of factors involved and their potential effect on SSDI. For example, in their proposal, Autor and Duggan noted that, given the inevitable challenges and uncertainties associated with rolling out a major program innovation, it would be desirable to phase in such a plan and to run pilot programs in a limited number of states. They also suggested that pilot programs could be targeted, such as to larger firms. In discussing the Greszler proposal with the author, she told us that a pilot test of her proposal might help show whether the program works better in some industries or occupations than others, as well as determine how employers respond to the tax incentive and whether employees feel they are treated fairly by private insurers.
Similarly, we have previously reported that changes affecting the SSDI program may raise particular implementation challenges, given the program's inherent complexity; any changes may require pilot testing to evaluate the potential effects or unintended consequences that the Congress, the administration, SSA, and the broader public will need to know about to make an informed decision on whether to implement program changes nationwide. SSA and DOL have funded and overseen pilot programs to test other proposals to help individuals with disabilities participate in the workforce.

Missing Details Make Identifying the Implications of the Proposals for Stakeholders Uncertain

Employee advocacy groups, employers, and insurance companies we spoke with raised various questions and concerns about the potential impacts of expanding PDI—implications that the proposals did not explicitly or fully address and that therefore remain uncertain. The proposals also provided few details on any oversight role that would be needed by federal or state governments.

Potential Impacts on Employees

The proposals assert that employees could potentially benefit in the event of a disability from PDI cash benefits that may be higher than SSDI benefits; however, this outcome is not certain. Based on our review of SSDI and PDI policies and interviews with employee and advocacy groups, whether or not workers would opt for PDI benefits under voluntary expansion would depend on the attractiveness of PDI relative to SSDI and other benefits. For example, an employee benefits survey and several stakeholder groups we spoke with suggested that employees tend to value other benefits, such as health insurance, more than disability insurance. According to employee groups, lower-wage workers, in particular, may opt out of PDI under the Babbel and Meyer proposal in favor of paying for other benefits, or forgo benefits entirely, especially if premiums are high. In addition, based on our review, PDI may not provide much additional benefit for lower-wage workers, and employee groups told us that, given a choice, lower-wage workers might choose not to participate in PDI since SSDI benefits replace a relatively high share of their wages. Based on our review of SSDI and PDI policies, current PDI policies typically do not include the dependent and spousal benefits offered by SSDI and, unlike SSDI, have exclusions and pre-existing condition provisions, as well as time limits on benefit payments for some conditions, which may result in workers finding SSDI more attractive than PDI. To the extent that employees see PDI benefits as less attractive than SSDI and their willingness to participate in PDI declines, cost savings to SSDI resulting from voluntary PDI proposals would likely be affected. Two employee advocacy groups also expressed concern that all three proposals focus on employer-provided PDI, and two of the three proposals do not explicitly address self-employed and part-time workers. As we have previously noted, an increasing number of people are part of the contingent workforce, with limited access to employer-sponsored benefits. Other individuals may have already left the workforce or otherwise be unemployed and thus have no connection to an employer.
Further, two employee advocacy groups explained that individuals who will eventually be unable to work due to a disability initially experience symptoms that may cause them to work part-time or take a different position or job, which may affect their access to PDI through their current or new employer. On the other hand, the proposals allow persons not covered under the proposals to apply for SSDI. Two employee advocacy groups also expressed concern that workers who are auto-enrolled under the Babbel and Meyer proposal may not make an informed choice about participating due to the complexity of disability contracts. One employee advocacy group was particularly concerned for low-wage workers who may be struggling financially and cannot afford disability insurance, but do not initially opt out of coverage because of inertia, language barriers, or not understanding the product, including the tradeoffs involved in choosing to keep it or opt out. Employee advocacy groups told us that more needs to be done to get SSDI beneficiaries back to work, but noted a range of concerns about using PDI to do this. Their concerns included the following:

- Employers are moving away from providing other key employee benefits, such as health care benefits (which may be more important to workers than PDI and without which PDI would be less effective).
- Employers are moving away from full-time employment (which is usually a stipulation of PDI policies).
- Employers might discriminate in not hiring individuals at higher risk of disability under proposals that make employers responsible for the first few years of providing disability assistance.
- The transition from receiving PDI to qualifying and getting approved for SSDI under the proposals might delay receipt of SSDI.
- Insurers might not actually provide rehabilitative and accommodation services.
- There would not be standardization of PDI eligibility determination, coverage, and appeal processes to ensure fair and equitable treatment of workers.

All employee advocacy groups we spoke to emphasized the need for consumer protections and strong oversight under the PDI proposals. One employee advocacy group said that there are too many problems, gaps, and concerns with the proposals to expand PDI, when SSDI already provides near universal coverage and is a system that is up and running. Moreover, the employee advocacy group said that SSA could identify the key reasons that PDI has had success in getting people back to work and incorporate those lessons into SSDI, because more effort needs to be spent improving SSDI and increasing its return-to-work efforts.

Potential Impact on Employers

Individual employers and employer associations we spoke to said that more details would be needed to determine how they might be affected by the proposals. Regarding the Babbel and Meyer proposal (which, as previously discussed, cites the need for congressional action to address potential legal uncertainties regarding automatic enrollment), one employer association representative expressed concern about whether state garnishment laws would prohibit employers from making automatic deductions for PDI premiums from employees' pay without their permission. Regarding the Greszler proposal, employers and representatives of an employer association we spoke to indicated they would need to know more details, such as the exact amount of the tax credits and how insurance premiums might be affected.
Regarding the Autor and Duggan proposal, representatives of the two employer associations stated that their members would oppose a mandate. One employer association said there are often additional requirements that come along with any mandate, even for actions that employers are already taking, such as offering PDI. The employer association also expressed concern that doing more than is required under any mandate generally exposes employers to liability, which could result in employers providing only the minimum benefits and assistance required by law. One employer said that mandated PDI could crowd out the amount of other benefits an employer is willing to provide, such as the amount of medical coverage that it offers to employees. In addition, one employer association we spoke with was concerned about the potential administrative burdens associated with expanding PDI, particularly for small employers. Its representatives noted that administering any benefit requires financial resources to provide, monitor, and maintain the benefit, stating that once employers provide a benefit to employees, they are generally reluctant to take it away. Employers in one discussion group we held were also concerned that providing disability assistance through a PDI policy for 2 to 3 years under two of the proposals would require that they retain employees and provide benefits even when employees are unable to continue work. Employers in a discussion group and an employer association we spoke to also wanted to know how the PDI plans would be overseen at the state and federal levels under the proposals, and what additional requirements that would entail.

Potential Impact on Insurance Companies

Insurance companies and associations we spoke to generally supported efforts to expand PDI, but also expressed some concerns about related unknowns. In particular, insurers in one of our discussion groups and both insurance associations we spoke with supported the Babbel and Meyer proposal to encourage employers to automatically enroll employees, with an opt-out provision, which came out of a study funded by America's Health Insurance Plans (AHIP) and the American Council of Life Insurers (ACLI). One insurance association said that, relative to the other proposals that provide a tax credit or mandate coverage, the Babbel and Meyer proposal to expand PDI would not be a problem for insurers' capacity. Representatives from this insurance association suggested first taking the initial steps proposed by Babbel and Meyer to encourage automatic enrollment before considering a more major restructuring of PDI that would supplant SSDI for 2 to 3 years. On the other hand, two insurance associations expressed concerns about the potential for additional requirements that could result from implementing the Babbel and Meyer proposal, for example in relation to employee consent or the quality of coverage offered. Insurance associations and insurers we spoke with also raised concerns about the other two proposals (Autor and Duggan, and Greszler), especially related to how they would fundamentally and unpredictably change the PDI market. On the one hand, one insurance association pointed out that these proposals would eliminate the SSDI offset from PDI payments for the 2- to 3-year period, which could lead insurance companies to significantly increase PDI premiums for such policies. On the other hand, if the insurance company is only liable for 2 to 3 years of benefit payments and services, this could reduce insurers' costs.
In addition, in one insurer discussion group that we held, insurers said that if there were no SSDI offset of PDI benefits during the 2- to 3-year period, the industry would be more aggressive about return-to-work efforts. However, in another insurer discussion group, we heard that insurers would do less for return to work under such policies, because future savings to the insurance company are not as great under a 2- to 3-year policy as when the insurance company is liable for paying benefits until an individual reaches normal SSA retirement age, as with current policies. Finally, representatives from one insurance association said that the Greszler and Autor and Duggan proposed PDI expansions would create extreme capacity problems for insurers. Under the Autor and Duggan proposal, nearly all employees would need to be covered. One insurance association also noted its view that SSDI might benefit under the proposals to expand PDI. Specifically, the insurance association said that after someone goes through the PDI claim process, a subsequent claim for SSDI may be of higher quality, potentially reducing the administrative costs of a subsequent SSDI determination.

Potential Impact on Federal and State Governments' Oversight

The three proposals we reviewed did not specify the government's role in overseeing the expanded PDI market. Babbel and Meyer proposed a stronger federal role in encouraging automatic enrollment by passing a law to clarify its permissibility, but the proposal did not provide details on implementation and oversight. Greszler proposed that participating employers provide benefits at least equivalent to SSDI benefits, but provided no other details on how compliance would be overseen. Neither the Autor and Duggan nor the Greszler proposal addressed whether individuals denied PDI could apply for SSDI within the 2- to 3-year period covered by their proposals. Stakeholders we spoke to expressed divergent perspectives on whether federal and state governments would need to provide additional regulation, supervision, or oversight related to expanded PDI markets. One insurance association said that insurance providers are already very well regulated under ERISA and by states, and a major insurer said that there already exists an array of federal and state laws governing employer-sponsored PDI coverage that establishes a robust regulatory framework for protecting participants. In contrast, representatives from all employee advocacy groups we spoke with cited problems identified with private insurance company practices and stressed the need for additional consumer protections and government oversight. We found past instances of federal and state enforcement actions regarding improper disability insurance practices that potentially affected hundreds of thousands of people over many years, as well as more recent rulemaking by DOL stating that "disability cases dominate the ERISA litigation landscape." These actions suggest that expanding PDI or including new PDI requirements, in lieu of SSDI, would likely involve some degree of additional federal and state oversight. Any costs associated with expanded state and federal roles would reduce potential cost savings from the proposals, although the extent to which this might affect the Disability Insurance Trust Fund is unclear.
According to DOL officials, an expansion in the number of private disability benefit plans and an increase in the complexity of the legal requirements governing the design and operation of such plans would require DOL to provide proportionally more interpretive guidance, compliance assistance, and enforcement and oversight activities. Estimating the potential impact of the proposals on DOL’s functions and capabilities would require more specific information on the statutory and regulatory changes envisioned by the proposals and the likely impact of those changes on the private disability plan marketplace. SSA officials said that whether SSA would experience an expanded role would depend on any changes in law regarding the proposals. Agency Comments We provided a draft of this report for review and comment to SSA and DOL. Neither SSA nor DOL provided written comments, although both provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Commissioner of Social Security, the Secretary of Labor, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. Appendix I: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, key contributors to this report included Michele Grgich (Assistant Director), Dan Meyer (Analyst-in-Charge), Lucas Alvarez, and Seyda Wentworth. Other contributors include: James Bennett, Ramona Burton, Holly Dye, Sarah Gilliland, Emei Li, Carol Petersen, Monica Savoy, Almeta Spencer, and Adam Wendel.
Why GAO Did This Study SSDI, which is administered by SSA, provides financial and other assistance to qualifying individuals who are unable to work due to their disabilities. SSDI is primarily funded by employee and employer payroll taxes that are placed in the Disability Insurance Trust Fund, which is currently projected to be unable to pay full benefits starting in 2028. While there are a number of ways to address the fiscal condition of the Disability Insurance Trust Fund, some researchers have proposed expanding employer-provided PDI. GAO was asked to review whether expanding PDI could result in potential savings to the Disability Insurance Trust Fund. This report examines (1) what is known about how coverage and key features of SSDI and PDI compare, and (2) the potential implications of three distinct proposals to expand employer-sponsored PDI on the Disability Insurance Trust Fund and various stakeholders. GAO analyzed data on SSDI and PDI coverage from SSA and BLS for 2016 and 2017; reviewed relevant federal laws, regulations, and guidance; reviewed three PDI policies that three large insurers we selected described as typical for their companies; reviewed three distinct proposals to expand PDI identified through a literature review; and interviewed SSA and Department of Labor officials, authors, researchers, and representatives of insurance, employer, employee, and disability groups for a range of perspectives. What GAO Found GAO's analysis found that coverage and key features of Social Security Disability Insurance (SSDI) and long-term employer-sponsored private disability insurance (PDI) differ in a number of ways. Key differences include the number of workers covered; characteristics of covered workers; and eligibility, benefits, and return-to-work assistance. For example: According to GAO's analysis of Bureau of Labor Statistics and Social Security Administration (SSA) data, SSDI covers an estimated 96 percent of workers, while 33 percent of workers have PDI coverage through their employers. Also, PDI coverage is more prevalent among workers with higher wages (e.g., management positions) and in certain business sectors (e.g., finance). GAO's review of SSDI and PDI policies found that some PDI policies may pay benefits for medical conditions that SSDI would not. However, these PDI policies may impose time limits on payments for mental health and musculoskeletal disorders, while SSDI does not. In addition, while both SSDI and PDI policies include features designed to help beneficiaries return to work, PDI policies may provide such supports more quickly than SSDI. GAO's review of the literature identified three distinct proposals for expanding PDI that the proposals' authors believe would address SSDI's fiscal challenges. Specifically, all three proposals suggest that cost savings for the Disability Insurance Trust Fund could be expected by expanding PDI. According to the proposals, this would happen because expanding PDI would provide workers earlier access to cash and employment supports, which would reduce the number of SSDI claims or the length of time SSDI benefits are paid to claimants. However, GAO's review of the three proposals noted that none of them provide enough information to assess how SSDI enrollment and costs might be affected by an expansion of PDI. Therefore, it is unclear whether cost savings to the Disability Insurance Trust Fund would actually be realized.
For example, the proposals do not provide information on the type and timing of return-to-work services that would be provided under expanded PDI, nor do they take into account the differences in the populations served by SSDI and PDI policies. Moreover, stakeholders that GAO interviewed about these proposals raised a number of issues about other implications of PDI expansion that the proposals do not explicitly or fully address. For example: Insurers told GAO that it was unclear how expanding PDI would affect PDI premiums and the impact this would have on enrollment. Employers told GAO they were concerned about potential additional requirements or administrative burdens that would be placed on them if PDI were expanded. Employee and disability advocacy groups told GAO they were concerned about whether PDI expansion would provide standard services or employee protections currently available under SSDI, especially with respect to PDI expansion proposals that would replace SSDI for 2 years.
Background Advance Directives and POLST Forms Decisions about end-of-life care are based on an individual’s personal beliefs and values. Advance care planning documents, including advance directives and POLST forms, allow individuals to express their wishes for end-of-life care. These documents serve different purposes depending on an individual’s stage of life or health condition. (See fig. 1.) According to a report by the Institute of Medicine, advance care planning documents are most effective when used as part of broader advance care planning efforts, which may involve multiple, in-depth discussions with family members and health care providers. The report also stated that multiple discussions at various stages of life are needed, with greater specificity as an individual’s health deteriorates, because an individual’s medical conditions and treatment preferences may change over time. Therefore, a comprehensive approach to end-of-life care, rather than any one document, helps to ensure that medical treatment given at the end of life is consistent with an individual’s preferences. Advance Directive An advance directive is a written instruction recognized under state law and relating to the provision of health care when an individual is incapacitated. For example, an advance directive may be used to record an individual’s wish to receive all available medical treatment, to withdraw or withhold certain life-sustaining treatments, or to identify an agent to make medical decisions on the individual’s behalf if necessary. The most common advance directive documents are living wills and health care powers of attorney. Life-Sustaining Treatment Life-sustaining treatment means the use of available medical machinery and techniques, such as heart-lung machines, ventilators, and other medical equipment and techniques, that may sustain and possibly extend life, but which may not by themselves cure the condition. Living will. A living will is a written expression of how an individual wants to be treated in certain medical circumstances. Depending on state law, a living will may permit an individual to express whether they wish to be given life-sustaining treatment in the event they are terminally ill or injured, to decide in advance whether they wish to be provided food and water via intravenous devices (known as tube feeding), and to give other medical directions that affect their health care, including at the end of life. A living will applies to situations in which the decision to use life-sustaining treatments may prolong an individual’s life for a limited period of time and in which not obtaining such treatment would result in death. Having a living will does not mean that medical providers would deny medications and other treatments that would relieve pain or otherwise help an individual be more comfortable. Health care power of attorney. A health care power of attorney is a document that identifies a health care agent—also called a health care proxy—as the decision maker for the patient. Under state law, the health care power of attorney typically becomes operative when an individual is medically determined to be unable to make decisions. Most commonly, this situation occurs either because the individual is unconscious or because the individual’s mental state is such that they do not have the legal capacity to make decisions. As with living wills, the process for validly executing a health care power of attorney depends on the state of residence.
The health care power of attorney may be completed by using a model form in state statute or it may be drafted specifically for an individual by a lawyer. Similar to the living will, medical providers will make the initial determination as to whether an individual has the capacity to make their own medical treatment decisions. Most adults in the United States do not have an advance directive. According to a 2017 study, about 37 percent of adults had an advance directive. However, the proportion of individuals with an advance directive can vary by demographic group. See appendix I for more information related to the prevalence of advance directives. POLST Form POLST forms differ from advance directives in that they are medical orders used to communicate an individual’s treatment wishes, and are appropriate for individuals with a serious illness or advanced frailty near the end of life. For these individuals, their current health status indicates the need for medical orders. In the event of a medical emergency, the POLST form serves as an immediately available and recognizable medical order in a standardized format to aid emergency personnel. Following the POLST form orders, emergency personnel can honor the individual’s treatment wishes as communicated to and documented by the individual’s health care provider. See appendix II for information on the types of information included on a POLST form. Information on Completing and Storing Advance Care Planning Documents Both government and non-government organizations, such as state agencies or the National POLST Paradigm, provide individuals and providers information on how to access or download blank advance care planning documents through their websites and education campaigns. For Medicare and Medicaid providers, the Patient Self-Determination Act requires certain providers participating in these programs—such as hospitals and nursing homes—to maintain written policies and procedures to inform individuals about advance directives, and document information about individuals’ advance directives in their medical records. Once the advance care planning documents are completed, individuals and providers can access them through various systems. For example, an individual may have their advance directive or POLST form in their electronic health record (EHR), which can be accessed by their provider or other medical personnel in the event that the individual has a medical emergency. In addition, advance directives can be stored in a lawyer’s office or in an individual’s home; these documents would have to be found and transported to the medical setting if needed. Some states have registries (either electronic or paper-based) for advance directives or POLST forms, whereby individuals and providers can access the registry and obtain the necessary documents.
However, the amount of available information about advance care planning varied by state. The information available online ranged from a single advance care planning document available for download to extensive information on advance care planning. For example, in Mississippi, the State Board of Medical Licensure provided a POLST document that could be downloaded from its webpage with no additional information. In contrast, California—through its state attorney general’s website—offered a blank advance directive document that could be downloaded, as well as additional information on advance directives, including who should fill out particular types of advance care planning documents, and the importance of filling out these documents; and other resources, including brochures or information packets detailing advance care planning and other relevant documents. About One-Quarter of States Had Registries for Completed Advance Directives, POLST Forms, or Both To give providers, individuals, or both access to completed advance care planning documents, about one-quarter of states (14) had active registries (either electronic or paper-based) of completed advance directives, POLST forms, or both, as of November 2018. (See fig. 2.) Specifically, 3 states had active registries for both completed advance directives and POLST forms; 8 states had active registries solely for completed advance directives; 2 states had active registries solely for completed POLST forms; 1 state had an active registry for completed advance directives and was piloting registries for completed POLST forms; and 37 states did not have active registries for either advance directives or POLST forms. The 14 states with active registries varied in how they administered them. Some states’ registries were administered through state agencies or by contracting with an outside organization. For example, in Oregon, the state contracted with a large health system in the state to operate the technical aspects of the state’s POLST registry, while in Vermont, the Department of Health administered the state’s registry with technical support from a private national document registry company. For other states—such as New York, Virginia, and West Virginia—the state registries were administered through non-government organizations in collaboration with state agencies.
In addition, officials from both national and state stakeholder organizations identified challenges to providers properly counseling their patients about advance care planning, either because providers avoid discussing death and dying with their patients or because of their own uncertainties about when to hold such discussions. In addition to challenges related to having advance care planning conversations, individuals and providers may not understand that filling out the document is voluntary or how to complete and follow the advance care planning document, according to officials from national stakeholder organizations and officials in the four selected states. Officials from national stakeholder organizations and articles we reviewed noted that challenges with voluntarily completing advance care planning documents can arise when there are language or cultural barriers to understanding these documents. When individuals or providers do not understand the information being requested in advance care planning documents, it can affect whether an individual’s wishes for care are accurately represented. A state agency official in one state identified challenges in ensuring emergency medical services (EMS) providers understand the appropriate actions to take when they encounter a document that is different from a traditional POLST form. For example, the state official noted that EMS providers might assume that individuals who have a wallet card on their person do not want CPR when the card actually indicates that the individual has completed an advance directive or POLST form to express their care wishes. This could result in treatment that does not match the individual’s expressed wishes. Once advance care planning documents are completed, additional challenges exist to ensuring that providers have access to these documents when needed, such as in an emergency situation. Officials from the national stakeholder organizations, state agencies, and state stakeholder organizations we interviewed identified challenges related to accessing advance directives and POLST forms stored in EHRs. Specifically, stakeholders identified challenges related to EHR interoperability, such as where a provider in one health system cannot access advance care planning documents recorded in an EHR at a different health care system. While interoperability is not limited to advance care planning documents, the challenges associated with accessing advance care planning documents in EHRs can affect providers’ abilities to honor an individual’s wishes in an emergency if they do not have ready access to the documents. For example, when emergency providers cannot readily access advance care planning documents in another health system’s EHR, the providers might not be aware of the wishes of someone they are treating in the emergency room and might provide treatment inconsistent with those wishes. National stakeholder officials also noted challenges due to a lack of standardization in EHR systems. For example, one national stakeholder official noted that EHR systems in health care facilities do not always have standardized processes for storing advance care planning documents—that is, one health care facility might enter advance directive information into a physician’s notes section of the EHR, while another might have a specific tab in the EHR for advance directives. Due to the lack of standardization, providers might not be able to find an individual’s advance care planning document, and consequently provide treatment inconsistent with the individual’s expressed wishes.
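To illustrate what standardized storage and retrieval could look like in practice, the sketch below queries a hypothetical HL7 FHIR server for a patient's advance care planning documents using the DocumentReference resource. This is a minimal sketch, not a description of any system the stakeholders discussed: the server URL and patient identifier are made up, and the LOINC document-type code shown is one commonly associated with advance directives and should be confirmed against whatever profile a real system uses.

```python
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint
ADVANCE_DIRECTIVE_LOINC = "42348-3"  # commonly used code for advance directives (assumption; verify)

def find_advance_care_documents(patient_id):
    """Search a FHIR server for a patient's current advance care planning documents.

    Returns a list of (description, url) tuples pointing at the stored
    documents, or an empty list if none are indexed under the expected code.
    """
    response = requests.get(
        f"{FHIR_BASE}/DocumentReference",
        params={
            "patient": patient_id,
            "type": f"http://loinc.org|{ADVANCE_DIRECTIVE_LOINC}",
            "status": "current",
        },
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    bundle = response.json()
    documents = []
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        for content in resource.get("content", []):
            attachment = content.get("attachment", {})
            documents.append(
                (resource.get("description", "advance care planning document"),
                 attachment.get("url"))
            )
    return documents

# Hypothetical use by an emergency department system:
for description, url in find_advance_care_documents("12345"):
    print(description, url)
```

The interoperability and standardization challenges described above arise precisely because no such uniform query works across today's systems: a document filed as free text in a physician's notes in one EHR is invisible to a structured search like this one.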
In addition to challenges related to understanding and accessing advance care planning documents, officials from the national stakeholder organizations, state agencies, and state stakeholder organizations we interviewed identified other challenges related to resources and portability of advance care planning documents. State agency officials told us that the lack of dedicated resources for advance care planning efforts, such as maintaining a registry, can be challenging. For example, an Idaho official stated that, due to resource constraints within the Secretary of State’s Office—which administers its Health Care Directive registry—the office does not have the personnel to maintain the registry at current document submission rates. National stakeholder officials discussed challenges with states’ legal structures for accepting advance care planning documents—that is, the portability of documents across state lines. For example, an individual might fill out an advance directive or POLST form in one state, but become ill in another state where these documents may not be valid. Various Strategies Used in Selected States to Improve Individuals’ and Providers’ Understanding of and Access to Advance Care Planning Documents In our four selected states—California, Idaho, Oregon, and West Virginia—state agencies and state stakeholder organizations pursued various strategies to improve individuals’ and providers’ understanding of advance care planning documents, as well as to improve their access to completed advance care planning documents. Selected States Used Education and Training to Increase Understanding of the Need for and Use of Advance Care Planning Documents Officials from state agencies and stakeholder organizations in our selected states described efforts to educate individuals about the importance of advance care planning and train providers on the use of advance care planning documents. Educating Individuals To address individuals’ lack of understanding of advance care planning, state agency officials and stakeholders in our selected states used strategies to inform them about the purpose of the documents and how to fill them out. The following are some examples of these efforts. Oregon. The Oregon POLST Coalition used its relationship with stakeholder groups in the state—a large health system, and the state health authority—to educate individuals about POLST forms. These efforts included online videos and brochures intended to improve individuals’ voluntary and informed completion of the documents. West Virginia. The West Virginia Center for End-of-Life Care—which administers the state’s advance care planning registry—collaborated with the West Virginia Network of Ethics Committees and a national organization to conduct public education presentations and webinars. For three of our selected states, educational efforts also included making information about advance care planning available in other languages. For example, in California, Idaho, and Oregon, POLST forms and other information on advance care planning are available in Spanish. Articles we reviewed stated that providing culturally sensitive documents that communicate how to fill out the documents could help improve voluntary and informed completion of advance care planning documents. 
Training Providers Officials from state agencies and state stakeholder organizations in all four selected states reported conducting provider training, which included working with EMS and hospital providers to train them on advance care planning documents, such as how to use advance directives and POLST forms and when to conduct end-of-life care conversations. The following are examples of these efforts. California. A state stakeholder organization in California conducted train-the-trainer sessions to educate providers about POLST forms, so the providers could subsequently conduct community training events. The organization also published decision aids for providers and individuals to help facilitate advance care planning conversations. The organization, which focused on POLST education and training, noted that it holds periodic conference calls with previous session participants to provide ongoing support and continue discussions about advance care planning. Idaho. The state—through collaborations with stakeholder organizations in Idaho—focused on improving advance care planning through education efforts. Specifically, the state collaborated with stakeholder organizations to conduct trainings on locating and understanding advance care planning documents. In addition, the organizations created EMS protocols related to accessing individuals’ wishes during emergencies. An Idaho official noted that successful advance care planning education and outreach within the state has led to a large increase in the number of advance care planning documents submitted to the state’s registry. Oregon. State stakeholder organizations conducted provider training on advance directives and POLST forms. For example, an organization that focused on improving advance care planning education in the state developed an initiative, which included educational materials and training programs, to improve patient understanding of filling out and updating advance directives through health care organizations and provider training. Further, according to an official from the state health authority, POLST information is included in the curriculum for all medical education in the state ranging from emergency medical technicians to physicians. West Virginia. The West Virginia Center for End-of-Life Care created training manuals, led EMS training webinars, and provided other online education materials to improve provider education about using POLST forms and related protocols in the field. National stakeholder organizations we interviewed and articles we reviewed also noted that increasing the quality of the advance care planning conversations between providers and their patients is an important aspect of successful advance care planning efforts. One strategy to improve the advance care planning conversations is to conduct the conversations over multiple visits, according to national stakeholders and articles. Selected States’ Strategies to Improve Access to Completed Documents Included Interoperability between Electronic Health Records and Registries Officials from state agencies and stakeholder organizations in our selected states utilized strategies to improve access to current advance care planning documents, including better interoperability between EHRs and a state registry, and access to completed documents stored in registries. 
Access in Electronic Health Records Officials from state agencies and stakeholder organizations identified strategies to improve providers’ access to advance care planning documents stored in an EHR and to ensure the EHR has the most current copy of the document. One strategy used in Oregon enabled information sharing between EHR systems and the state’s electronic registry of completed POLST forms, allowing providers access to the most current POLST forms, according to state officials. Certain EHR systems—including those in three large health systems in the state—are interoperable with the state’s electronic POLST registry using bidirectional technology, meaning that the systems are coded in a way that they can seamlessly exchange information with each other. This allows providers to receive updated POLST forms from the registry upon the individual’s admission to the hospital. It also updates the POLST forms in the registry when changes are made in the EHR by the provider in the hospital. The Oregon officials described another strategy taken within a large health system in the state, which allows providers to quickly know whether a patient has an advance directive in an EHR by using a tab in the medical record indicating that the documents are in the EHR. Stakeholder organizations identified other strategies for increasing access to completed advance care planning documents, such as standardizing information. For example, one national stakeholder organization noted that advance care planning documents could be in a standardized location within an EHR to help providers find these documents more easily. Another strategy used in our selected states is the use of a health information exchange to facilitate access to advance care planning documents. According to a West Virginia stakeholder organization, using the state’s health information exchange allowed West Virginia to easily provide authorized individuals with direct access to completed advance care planning documents—both advance directives and POLST forms—in its registry. Access to Registry Information Officials from state agencies and stakeholder organizations also developed strategies to improve access to completed advance care planning documents in their state registries. All four selected states used registries to facilitate access to completed advance care planning documents: two states (Idaho and West Virginia) had registries for both advance directives and POLST forms, one state (California) had an advance directive registry and was piloting an electronic POLST registry in two communities, and the remaining state (Oregon) had a POLST registry. Officials in these states reported strategies to facilitate access through their registries. Below are examples of these strategies. California. To test whether partnering with a health information exchange organization would provide benefits to the state’s POLST eRegistry uptake and expansion, one of the two California communities chosen to pilot the POLST eRegistry was led by a health information exchange. The other community selected for the pilot was led by a for-profit commercial service. According to a California EMS official, using the health information exchange allowed advance care planning documents to be exchanged quickly between ambulances and hospitals. West Virginia. West Virginia’s registry used the state-wide EMS structure, enabling EMS providers to access the information in an individual’s POLST form while en route to an emergency call.
The medical director at the EMS state office noted that EMS providers could call one of its five medical command centers, which could access the registry online to “pre-screen” individuals, to determine whether there was a valid advance care planning document on file. EMS providers then received the individual’s information from the medical command center. According to an official involved with the state registry, authorized individuals—i.e., individuals with a registry-issued username and password—could also directly view registry documents. Oregon. State officials reported using an opt-out strategy for the submission of POLST forms to the state’s registry to help ensure that the information in the registry was current. That is, the state has a legislative mandate for providers to submit all POLST forms to the state’s POLST registry unless the patient elected to opt out of the submission. According to Oregon stakeholders, Oregon attributes the widespread use and adoption of the registry to this strategy. One article noted that, in Oregon, successful access to POLST forms through the registry by EMS providers influenced the treatment of individuals. Oregon officials and stakeholders told us that they have not experienced many challenges related to administering the state’s POLST registry and providing access to completed POLST forms, because they leveraged their existing centralized EMS system and created a state-administered registry that is interoperable and available to all health systems within the state. Oregon officials stated that the state’s registry success is largely attributable to the fact that it was designed to meet the access and workflow needs of both EMS providers in the field and acute care providers. At the federal level, to support state registry efforts, in February 2016, the Centers for Medicare & Medicaid Services (CMS) published a State Medicaid Director letter alerting states to the availability of federal Medicaid funding for the development of and connection to public health systems, such as registries. A July 2018 report by the Office of the National Coordinator for Health Information Technology noted that end-of-life care advocacy groups should consider working with State Medicaid Directors to apply for CMS funding to pilot POLST registries. According to CMS, as of October 2018, one state, Louisiana, received approval to fund an electronic registry for advance directives. Additional Strategies Used in Selected States Address Resource Needs for Advance Care Planning Registries and the Portability of Documents. Officials from state agencies and stakeholder organizations in our selected states discussed the importance of having adequate funding and staff resources to administer their registries. For example, according to an Oregon stakeholder organization, dedicated state funding for the state’s registry allows multiple benefits, such as continuous availability of the registry for individuals and providers. Oregon POLST officials stated that in order to ensure access to individuals’ POLST forms between health systems within a state, they believe POLST registries should be state funded and administered. According to the Office of the National Coordinator for Health Information Technology report and a West Virginia registry official, the state’s registry, which received state funding from 2009 until 2017, functioned as a central source of information on individuals’ wishes, which were recorded in documents such as advance directives and POLST forms, and alleviated multiple access issues.
However, officials involved in receiving and providing registry services reported challenges when the registry did not receive state funding in 2018. As a result, online access to advance directives and POLST forms through the registry was discontinued. In California, officials involved with the POLST eRegistry pilot stated that one goal of the pilot project was to identify potential plans for sustainable funding of a registry. Regarding acceptance of out-of-state advance care planning documents—that is, the portability of documents across state lines—we found that all four selected states have statutes that address the validity of advance care planning documents executed in another state. To ensure individuals’ wishes are honored, according to an American Bar Association official, states need to engage in efforts to develop processes and protocols that will allow advance care planning documents to be accepted between states. While the states’ language varies, all selected states allow use of out-of-state documents. Under Idaho’s statute, out-of-state documents that substantially comply with Idaho’s requirements are deemed to be compliant with Idaho’s statute. California’s, Oregon’s, and West Virginia’s statutes note that out-of-state documents executed in compliance with that state’s laws are valid within their states. For more information on the states’ statutes related to advance care planning, see appendix IV. Agency and Third Party Comments We provided a draft of this report to the Department of Health and Human Services (HHS). HHS provided technical comments, which we incorporated as appropriate. We also provided relevant information from the draft report to state officials and stakeholders in each of the four selected states in our review (California, Idaho, Oregon, and West Virginia), and to one national stakeholder organization (the National POLST Paradigm), and incorporated their technical comments, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, the National Coordinator for Health Information Technology, the National Institute on Aging, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Information on the Extent to Which Individuals Have Advance Directives Officials from the National Institutes of Health’s National Institute on Aging, the Centers for Disease Control and Prevention’s National Center for Health Statistics, and representatives of national stakeholder organizations identified specific surveys and a comprehensive national study of the prevalence of individuals who have completed advance directives. Table 1 provides information from selected research on the prevalence of advance directives. Table 2, below, shows the percentage of individuals age 65 and older responding to the Health and Retirement Survey who reported having a living will or power of attorney in 2012, 2014, and 2016.
Appendix II: Types of Information Found on a POLST Form Physician orders for life-sustaining treatment (POLST) forms are different in each state, and the order of the sections or the options within a section may differ. However, according to the National POLST Paradigm, POLST forms cover the same information. Information about the forms, including sections on cardiopulmonary resuscitation (CPR), medical interventions, artificially administered nutrition, and signatures, is provided below. Section A: Cardiopulmonary Resuscitation This section only applies when the individual is unresponsive, has no pulse, and is not breathing. This is similar to a do-not-resuscitate order, but the individual only has a do-not-resuscitate order when they do not want CPR. The POLST form allows individuals to clearly show they do want CPR. If this is left blank, the standard protocol is for emergency personnel to provide CPR if medically indicated. (See fig. 3.) Section B: Medical Interventions This section gives medical orders when CPR is not required, but the individual still has a medical emergency and cannot communicate. There are three options and a space for a health care professional to write in orders specific for the individual. Care is always provided to individuals. This section is for letting emergency personnel know what treatments the individual wants to have. (See fig. 4.) 1. Full treatment. The goal of this option is to provide all treatments necessary (and medically appropriate) to keep the individual alive. In a medical emergency, individuals want to go to the hospital and, if necessary, be put in the intensive care unit and on a breathing machine. 2. Limited treatment / select treatment. The goal of this option is to provide basic medical treatments. These individuals want to go to the hospital, but do not want to be put in the intensive care unit or on a breathing machine. They are okay with antibiotics and intravenous fluids. 3. Comfort measures only. The goal of this option is to focus on making the individual as comfortable as possible where they are. These individuals do not want to go to the hospital. If the individual’s comfort cannot be taken care of where they are, transfer to the hospital may be necessary. According to the National POLST Paradigm, in many states, if an individual chooses CPR—or leaves Section A blank—the individual is required to choose “Full Treatment” in Section B. This is because CPR usually requires intubation and a breathing machine, which are only options under “Full Treatment.” If an individual has a medical emergency, but does not want CPR, this is the section emergency personnel will look at to see whether the individual wants to go to the hospital or not (for Full Treatment and Limited Interventions: yes; for Comfort Measures Only: no). If the individual only has a do-not-resuscitate order, emergency personnel would take them to the hospital. Section C: Artificially Administered Nutrition This section is where orders are given about artificially administered nutrition (and in some states artificially administered hydration) for when the individual cannot eat. All POLST forms note that individuals should always be offered food by mouth, if possible. (See fig. 5.) Other Section: Signatures Health care professional. Since this document is a medical order, a health care professional is required to sign it in order for it to be valid. Which health care professionals can sign (e.g., physician, nurse practitioner) varies by state. 
The document has a statement saying that, by signing the form, the health care professional agrees that the orders on the document match what treatments the individual said they wanted during a medical emergency based on their current medical condition. Patient or surrogate. According to the National POLST Paradigm, most states require the patient or the surrogate to sign this form. This helps to show the patient or surrogate was part of the conversation and agrees with the orders listed on the form. Backside of a POLST Form The backside of the POLST form has directions and information, usually for health care professionals. Other information it may have includes information on how to void a POLST form; contact information for surrogates; and information on who completed the POLST form. Appendix III: Information on CMS’s Promoting Interoperability Programs Related to Advance Care Planning Documents This appendix provides information about incentive programs provided by the Centers for Medicare & Medicaid Services (CMS) to encourage providers to use electronic health records related to advance care planning documents. CMS provided incentive payments to eligible providers who reported certain measures through its Medicare electronic health records (EHR) Incentive Program (meaningful use program), which started in 2011. At certain points in the program, measures related to advance care planning were optional measures. In 2017, eligible professionals (physicians) began reporting “promoting interoperability” measures through the Merit-based Incentive Payment System (MIPS). The American Recovery and Reinvestment Act of 2009 established the Medicare and Medicaid EHR Incentive Program. This program provided incentive payments for certain eligible providers—certain hospitals and physicians—that successfully demonstrated meaningful use of certified EHR technology and met other program requirements established by CMS. The program was implemented in three stages—measures were established at each stage to promote the use of EHRs in the delivery of health care and to ensure that providers capture information in their EHRs consistently. For example, one measure assessed whether providers have the technical capability in their EHRs to notify the provider of potential interactions among the patients’ medications and with patients’ allergies. In all three stages of meaningful use, providers had to report certain mandatory or core measures, as well as on a set of optional or menu measures. The recording of advance directives was not included as a mandatory measure for eligible providers during any stage of meaningful use. For stages 1 and 2 of meaningful use (2011 through 2015) the recording of advance directives was an optional measure, meaning hospitals could choose to report it or could choose to report a different measure. This optional measure for eligible hospitals was a yes/no measure of whether users could record whether a patient has an advance directive. In October 2015, CMS released the stage 3 final rule that also modified elements of stage 2 reporting; this modification eased reporting requirements and aligned them with other quality reporting programs, according to agency officials. For both modified stage 2 and stage 3 (2015 through 2017), the original advance directive measures were no longer included. 
CMS noted that a goal for stage 3 measures was to include more advanced EHR functions, and one stage 3 measure addressed capturing and incorporating a broad range of data into the EHR, including advance directives. One national stakeholder organization recommended a measure to ensure that, if there are any advance care planning documents in the medical record, the documents be accessible to all health care providers. CMS noted that advance care planning directives can be included in the notes and that this capability is addressed by certification requirements applicable to EHRs. Participants in these CMS programs must use certified EHR technology, which is technology that has been determined to conform to certification criteria developed by the Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology. The 2015 certified EHR technology criteria—the most recent edition—include a criterion that relates to advance care planning documents. The Medicare Access and CHIP Reauthorization Act of 2015 established the Quality Payment Program, which consolidated components of three previously used payment incentive programs, including the Medicare EHR Incentive Program, into MIPS. Under the MIPS program, which affects clinician payments beginning in 2019, participating clinicians will generally be assessed in four areas, one of which is the “promoting interoperability” performance category that aims to achieve the same objectives as the original meaningful use program. MIPS-eligible clinicians report measures and activities to earn a score in the performance categories. Under the “improvement activities” performance category, one optional activity—advance care planning—covers items such as implementation of practices or processes to develop advance care planning that includes documenting the advance care plan or living will, and educating clinicians about advance care planning. Clinicians who meet the criteria for this activity can report this advance care planning activity to earn credit for the “improvement activities” performance category. Further, the advance care planning activity could earn bonus points in the “promoting interoperability” category, if the activity was conducted using certified EHR technology in 2017 and 2018. Appendix IV: Selected State Statutes Related to Advance Care Planning Documents Our four selected states—California, Idaho, Oregon, and West Virginia—had statutes with similar provisions that affected access to advance care planning documents; however, the statutes differed in the specificity of these provisions. This appendix provides information on provisions related to (1) document execution requirements, such as signature and witness requirements; (2) the validity of other advance care planning documents; (3) provider objections to advance care planning directions; and (4) provider liability protections. Document Execution Requirements Statutes in the four selected states required advance care planning documents to contain specific elements for the documents to be valid. The document requirements included the following: Signature requirements. All four selected states required individuals or designated representatives to sign the advance care planning document for the document to be legally valid. In addition, California allows individuals to sign the documents with a digital signature. Witness requirements.
Three of the states (California, Oregon, and West Virginia) have statutes that require at least one witness to be present during the completion of advance care planning documents for that document to be legally valid. These states varied regarding the relationship the witness could have with the individual and number of required witnesses. For example, for advance care planning documents that were signed by witnesses, California required that at least one of the witnesses not be related to the individual by blood, marriage, or adoption, nor be entitled to any portion of the individual’s estate upon the individual’s death under an existing will. In contrast, according to state officials in Idaho, the state removed witness requirements from its advance care planning documents in 2012 to make the documents easier to complete. Format of Advance Care Planning Documents All four selected states’ statutes contained model forms that could be used as a valid advance care planning document. All of the states contained provisions regarding the acceptance of documents other than the forms set out in statute. A document other than the model form is valid if it includes required statutory elements (e.g., signature requirements). For example, in Idaho, the document must be substantially like the model form or contain the elements laid out in the statute. In Oregon, the advance directive statute states that, except as otherwise provided, Oregon residents’ advance directives must be the same as the statutory model form to be valid. Provider Objections to Advance Care Planning Directions All four selected states’ advance care planning statutes had provisions related to provider objections—the statutes address situations in which the provider is unable or unwilling to comply with advance care planning directions. However, the statutes varied on the grounds for provider objection, the required steps to be taken, and the extent to which providers were responsible for taking those steps. For example, California’s and Idaho’s statutes allow providers to object on ethical and professional grounds; and California’s, Idaho’s, and West Virginia’s statutes allow providers to object on reasons of conscience. In addition, the four states’ statutes specified the steps that providers or health systems must take after an objection is made. For example, all four selected states require that specified steps be taken with regard to transferring the individual to a provider that will honor their wishes. Further, California and Oregon explicitly require patient or health care representative notification as soon as provider objections are made. Provider Liability Protections All four states also had statutes that addressed the circumstances under which providers would not be subject to civil or criminal liability, or professional disciplinary action with regard to administering advance care planning documents and directions. The states’ statutes varied with regard to the actions that were covered under these liability provisions. For example, California’s statute addresses situations in which a provider or institution either complied with or objected to the directions provided in advance care planning documents, while Idaho’s, Oregon’s, and West Virginia’s statutes only addressed situations in which providers and other parties complied in good faith with the directions. 
Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kim Yamane (Assistant Director), Shirin Hormozi (Analyst-in-Charge), Leia Dickerson, Drew Long, Ian P. Moloney, Monica Perez-Nelson, and Vikki Porter made key contributions to this report.
Why GAO Did This Study Many individuals receive medical care for a serious or life-limiting condition during the last months of life, which may involve making difficult decisions about life-sustaining treatment. Advance care planning helps ensure that physicians, families, and friends have documentation outlining individuals' wishes under these circumstances. GAO was asked to identify issues related to completing and accessing advance care planning documents. This report describes, among other things, (1) the challenges individuals and providers face completing and accessing the documents, and (2) selected states' strategies for improving individuals' and providers' understanding of and access to advance care planning documents. GAO reviewed documents and interviewed officials from national stakeholder organizations involved in advance care planning or aging issues, and conducted a literature review of relevant articles published from January 2012 to April 2018 in peer-reviewed and other publications. In addition, GAO interviewed officials from state agencies and stakeholder organizations in California, Idaho, Oregon, and West Virginia. GAO selected those four states because they were active in encouraging advance care planning and had registries for completed documents that were in different stages of development. The Department of Health and Human Services, states, and stakeholders provided technical comments on a draft of this report, which GAO incorporated as appropriate. What GAO Found Advance care planning documents—including advance directives and physician orders for life sustaining treatment (POLST)—allow individuals to express their wishes for end-of-life care. Advance directives, which include living wills and health care power of attorney, provide direction regarding care when an individual becomes incapacitated. POLST documents are appropriate for seriously ill individuals whose health status indicates the need for medical orders to be documented in their medical records. Stakeholders from national organizations and officials in the four states GAO selected to review cited several challenges—affecting both individuals and health care providers—related to the use of advance care planning documents. In particular, they noted a lack of understanding about how to complete the documents and how to initiate conversations about advance care planning. They also cited challenges related to the difficulty of ensuring access to completed documents when needed, such as in an emergency situation. Officials from state agencies and stakeholder organizations in the four selected states reported pursuing various strategies to improve understanding of advance care planning documents by conducting education efforts for individuals and providers. In addition, the states utilized strategies to improve access to completed documents, such as improving the electronic exchange of information between health records and a state registry, which is a central repository intended to improve access to the documents. Further, stakeholder officials reported strategies related to the acceptance of out-of-state advance care planning documents; all four selected states had statutory provisions that address the validity of documents executed in another state.
Background Air Force Aircraft Maintenance Specialties Air Force aircraft maintainers are assigned to a specific maintenance specialty and, in some cases, also to a specific aircraft on which they are qualified to perform maintenance. As of April 2018, the Air Force had 37 enlisted maintenance specialties, each designated by an Air Force Specialty Code. See table 1 for examples of various Air Force maintenance specialties and examples of aircraft specific to those specialties, if applicable. Maintainer Training Process and Skill Level Advancement According to officials, following basic training, most airmen assigned to the aircraft maintenance career field attend some portion of technical school at Sheppard Air Force Base in Texas. Depending on the maintenance specialty, some maintainers may continue their technical training at a second location. For example, maintainers specializing in the F-35 complete additional training at Eglin Air Force Base in Florida after completing initial courses at Sheppard Air Force Base. Maintainers spend anywhere from 23 to 133 academic days in technical school learning about aircraft maintenance fundamentals and their specific maintenance specialties through a mix of classroom instruction and hands-on training. Hands-on training is conducted on both partially functioning components of aircraft—called “trainers”—that replicate tasks on working aircraft, and on ground instructional training aircraft. Figure 1 shows various training equipment used by maintainers during technical school. Air Force aircraft maintainers complete technical school as 3-levels, or apprentices. Maintainers are eligible to advance to the 5-level (journeyman) after completing additional coursework and a minimum of 12 months of on-the-job training. According to Air Force data, depending on the maintenance specialty, it takes an average of 1 to 2 years to advance to the 5-level. Maintainers are eligible to enter upgrade training to advance to the 7-level after being selected for the rank of Staff Sergeant. According to Air Force officials, the average time in service for promotion selection is 4.4 years. The 7-level is achieved by completing additional coursework and a minimum of 12 months of on-the-job training. Depending on the maintenance specialty, it takes maintainers an average of 1 to 2 years after entering upgrade training to advance to the 7-level. Figure 2 shows an overview of the Air Force’s aircraft maintainer training process and skill-level advancement. Air Force Process for Determining Maintainer Positions Department of Defense (DOD) Directive 1100.4 states that staffing requirements are driven by workload and shall be established at the minimum levels necessary to accomplish mission and performance objectives. In addition, assigned missions shall be accomplished using the least costly mix of personnel (military, civilian, and contract) consistent with military requirements and other needs of DOD as prescribed in Title 10, United States Code. Air Force officials reported that they fill their requirements based on the number of those requirements that are funded—called authorized staffing levels—and the number of trained and qualified personnel available to be staffed to those positions. In this report, we refer to the number of maintainers available to fill authorized staffing levels as actual staffing levels. The Air Force uses the Logistics Composite Model to determine maintainer staffing requirements.
The model is a statistical simulation that estimates monthly labor-hours and personnel required to accomplish direct maintenance tasks. According to an Air Force official, locations are staffed according to the worldwide average for each particular maintenance specialty. For example, if the crew chief maintenance specialty worldwide is staffed at 88 percent, the Air Force would staff each overseas Major Command at 88 percent and distribute those resources to ensure the bases are staffed at that worldwide average, followed by domestic locations. An Air Force official stated that there are a number of reasons why a particular location may be staffed below or over the worldwide average, such as early releases from tours. Commercial Aviation Industry and Airframe and Power Plant Certificates Maintainers in the commercial aviation industry are commonly employed by commercial air carriers, corporate flight departments, repair stations, or manufacturers of aircraft or aircraft components. Aircraft mechanics inspect, service, and repair aircraft bodies (airframe) and engines (power plant). Aircraft mechanics can earn a mechanic certificate from the Federal Aviation Administration with an airframe rating, power plant rating, or combined airframe and power plant rating, and are referred to as certificated mechanics. According to Federal Aviation Administration data, almost all certificated mechanics hold airframe and power plant ratings. Certification is not necessary to work as an aircraft mechanic; however, without it, a mechanic cannot approve an aircraft for return to service and must be supervised by a certificated mechanic. Certificated mechanics who hold airframe and power plant ratings generally earn a higher wage and are more desirable to employers than mechanics who are not certificated, according to the Bureau of Labor Statistics. For an applicant to be authorized to take the mechanics examination for the combined airframe and power plant rating, the applicant must either (1) complete a Federal Aviation Administration-certificated aviation maintenance technician school, and demonstrate and document relevant airframe and power plant work experience gained through on-the-job training, or (2) demonstrate and document work experience or some combination of work experience and education gained through the military working with airframes and engines. Since 2002, the Community College of the Air Force has administered the Federal Aviation Administration-approved Joint Services Aviation Maintenance Technician Certification Council (the Joint Services Council) program that, upon completion, confers a certificate of eligibility—equivalent to a training program diploma—to take the airframe and power plant exam. According to Community College of the Air Force officials, although the airframe and power plant certificate is not required for Air Force maintainer work, it does benefit maintainers' potential career prospects. The Joint Services Council's program is available to members of all services who have attained minimum requirements in aviation maintenance—typically after 3 years of experience in a related position—and includes three self-paced courses taken online in addition to on-the-job training. Additionally, the Air Force has established its Credentialing Opportunities On-Line program to help airmen find information on certifications and licenses related to their jobs.
The program requires that the credentials be accredited and be sought after within their industry or sector as a recognized, preferred, or required credential. The program also provides some funding assistance in obtaining airframe and power plant certificates. The Air Force Has Significantly Reduced Overall Aircraft Maintainer Staffing Gaps but Continues to Lack Experienced Maintainers Since fiscal year 2016, the Air Force has taken steps to significantly reduce the gap between actual aircraft maintainer staffing levels and authorized levels, a gap that exceeded 4,000 maintainers in fiscal year 2015. However, gaps remain for experienced maintainers—those at the 5- and 7-levels who are most qualified to meet mission needs. The Air Force's reserve component has also experienced aircraft maintainer staffing gaps over the past 8 fiscal years, although the Air National Guard's gaps have been more consistent and significant than those of the Air Force Reserve Command. The Air Force Has Made Significant Reductions to Overall Aircraft Maintainer Staffing Gaps Since fiscal year 2016, the Air Force has taken steps to significantly reduce overall enlisted aircraft maintainer staffing gaps. According to our analysis of Air Force data, for all aircraft maintenance specialties combined, the Air Force reduced the gap between actual staffing levels and authorized levels from a peak of 4,016 maintainers (94 percent of authorized levels filled) in fiscal year 2015 to 745 maintainers (99 percent) in fiscal year 2017. In addition to a reduction in overall gaps, the number of maintenance specialties experiencing staffing gaps also decreased over this period. Specifically, while 12 maintenance specialties had actual staffing levels that were less than 90 percent of authorized levels in fiscal year 2015, only 4 did in fiscal year 2017. Additionally, in fiscal year 2017, actual staffing levels for 18 of the Air Force's maintenance specialties met or exceeded authorized levels. While the Air Force had a surplus of 1,705 maintainers in fiscal year 2010 (103 percent of authorized levels filled), actual staffing levels decreased to 99 percent of authorized levels in fiscal year 2011, and continued to decrease through fiscal year 2015. Air Force officials attributed these staffing gaps to an increase in authorized positions—due to the acquisition of the F-35 and increased maintenance needs for legacy aircraft, such as the F-15, F-16, and B-52—and a decrease in actual staffing levels, due to a reduction in end strength from fiscal years 2014 through 2015. These officials stated that the Air Force reduced its actual maintainer staffing levels through involuntary separations and reduced accessions due, in part, to the planned divestiture of the A-10 and other aircraft. However, these officials stated that the divestiture did not occur, which contributed to further staffing gaps. Since fiscal year 2016, the Air Force has taken a number of steps to reduce aircraft maintainer staffing gaps, such as increasing accessions and, beginning in fiscal year 2017, contracting out some maintenance positions. The Air Force also issued memorandums in August 2016 and September 2017 that restricted the ability of certain maintainers to retrain to a career field outside of aircraft maintenance. Additionally, from fiscal years 2016 through 2018, through the High Year of Tenure Extension Program, the Air Force extended the maximum number of years that maintainers in certain maintenance specialties could remain on active duty.
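The staffing-gap arithmetic used throughout this section reduces to two quantities, gap and fill rate, as in the short Python sketch below. The authorized and actual counts are hypothetical values chosen only to approximate the fiscal year 2015 and 2017 figures cited above.

```python
# Illustrative sketch of the staffing-gap arithmetic used in this report:
# gap = authorized - actual; fill rate = actual / authorized.
# Counts are hypothetical, chosen to approximate the reported FY2015 gap
# (4,016; 94 percent filled) and FY2017 gap (745; 99 percent).

def staffing_gap(authorized: int, actual: int) -> tuple[int, float]:
    """Return the gap (negative means surplus) and the fill rate."""
    return authorized - actual, actual / authorized

for fy, authorized, actual in [(2015, 66_900, 62_884), (2017, 67_000, 66_255)]:
    gap, fill = staffing_gap(authorized, actual)
    print(f"FY{fy}: gap of {gap:,} maintainers ({fill:.0%} of authorized levels filled)")
```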
In October 2018 testimony, the Secretary of the Air Force stated that the Air Force planned to eliminate the overall maintainer staffing gap by December 2018. Air Force officials acknowledged that while staffing levels have started to improve since the reduction in end strength, they anticipate that the Air Force will continue to experience maintainer staffing gaps off and on through fiscal year 2023, when the gap is projected to be about 500 maintainers, due, in part, to an increase in F-35 maintenance requirements. According to these officials, this estimate is based on recruitment cycles and retention trends, and could change if there are any programmatic changes, such as the addition or divestment of any aircraft types. Over the past 8 fiscal years, the Air Force has accepted some level of risk in deciding how much of its maintainer requirements to fund. For example, according to our analysis, from fiscal years 2010 through 2017, the Air Force authorized or funded 95 to 97 percent of its maintainer requirements across maintenance specialties—that is, about 1,800 to 3,900 requirements were not funded each year. According to DOD officials, across all Air Force specialties, decisions have to be made about how to fund requirements, and it is not uncommon for authorized levels to fall below requirements. Figure 3 compares the Air Force's active component aircraft maintainer staffing levels, authorized levels, and requirements for all maintenance specialties combined over the past 8 fiscal years. Air Force officials acknowledged that when taking into account increases in requirements—due in part to aging aircraft systems—maintainer staffing gaps have been higher than reported. Specifically, while the gap between actual and authorized staffing levels exceeded 4,000 maintainers in fiscal year 2015, when considering the number of requirements that were not funded, the gap was about 5,800 maintainers. Moreover, while maintainer requirements increased by about 1,200 between fiscal years 2015 and 2017, the number of authorized positions only increased by 120. The Air Force Continues to Have Staffing Gaps of Experienced Aircraft Maintainers Our analysis of Air Force data found that the Air Force has had staffing gaps of experienced aircraft maintainers—those at the 5- and 7-levels—in 7 of the past 8 fiscal years. While the Air Force's actual maintainer staffing levels were 99 percent of authorized levels in fiscal year 2017, the 3-level was the only skill level without a staffing gap. Specifically, in fiscal year 2017, the Air Force had a gap of 2,044 5-level maintainers (94 percent of authorized levels filled) and a gap of 439 7-level maintainers (97 percent). However, the Air Force had a surplus of 1,745 3-level maintainers (112 percent). Figure 4 compares, by skill level, actual aircraft maintainer staffing levels with authorized levels for all active component maintenance specialties combined over the past 8 fiscal years. In fiscal years 2015 and 2016, the Air Force had significant gaps of 3-level maintainers—3,536 and 2,401, respectively—due to a decrease in accessions as part of its reduction in end strength. Air Force officials stated that these previous staffing gaps of 3-level maintainers have contributed to the current staffing gap of 5-level maintainers, since maintainers who were at the 3-level in fiscal years 2015 and 2016 would have likely upgraded to the 5-level by fiscal year 2017.
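The two gap measures contrasted earlier in this section differ only in the baseline used, as the sketch below illustrates. All counts are hypothetical approximations of the fiscal year 2015 figures cited above.

```python
# Sketch of the two gap measures: gap against funded (authorized) levels
# versus gap against total requirements. All counts are hypothetical
# approximations of the fiscal year 2015 figures.

requirements = 68_700  # hypothetical; roughly 97 percent of this was funded
authorized = 66_900    # hypothetical
actual = 62_884        # hypothetical

print(f"Gap vs. authorized levels: {authorized - actual:,}")      # ~4,000
print(f"Gap vs. total requirements: {requirements - actual:,}")   # ~5,800
print(f"Share of requirements funded: {authorized / requirements:.0%}")
```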
These officials stated that, similarly, the current staffing gap of 5-level maintainers is expected to contribute to an increase in the size of the 7-level maintainer staffing gap over the next few fiscal years. In fiscal year 2017, certain maintenance specialties and aircraft faced greater experience gaps than others. For example, the advanced fighter aircraft integrated avionics specialty had a gap of 140 7-level maintainers (70 percent of authorized levels filled) and a gap of 56 5-level maintainers—all specifically trained on the F-35 (78 percent). In contrast, the aerospace ground equipment specialty had a surplus of 28 7-level maintainers (104 percent). Table 2 shows authorized versus actual staffing levels for select active component maintenance specialties and aircraft, by skill level, in fiscal year 2017. Air Force officials stated that it is important to have a balance of maintainer experience levels, but noted that current experience imbalances cannot be corrected as quickly as overall staffing gaps because rebuilding experience takes time. As previously discussed, depending on the maintenance specialty, the average time to upgrade from a 3-level to a 5-level ranges from 1 to 2 years, and the average time to upgrade from a 5-level to a 7-level after entering upgrade training is 1 to 2 years. Air Force officials highlighted that there is no substitute for experience. Noting that new 3-level maintainers will initially lack the experience and proficiency needed to meet mission needs—and will require supervision to oversee their technical progression—the Air Force has taken steps to ensure that experienced maintainers are assigned to maintenance roles that will improve operational readiness and influence the growing workforce. Specifically, the Air Force Deputy Chief of Staff for Logistics, Engineering and Force Protection issued a memorandum in July 2016 to all of the Major Command Vice Commanders noting the importance of maximizing utilization of experienced maintenance personnel in mission generation and repair network jobs. Air Force officials stated that it is critical that experienced maintainers be in the field training the surplus of new 3-level maintainers and getting them the experience they need. In addition, beginning in fiscal year 2017, in order to retrain 600 experienced maintainers on the F-35, the Air Force contracted out some aircraft maintenance for three legacy aircraft in certain locations. These maintenance contracts are to run from fiscal years 2017 through 2020. The Air National Guard Has Had Consistent Aircraft Maintainer Staffing Gaps, While Air Force Reserve Gaps Have Been Smaller Over the past 8 fiscal years, the Air Force's reserve component has also experienced aircraft maintainer staffing gaps; however, the Air National Guard's gaps have been more consistent and significant than those of the Air Force Reserve Command. Figure 5 compares actual aircraft maintainer staffing levels with authorized levels for the Air National Guard and the Air Force Reserve Command over the past 8 fiscal years. According to our analysis, the Air National Guard has had consistent aircraft maintainer staffing gaps from fiscal years 2010 through 2017—ranging from 84 percent to 89 percent of authorized levels filled. In fiscal year 2017, the Air National Guard had a staffing gap of 3,219 maintainers (87 percent of authorized levels filled), which was spread roughly evenly across 5- and 7-level maintainers.
The Air National Guard's staffing gaps have persisted despite a significant decrease in authorizations over this period. Specifically, the Air National Guard's authorized positions decreased from 28,654 in fiscal year 2010, to 24,198 in fiscal year 2017. Air National Guard officials stated that the decrease in authorizations is a result of mission and aircraft changes—in particular, while the Guard has increased its use of unmanned aerial systems, it primarily relies on contract maintenance for those systems, reducing the need for Air Force maintainers. In comparison, the Air Force Reserve Command experienced smaller maintainer staffing gaps over the past 8 fiscal years. According to our analysis, the percent of authorized levels filled ranged from a low of 95 percent in fiscal year 2010 (a gap of 733 maintainers), to a high of 103 percent in fiscal year 2013 (a surplus of 514). In fiscal year 2017, the Air Force Reserve Command had an overall staffing gap of 374 maintainers (97 percent of authorized levels filled), which primarily consisted of 7-level maintainers. Specifically, in fiscal year 2017, the Air Force Reserve Command had a gap of 777 7-level maintainers (89 percent of authorized levels filled), and a surplus of 566 5-level maintainers (108 percent). Officials from both the Air National Guard and the Air Force Reserve Command stated that aircraft maintainer staffing levels differ by wing and location. For example, Air Force Reserve Command officials noted that maintainer requirements have recently increased at certain Air Force bases due to the arrival of fifth-generation fighter aircraft, and that while those locations are working to increase their maintainer staffing levels, they are currently below authorized levels. Air Force Reserve Command officials identified a strong economy with multiple civilian employment opportunities, disparities in active duty versus technician pay, and long hiring processes as factors affecting its full-time maintainer staffing levels. As a result, these officials noted that they are looking at ways to improve maintainer retention. Air National Guard officials stated that any maintainer-specific recruitment or retention challenges would be identified and addressed at the local level and that, as a result, they were unable to describe challenges Air National Guard-wide. The Air Force Has Increasingly Lost Experienced Aircraft Maintainers and Does Not Have Goals and a Strategy to Improve Retention The Air Force has had challenges retaining experienced maintainers, with loss rates of 5-level maintainers increasing over the past 8 fiscal years. While the commercial aviation industry is experiencing similar staffing challenges, the effects of these challenges on the Air Force's maintainer workforce are unknown. In addition, since fiscal year 2015, the Air Force has increased retention bonuses to improve retention among certain critical maintenance specialties, but the Air Force does not have retention goals or an overall strategy to help retain maintainers and sustain recent staffing level improvements. Air Force Losses of Experienced Maintainers Have Increased since Fiscal Year 2010 The Air Force monitors maintainer retention through loss rates—the percentage of maintainers who leave the career field or the Air Force during a given fiscal year for reasons such as separation or retirement—and reenlistment rates, according to Air Force officials.
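A minimal sketch of the two retention metrics as defined in this report follows; the counts fed to each function are hypothetical placeholders.

```python
# Minimal sketch of the two retention metrics defined in this report.
# All counts below are hypothetical placeholders.

def loss_rate(losses_during_fy: int, maintainers_at_fy_start: int) -> float:
    """Share of maintainers who left the career field or the Air Force."""
    return losses_during_fy / maintainers_at_fy_start

def reenlistment_rate(reenlisted: int, eligible_to_reenlist: int) -> float:
    """Share of reenlistment-eligible maintainers who reenlisted."""
    return reenlisted / eligible_to_reenlist

print(f"Loss rate: {loss_rate(7_200, 60_000):.0%}")                    # e.g., 12 percent
print(f"Reenlistment rate: {reenlistment_rate(7_340, 10_000):.1%}")    # e.g., 73.4 percent
```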
Our analysis of Air Force data found that overall enlisted aircraft maintainer loss rates have remained relatively stable over the past 8 fiscal years. Specifically, overall loss rates ranged from 9 to 10 percent—mirroring overall enlisted loss rates across the Air Force—with the exception of fiscal year 2014, when the loss rate was 13 percent due, in part, to reductions in end strength. Air Force officials stated that they need to retain more maintainers than in past fiscal years to help address experience gaps. However, losses of experienced maintainers—those at the 5-level—have increased. Specifically, loss rates among 5-level maintainers increased from 9 percent in fiscal year 2010 to 12 percent in fiscal years 2016 and 2017. Loss rates of 7-level maintainers were 8 and 9 percent in fiscal years 2016 and 2017, respectively. Figure 6 compares, by skill level, active component maintainer loss rates with loss rates for all Air Force enlisted personnel over the past 8 fiscal years. While loss rates of 7-level maintainers were comparable to overall maintainer loss rates in fiscal years 2016 and 2017, Air Force officials expect those rates to increase over the next few fiscal years due to changes in reenlistment behaviors and the current staffing gap of 5-level maintainers. According to our analysis of Air Force data, overall reenlistment rates for aircraft maintainers have generally decreased since fiscal year 2010, from a peak rate of 82 percent in fiscal year 2011, to a low of 73.4 percent in fiscal year 2017—similar to reenlistment rates for all Air Force enlisted personnel. Over this period, reenlistment rates decreased most significantly for maintainers making their first reenlistment decision—from 70.5 percent in fiscal year 2010, to 58.3 percent in fiscal year 2017. Reenlistment rates at the second reenlistment decision point decreased as well—from 88 percent in fiscal year 2010, to 81.3 percent in fiscal year 2017. Table 3 provides reenlistment rates for active component aircraft maintainers over the past 8 fiscal years. In 2015 and 2017, the Air Force conducted aircraft maintenance retention surveys to identify opportunities to improve career experiences and job satisfaction, and to understand retention drivers. Air Force officials stated that these surveys and reports are used as informational tools, but that they are researching methods to examine specific concerns in more depth. Maintainers who responded to the 2017 survey cited job stress, overall job satisfaction, and satisfaction with the career field as top factors influencing them to leave the Air Force. Survey respondents also stated that military benefits, the retirement program, and job security were the top reasons to remain in the Air Force. The survey also found that mid-tier enlisted personnel—Senior Airmen, Staff Sergeants, and Tech Sergeants—reported lower levels of satisfaction with leadership than did higher enlisted ranks. Participants in all five of our discussion groups with maintainers cited job dissatisfaction as a factor affecting their reenlistment decisions. Specifically, participants discussed the stress of the job, physical toll of the work, heavy workload, and undesirable working conditions. In addition, participants in all discussion groups noted challenges in providing on-the-job training to the large number of 3-level maintainers arriving at their squadrons due to staffing gaps of 5- and 7-level maintainers—who are needed to supervise that training.
Participants stated that the lack of experienced maintainers has increased workloads and stress levels, which may negatively affect reenlistment decisions. Some participants in all five discussion groups were interested in retraining into other specialties outside of aircraft maintenance as a way to continue their Air Force careers. However, as previously discussed, since 2016, the Air Force has placed certain restrictions on retraining to non-maintenance career fields in an effort to address maintainer staffing challenges. Hiring Difficulties May Exist in the Commercial Aviation Industry, but Their Effects on the Air Force's Maintainer Workforce Are Unknown According to our analysis of Bureau of Labor Statistics data from 2012 through 2017, the unemployment rate, employment, and wage earnings for the aircraft mechanic and service technician occupation and the aerospace engineer occupation were consistent with the existence of hiring difficulties. While no single metric can be used to say whether a labor shortage exists, it is possible to look at certain "indicators" in conjunction with views of stakeholders. Specifically, we previously found that according to economic literature, if a job shortage were to exist, one would expect (1) a low unemployment rate signaling limited availability of workers in that profession, (2) increases in employment due to increases in demand for that occupation, and (3) increases in wages offered to draw people into that profession. Table 4 shows these indicators, measured using the Bureau of Labor Statistics' Current Population Survey, for 2012 to 2017, the period since we last reported. As table 4 indicates, the direction of all three of these indicators is consistent with difficulty in hiring both aircraft mechanics and aerospace engineers. However, the indicators should be viewed with appropriate caveats. First, from 2012 to 2017, median wages for aerospace engineers and aircraft mechanics increased at a greater percentage than wages for all occupations, approximately 1.5 and 2.0 percent per year, respectively, compared to about 1 percent for all occupations. However, while median wages increased for aerospace engineers and aircraft mechanics over this entire period, they did not increase in every year and exhibited swings of as much as 13 percent. Second, from 2012 to 2017, employment for aerospace engineers and aircraft mechanics increased by approximately 1.3 and 1.2 percent per year, respectively. In comparison, for all occupations, employment increased by about 2 percent per year over this period. Finally, over this period, the unemployment rate for aerospace engineers and aircraft mechanics averaged approximately 1.5 and 2.5 percent, respectively, compared to about 6 percent for all occupations. In addition, according to the Bureau of Labor Statistics Occupational Outlook Handbook, overall employment of aircraft and avionics equipment mechanics and technicians is projected to grow 5 percent from 2016 to 2026, about as fast as the average for all occupations. Job opportunities are expected to be good because there will be a need to replace those workers leaving the occupation. Industry stakeholders we spoke with anticipate similar growth in demand for labor, and cited ways companies were recruiting maintainers into the industry, such as raising wages, incorporating additional training, and paying maintainers during their airframe and power plant certificate coursework.
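The per-year growth figures in the caveats above follow from a standard compound-growth calculation, sketched below with placeholder wage values rather than actual Bureau of Labor Statistics data.

```python
# Sketch of the annualized-growth computation behind the indicator analysis
# (e.g., wages rising roughly 1.5 to 2.0 percent per year from 2012 to 2017).
# Wage values are placeholders, not Bureau of Labor Statistics data.

def annualized_growth(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two observations."""
    return (end_value / start_value) ** (1 / years) - 1

wage_2012, wage_2017 = 1_000.0, 1_104.0  # hypothetical median weekly earnings
print(f"Wage growth: {annualized_growth(wage_2012, wage_2017, years=5):.1%} per year")
```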
The effects of the commercial aviation industry's hiring difficulties on the Air Force's maintainer workforce are unknown. Air Force officials stated that the Air Force has not assessed the effects, and that while some maintainers will leave the Air Force to work for the commercial aviation industry, they do not believe it is an overarching issue. However, Air National Guard and Air Force Reserve Command officials noted that a base's location, in particular its proximity to commercial aviation industry opportunities, may affect its ability to recruit and retain maintainers. While the industry stakeholders we spoke with noted that military maintainers are attractive to the commercial aviation industry because of their previous training, work ethic, and discipline, they also noted challenges in recruiting military maintainers. Specifically, one stakeholder stated that many military maintainers require training similar to that of their non-military peers for private sector positions, citing the specificity of training military maintainers receive compared to the broader approach taken by the commercial aviation sector. Only one study we identified through our literature search examined the potential effects of the commercial aviation industry—specifically the commercial airlines—on Air Force aircraft maintainer staffing levels. This study, published in 2016 by RAND and reviewing data from fiscal years 2004 through 2013, did not estimate the effect of any specific development in the commercial aviation industry on the Air Force. However, it identified several factors that suggest that the effects, if any, are likely to be limited. It based this finding on four indicators: (1) the Air Force kept steady maintainer retention rates while the airline maintainer population fluctuated over the same period of time; (2) the Air Force offered competitive maintainer salaries compared with several airlines, making it unlikely that maintainers would separate or retire for better earnings potential alone; (3) few Air Force maintainers seemed to be pursuing airframe and power plant certification, which is often a prerequisite to employment in the airline industry; and (4) on average, there were considerably more qualified Air Force maintainers separating or retiring than projected airline maintenance jobs available. However, the report focused only on the commercial airlines. Air Force officials stated that they are more likely to experience outside recruitment of maintainers from defense contractors than from commercial airlines. Participants in four of our five discussion groups with maintainers cited better pay as a reason to transition from the Air Force to the commercial aviation industry. They also noted consistent schedules, 8-hour work days, and overtime pay as additional benefits. However, participants in all of our discussion groups also discussed an interest in careers outside of aircraft maintenance, such as police work, firefighting, cyber security, information technology, and real estate, among others. For maintainers who want to pursue a career in the commercial aviation industry upon separation or retirement from the Air Force, DOD has undertaken several actions to facilitate airframe and power plant certification of its servicemembers.
For example, as previously discussed, since 2002 the Community College of the Air Force has administered the Federal Aviation Administration-approved Joint Services Council program that, upon completion, confers a certificate of eligibility to take the airframe and power plant exam. According to Community College of the Air Force data, in fiscal year 2017, there were 95 graduates from the Joint Services Council's airframe and power plant preparation program. Table 5 shows the number of Air Force personnel who enrolled in and graduated from the Joint Services Council's airframe and power plant program from fiscal years 2010 through 2017. Air Force officials noted a decrease in enrollments since fiscal year 2015 due to additional enrollment requirements, including completing initial coursework. From fiscal years 2015 through 2017, about 900 personnel used Air Force funding for airframe and power plant certificates through the Air Force Credentialing Opportunities On-Line program, which was established in fiscal year 2015. The Air Force Has Increased Its Use of Retention Bonuses for Some Maintenance Specialties, but Does Not Have Retention Goals or a Maintainer-Specific Strategy to Improve Retention The Air Force has increased its use of retention bonuses since fiscal year 2015 to help retain critical maintenance specialties. Per DOD Instruction 1304.31, the secretary of a military department may use service retention bonuses to obtain the reenlistment or voluntary extension of an enlistment in exchange for a military service member's agreement to serve for a specified period in at least one of the following categories: a designated military skill, career field, unit, or grade; or to meet some other condition of service. In fiscal year 2015, the Air Force awarded 1,590 bonuses to aircraft maintainers in certain specialties, totaling more than $60 million. Bonuses increased in fiscal year 2016—with 2,415 bonuses awarded at a total cost of more than $87 million. Bonuses decreased slightly in fiscal year 2017—with 1,797 bonuses awarded primarily to 5-level maintainers, at a total cost of over $65 million. Figure 7 shows the increases in the number and total costs of Air Force active component retention bonuses awarded to aircraft maintainers over the past 8 fiscal years. According to Air Force officials, retention bonuses remain a critical incentive for reenlistment. Participants in four of our five discussion groups with maintainers highlighted retention bonuses as a motivating factor to remain in the Air Force. Some participants stated that bonuses were a major factor in their decision-making, while others were unsure of the availability or amount of bonuses, making it difficult to appropriately consider them in their decisions. Air Force officials have stated that they need to retain more maintainers than in past fiscal years to help address experience gaps, but the Air Force has not established retention goals for maintainers. Standards for Internal Control in the Federal Government states that management should establish and operate monitoring activities and evaluate the results. In addition, the Standards provide that, in reviewing actual performance, management tracks achievements and compares them to plans, goals, and objectives. While the Air Force has mechanisms to monitor the health of the maintenance career field, such as through loss and reenlistment rates, it has not developed annual retention goals for maintainers.
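The kind of goal tracking the Standards describe could look like the sketch below. Because the Air Force has not established such goals, every goal value shown is hypothetical, and the actual rates are approximations of the fiscal year 2017 loss rates cited earlier.

```python
# Hedged illustration of goal tracking: comparing actual loss rates by skill
# level against annual goals. The Air Force has no such goals today; the goal
# values are hypothetical, and the actual rates approximate FY2017 figures.

retention_goals = {"5-level": 0.09, "7-level": 0.08}    # hypothetical loss-rate ceilings
actual_loss_rates = {"5-level": 0.12, "7-level": 0.09}  # approximate FY2017 rates

for level, goal in retention_goals.items():
    actual = actual_loss_rates[level]
    verdict = "met" if actual <= goal else "missed"
    print(f"{level}: actual loss rate {actual:.0%} vs. goal {goal:.0%} -> {verdict}")
```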
As a result, the Air Force cannot identify how many 5-level and 7-level maintainers it needs to retain to support the training and certification of new 3-level maintainers for flight line work. Given increases in losses of experienced maintainers and decreasing reenlistment rates, the Air Force faces challenges in managing the overall maintenance workforce, including ensuring that there are enough experienced maintainers to fulfill mission and training needs. Without annual retention goals—for both loss and reenlistment rates—the Air Force cannot assess how many maintainers it needs to retain each year, by skill level, to sustain recent staffing level improvements and, ultimately, to ensure the health of its maintenance workforce. The Air Force also lacks a retention strategy to focus its efforts to retain maintainers. As previously discussed, the Air Force has conducted aircraft maintenance retention surveys to gauge the health of the workforce and identify opportunities to improve the career field, but Air Force officials have stated that these surveys are currently used only for informational purposes. In addition, while the Air Force offers retention bonuses for certain maintenance specialties—and has extended the maximum number of years maintainers in certain specialties can remain on active duty through the High Year of Tenure Extension Program—according to Air Force officials, it does not have a maintainer-specific strategy or other initiatives (either monetary or non-monetary) that address the factors the Air Force has identified through its biennial surveys as negatively influencing maintainer retention. A key principle of strategic workforce planning is developing strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies. Without a retention strategy—including initiatives that are tailored to the specific needs and challenges of maintainers—the Air Force may be missing opportunities to retain experienced 5- and 7-level maintainers, who are needed to train the recent influx of new 3-level maintainers in the field. According to participants from our discussion groups with maintainers, increases in 3-level maintainers could negatively affect retention of experienced maintainers if this increase continues to affect their workloads. While the Air Force has some tools in place to monitor retention and identify factors affecting reenlistment decisions, such as its retention surveys, without a retention strategy to address concerns raised in these surveys, and goals against which to measure progress, it may not be able to sustain recent staffing level improvements or improve the overall health of the maintenance workforce as effectively. The Air Force Has Consistently Met Technical School Completion Rate Goals for Aircraft Maintainers Over the past 8 fiscal years, the Air Force has consistently met overall aircraft maintainer technical school completion rate goals. However, after technical school, additional on-the-job training is required to produce a fully qualified maintainer. In addition, the Air Force reserve component's programmed technical school completions have consistently exceeded actual completions over this period.
The Air Force Has Met Overall Technical School Completion Rate Goals for Aircraft Maintainers Since Fiscal Year 2010 Our analysis of Air Force data found that the Air Force consistently met technical school completion rate goals from fiscal years 2010 through 2017. According to Air Education and Training Command (AETC) officials, AETC established a maintainer technical school completion rate goal for the active component of 90 percent—that is, the number of actual technical school completions compared to the number of programmed or expected completions. According to AETC officials, the goal is not documented, but it has been in place since at least fiscal year 2010 and is intended to measure the health and well-being of the training program. In fiscal year 2017, the completion rate was 97 percent, with all but two maintenance specialties meeting their goals. According to AETC officials, there are a number of reasons a particular maintenance specialty may not meet its technical school completion rate goals, such as low technical school entry rates, security clearance delays, and challenging course topics. Figure 8 shows the Air Force's active component technical school completion rates for all maintenance specialties combined over the past 8 fiscal years. In fiscal year 2017, approximately 9,600 active component maintainers completed technical school, an increase from about 7,200 and 5,700 in fiscal years 2016 and 2015, respectively. While increased technical school completions help to address overall aircraft maintainer staffing gaps, they cannot immediately resolve staffing imbalances across experience levels. Air Force officials noted that while they track the number of maintainers they are producing by technical school completions (the number of new 3-level maintainers), maintainers are not fully qualified for the job until they are 5-levels, which requires, as previously discussed, at least a year of on-the-job training, among other things. Technical school instructors agreed that while technical school is important for teaching basic concepts, on-the-job training is what produces a fully-qualified maintainer. AETC officials stated that the technical schools continue to have the capacity to meet completion rate goals even with the increase in students, but that they have experienced significant challenges in recent years receiving enough instructors in a timely manner—both civilian and military—and getting them qualified to teach. These officials stated that this is a result of issues with the formula that determines instructor staffing needs, the instructor staffing process for military personnel, and civilian hiring delays, among other things. According to AETC officials, they have been able to consistently meet completion rate goals despite these challenges by waiving some course requirements for multiple instructors (except when there are safety concerns), contracting some instruction, and assigning temporary duty personnel to serve as instructors. These officials noted that while those actions have allowed them to continue to meet their mission, they have also masked the severity of the instructor staffing challenges and increased existing instructors' stress and workloads. This was confirmed by the technical school instructors with whom we spoke. Additionally, AETC officials noted challenges with aging infrastructure and hangars, and in obtaining high-fidelity, realistic aircraft and trainers. However, they did highlight a recent success in acquiring updated avionics trainers.
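The completion-rate comparison works as sketched below; the programmed count is a hypothetical value consistent with the reported 97 percent rate and roughly 9,600 completions in fiscal year 2017.

```python
# Sketch of the completion-rate comparison: actual technical school
# completions divided by programmed completions, measured against AETC's
# 90 percent goal. The programmed count is hypothetical.

GOAL = 0.90

def completion_rate(actual: int, programmed: int) -> float:
    return actual / programmed

rate = completion_rate(actual=9_600, programmed=9_900)  # FY2017-like figures
print(f"FY2017: {rate:.0%} -> {'meets' if rate >= GOAL else 'misses'} the 90 percent goal")
```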
Over the past few fiscal years, AETC has conducted annual field interviews with technical school graduates and graduate supervisors to evaluate the technical school training program. Specifically, AETC uses the interviews to gauge satisfaction with the graduates' abilities to perform tasks required in the career field, and to identify areas to improve training quality or revise training standards. In the memorandum resulting from the fiscal year 2017 field interviews, AETC made a number of recommendations to improve maintainer technical school training, such as improving knowledge and task retention by increasing hands-on repetition and decreasing delays between technical school and a maintainer's first assignment, reexamining aspects of the technical school training curriculum, and improving instruction related to maintenance forms and technical orders. The memorandum also noted that while there are initiatives that the technical schools can undertake to increase overall satisfaction, there are some disconnects between supervisor expectations in the field and the training program that should be resolved. Technical school instructors agreed that there is a disconnect between what students learn in technical school and what supervisors in the field expect them to have learned there versus on the job. The memorandum identified opportunities to clarify these expectations, such as workshops to identify training requirements. The Air Force Reserve Component's Programmed Technical School Completions Have Consistently Exceeded Actual Completions Over the past 8 fiscal years, the Air Force reserve component's programmed technical school completions for aircraft maintainers have consistently exceeded actual completions. Specifically, according to our analysis, from fiscal years 2010 through 2017, the Air National Guard's actual technical school completions, as compared to programmed completions, ranged from about 60 to 93 percent. Similarly, the Air Force Reserve Command's completion rates ranged from about 50 to 85 percent. The highest completion rate for both was in fiscal year 2017. According to Air National Guard and Air Force Reserve Command officials, they do not have technical school completion rate goals like the active component since they also recruit prior servicemembers, as discussed below. Table 6 compares the Air Force reserve component's programmed versus actual technical school completions over the past 8 fiscal years. According to an AETC official, it is common for the reserve component to have significantly more programmed completions than actual technical school completions in a given fiscal year. For example, this official stated that the Air National Guard and Air Force Reserve Command program their training spaces 2 to 3 years in advance and it can be difficult to anticipate training needs. Specifically, Air National Guard officials stated that the training spaces requested each year are intended to fill vacancies and that those vacancies are filled by both prior servicemembers (who may have already attended maintainer technical school and do not need to do so again) and non-prior servicemembers (who will need to attend technical school). An AETC official noted that the number of personnel that will fall into each category each year is difficult to anticipate.
For example, according to Air Force Reserve Command officials, the number of non-prior service accessions has decreased over the past 8 fiscal years, accounting for about 33 percent of accessions in fiscal year 2017, a decrease from about 43 percent in fiscal year 2010. Air National Guard officials stated that if they do not program enough training spaces, it can be difficult to add spaces later. Air National Guard officials stated that they have been conservative in programming training spaces since fiscal year 2016—to minimize unfilled spaces—which, along with high maintainer turnover, is reflected in increased completion rates. Specifically, in fiscal year 2017, the Air National Guard programmed 1,528 completions and the number of actual completions was 1,418, amounting to a completion rate of 93 percent—its highest rate over the past 8 fiscal years. Air National Guard officials noted that the training spaces it did not fill over the past 2 fiscal years were generally due to last-minute cancellations for health, family, or civilian employment issues. AETC officials stated that they can fill unused reserve component training spaces with active duty maintainers or students from international partners, which has provided AETC more flexibility to increase active duty maintainer training over the past few fiscal years. Conclusions The Air Force has significantly reduced overall aircraft maintainer staffing gaps since fiscal year 2016, in part by increasing accessions. While the Air Force has consistently met its technical school completion rate goals for newly accessed aircraft maintainers, it continues to have staffing gaps of experienced maintainers—who are needed to supervise and provide on-the-job training to those new maintainers following technical school. Air Force officials have highlighted the need to retain more aircraft maintainers to help address experience gaps, but losses of experienced maintainers have increased since fiscal year 2010, and the Air Force expects losses to continue to increase for certain maintainers over the next few fiscal years. While the Air Force has increased its use of retention bonuses for some critical maintenance specialties, it does not have annual retention goals for aircraft maintainers or a maintainer-specific retention strategy to help it meet such goals and to sustain recent staffing level improvements. As a result, the Air Force may continue to face challenges in managing its largest enlisted career field and may miss opportunities to retain a sufficient number of experienced maintainers to meet mission needs. Recommendations for Executive Action We are making the following two recommendations to DOD: The Secretary of the Air Force should develop annual retention goals for aircraft maintainers by skill level—for both loss and reenlistment rates—in alignment with authorized levels. (Recommendation 1) The Secretary of the Air Force should develop an aircraft maintainer retention strategy, including initiatives that are tailored to the specific needs and challenges of maintainers to help ensure that the Air Force can meet and retain required staffing levels. (Recommendation 2) Agency Comments In written comments on a draft of this report, the Air Force concurred with both of the recommendations. The Air Force also noted initial actions it has taken to develop an aircraft maintainer retention strategy. The Air Force's comments are reprinted in appendix III.
We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, and the Secretary of the Air Force. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology To assess the extent to which the Air Force had aircraft maintainer staffing gaps, we compared staffing levels authorized by the Air Force for enlisted aircraft maintainers—for the active and reserve components—with the actual number of maintainers available to staff those positions for fiscal years 2010 through 2017. We selected this timeframe to capture staffing levels before and after the Air Force's fiscal year 2014 reduction in end strength, and fiscal year 2017 was the most recent year for which complete data were available at the time of our review. Specifically, we analyzed the data to identify overall maintainer staffing gaps, as well as any gaps by maintenance specialty and skill level. In addition, we compared maintainer personnel requirements to authorized staffing levels—the number of those requirements that are funded—for the overall maintainer population, each maintenance specialty, and each skill level. To assess the reliability of the Air Force's requirements, authorized staffing levels, and actual staffing levels (for both the active and reserve components), we reviewed related documentation; assessed the data for errors, omissions, and inconsistencies; and interviewed officials. We determined that the data were sufficiently reliable to describe the Air Force's aircraft maintainer staffing levels and associated gaps from fiscal years 2010 through 2017. Additionally, we conducted interviews with relevant Air Force, Air National Guard, and Air Force Reserve Command officials to identify reasons for staffing challenges and actions the Air Force has taken to address them. To assess the extent to which the Air Force experienced attrition of aircraft maintainers, we calculated maintainer loss rates—the number of maintainers who leave the career field or the Air Force within the fiscal year (for reasons such as separation or retirement) over the number of maintainers at the start of the fiscal year—for fiscal years 2010 through 2017. We calculated loss rates for the overall maintainer population as well as by skill level and maintenance specialty for the active and reserve components. We also analyzed overall aircraft maintainer reenlistment rates—the number of maintainers reenlisting each fiscal year over the number of maintainers eligible to reenlist—for the active component for fiscal years 2010 through 2017. To assess the reliability of the Air Force's maintainer loss and reenlistment rate data, we reviewed related documentation; assessed the data for errors, omissions, and inconsistencies; and interviewed officials. We determined that the data were sufficiently reliable to describe the Air Force's aircraft maintainer loss and reenlistment rates from fiscal years 2010 through 2017.
In addition, we reviewed the Air Force's 2015 and 2017 aircraft maintainer retention survey analyses and conducted discussion groups with a non-generalizable sample of aircraft maintainers to obtain their views on factors affecting maintainer retention, on-the-job training capacity, and commercial aviation industry opportunities, among other things. We selected Tinker Air Force Base in Oklahoma and Eglin Air Force Base in Florida as the locations for these discussion groups based on geographic diversity, base size, and the types of aircraft maintained at each base. At each location, we moderated two to three discussion groups with aircraft maintainers for a total of five discussion groups of 3 to 12 maintainers each. While these discussion groups allowed us to learn about many important aspects of the aircraft maintenance workforce from the perspective of aircraft maintainers, they were designed to provide anecdotal information and not results that would be representative of all the Air Force's more than 100,000 aircraft maintainers as of fiscal year 2017. To review the state of the commercial labor market for aircraft mechanics and aerospace engineers, we analyzed data from the Department of Labor's Bureau of Labor Statistics' Current Population Survey on the unemployment rate, employment, and median weekly earnings from 2012 through 2017, in accordance with economic literature we reviewed for a prior report. These data can be used as indicators of whether labor market conditions are consistent with a shortage. We chose this period because we had previously reported on the data from 2000 through 2012, and 2017 was the most recent year of data available at the time of our review. We reviewed documentation about the Bureau of Labor Statistics data and the systems that produced them, as well as our prior report that used the data. Based on prior testing of the data from these systems, we determined the data were sufficiently reliable for the purposes of our indicator analysis to provide context on the labor market. We also reviewed the Bureau of Labor Statistics' Occupational Outlook for Aircraft and Avionics Equipment Mechanics and Technicians for 2016 to 2026 to determine anticipated future workforce trends. In addition, we conducted interviews with four commercial aviation industry stakeholders regarding any imbalances in demand and supply, and actions the industry is taking to address them. Specifically, we conducted interviews with officials from the Aeronautical Repair Station Association, the Aerospace Industries Association, Aerotek, and the General Aviation Manufacturers Association. We selected three of these organizations based on our previous work and one based on a recommendation from one of the three organizations. To determine what is known about the extent to which the commercial aviation industry affects the Air Force's aircraft maintainer staffing levels, we conducted a literature search and review to identify relevant studies. Specifically, we conducted a literature search for studies published in books, reports, peer-reviewed journals, and dissertations since fiscal year 2010. We chose fiscal year 2010 as a starting point so that the scope of the search would match the timeframe for which we analyzed Air Force maintainer loss rates. We searched five databases, including ProQuest, Scopus, and EBSCO. Our search used Boolean search phrases, including variations of words such as aviation, maintenance, and retention.
We identified and screened 49 studies using a multi-step process to gauge their relevance and evaluate their methodology. We excluded studies that did not specifically focus on our objective, military maintainers, or the U.S. commercial aviation industry. We retained 1 study after screening and reviewed its methodology, findings, and limitations. Three GAO staff (two analysts and an economist) were involved in the screening and a systematic review of the study, which was determined to be sufficiently relevant and methodologically rigorous. We also analyzed data on the number of Air Force personnel completing the Joint Services Aviation Maintenance Technician Certification Council (Joint Services Council) airframe and power plant certificate program from fiscal years 2010 through 2017, and the number of Air Force personnel receiving airframe and power plant certificate funding from the Community College of the Air Force's Air Force Credentialing Opportunities On-Line program from fiscal years 2015 through 2017. We selected this timeframe because the Air Force's airframe and power plant funding program began in fiscal year 2015, and fiscal year 2017 was the most recent year for which data were available at the time of our review. To assess the reliability of the Air Force's airframe and power plant certificate program data, we interviewed officials. We determined that the data were sufficiently reliable to describe the number of Air Force personnel completing the Joint Services Council's airframe and power plant certificate program from fiscal years 2010 through 2017 and the number of personnel receiving funding from fiscal years 2015 through 2017. To assess the extent to which the Air Force has taken steps to help retain maintainers, we analyzed the number and total costs of selective retention bonuses (retention bonuses) that the Air Force awarded, by maintenance specialty and skill level, from fiscal years 2010 through 2017 for the active and reserve components. We normalized the cost data to constant fiscal year 2017 dollars. To assess the reliability of the Air Force's retention bonus data, we reviewed related documentation; assessed the data for errors, omissions, and inconsistencies; and interviewed officials. We determined that the data were sufficiently reliable to describe the number and total costs of the Air Force's aircraft maintainer retention bonuses from fiscal years 2010 through 2017. In addition, we conducted interviews with relevant Air Force officials regarding retention goals and monetary and non-monetary incentives to improve maintainer retention, and Department of Defense officials regarding retention bonuses. We compared this information to Standards for Internal Control in the Federal Government related to monitoring activities and key principles of strategic workforce planning that we have identified in our prior work, such as developing strategies that are tailored to address gaps in numbers of people, skills, and competencies. To assess the extent to which the Air Force met its annual technical school completion rate goals for aircraft maintainers, we calculated technical school completion rates—the number of aircraft maintainers completing technical school compared to the number of programmed or expected completions—for the overall maintainer population and each maintenance specialty for the active component, for fiscal years 2010 through 2017. We compared those completion rates to the active component completion rate goal established by the Air Education and Training Command (AETC).
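The constant-dollar normalization described above amounts to scaling each year's nominal cost by a price index, as in the sketch below; the deflator values are placeholders, not the index GAO actually used.

```python
# Sketch of normalizing nominal bonus costs to constant fiscal year 2017
# dollars. The deflator values are placeholders, not GAO's actual index.

deflators_to_fy2017 = {2015: 1.030, 2016: 1.015, 2017: 1.000}  # hypothetical

def to_fy2017_dollars(nominal_cost: float, fiscal_year: int) -> float:
    """Scale a nominal cost into constant FY2017 dollars."""
    return nominal_cost * deflators_to_fy2017[fiscal_year]

print(f"${to_fy2017_dollars(60_000_000, 2015):,.0f} in constant FY2017 dollars")
```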
For the Air National Guard and Air Force Reserve Command, we compared programmed completions to actual completions to determine the extent to which they met their technical school training needs. To assess the reliability of the technical school completion data (for both the active and reserve components), we assessed the data for errors, omissions, and inconsistencies, and interviewed officials. We determined that the data were sufficiently reliable to describe the Air Force's aircraft maintainer technical school completion rates from fiscal years 2010 through 2017, rounded to the nearest hundred through fiscal year 2013, and more precisely for fiscal year 2014 and beyond. In addition, we observed maintainer technical school training—both classroom-based and hands-on—as well as training equipment at Sheppard Air Force Base in Texas and Eglin Air Force Base in Florida. We selected these locations because they are two of the primary locations where aircraft maintainer technical school training occurs. Specifically, according to Air Force officials, the majority of aircraft maintainers receive at least a portion of their technical school training at Sheppard Air Force Base, and all F-35-specific maintainer training occurs at Eglin Air Force Base. Additionally, as part of our previously discussed non-generalizable sample of discussion groups with aircraft maintainers, we obtained maintainers' perspectives on technical school and on-the-job training. We also reviewed training policies as well as other documentation, such as Career Field Education and Training Plans and training evaluations. Finally, we conducted interviews with technical school instructors and supervisors about the maintainer training process, as well as with AETC, Air National Guard, and Air Force Reserve Command officials about training challenges and programmed training needs. We conducted this performance audit from April 2018 to February 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Air Force Reserve Component Aircraft Maintainer Retention from Fiscal Years 2010-2017 According to Air National Guard and Air Force Reserve Command officials, they monitor retention of aircraft maintainers through loss rates—the number of maintainers who leave the career field or the Air Force within the fiscal year, over the number of maintainers at the start of the fiscal year—and have used selective retention bonuses (retention bonuses) and taken other actions to improve retention. According to our analysis of Air National Guard data, aircraft maintainer loss rates have fluctuated over the past 8 fiscal years. For example, loss rates increased significantly for all maintenance specialties and skill levels combined, from 12 percent in fiscal year 2010, to 36 percent and 30 percent in fiscal years 2012 and 2013, respectively. While Air National Guard maintainer loss rates decreased from fiscal years 2014 through 2017, they remained higher than fiscal year 2010 rates. Table 7 provides loss rates for Air National Guard aircraft maintainers over the past 8 fiscal years.
Air National Guard officials stated that maintainer loss rates are often location dependent and that retention bonuses are the primary tool used to improve retention. According to these officials, while the Air National Guard looks at nationwide staffing when determining which occupational specialties are eligible for bonuses, some locations may have more critical needs than others. The number of retention bonuses that the Air National Guard has awarded to aircraft maintainers has decreased over the past 8 fiscal years, while the total cost has increased. Specifically, in fiscal year 2010, the Air National Guard awarded 1,587 retention bonuses at a total cost of $4,580,295. However, in fiscal year 2017, the Air National Guard awarded 653 retention bonuses at a total cost of $5,373,000. Over the past 8 fiscal years, the majority of its retention bonuses were awarded to 7-level maintainers.

The Air Force Reserve Command's aircraft maintainer loss rates over the past 8 fiscal years have ranged from 10 to 13 percent. In addition, the loss rates of 5- and 7-level maintainers have been similar to the loss rates of all skill levels combined over this period. Similar to the Air National Guard, Air Force Reserve Command officials stated that maintainer staffing challenges and loss rates are partly location dependent, though they also cited opportunities and higher pay in the civilian labor market; high operations tempo; lack of career growth, opportunities, and flexibility; and pay disparities with the active component as factors affecting retention. Table 8 provides loss rates for Air Force Reserve Command aircraft maintainers over the past 8 fiscal years.

The Air Force Reserve Command has also used retention bonuses to help improve retention. Specifically, over the past 8 fiscal years, the Air Force Reserve Command has increased the number of retention bonuses awarded and their total costs. For example, in fiscal year 2012, the Air Force Reserve Command awarded 15 retention bonuses totaling $242,593. In fiscal year 2015, that number increased to 572 bonuses totaling $8,913,229. In fiscal year 2017, the Air Force Reserve Command awarded 317 retention bonuses at a total cost of $4,550,000.

According to Air Force Reserve Command officials, the Air Force Reserve Command has taken a number of steps to help improve technician retention, such as paid permanent change of station moves and student loan repayment. These officials stated that they are also currently working to improve career path options and medical benefits for technicians. Further, Air Force Reserve Command officials highlighted Human Capital Management 2.0 as an effort focused on balancing human capital supply and demand across the Air Force Reserve Command, including improving recruitment and retention.

Appendix III: Comments from the Department of Defense

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contacts named above, Lori Atkinson (Assistant Director), Benjamin Bolitzer, Molly Callaghan, Timothy Carr, Christopher Curran, Matthew Dobratz, Amie Lesser, Grant Mallie, Mike Silver, Carter Stevens, and Lillian M. Yob made significant contributions to this report.

Related GAO Products

DOD Depot Workforce: Services Need to Assess the Effectiveness of Their Initiatives to Maintain Critical Skills. GAO-19-51. Washington, D.C.: December 14, 2018.

Air Force Readiness: Actions Needed to Rebuild Readiness and Prepare for the Future. GAO-19-120T. Washington, D.C.: October 10, 2018.
Military Aviation Mishaps: DOD Needs to Improve Its Approach for Collecting and Analyzing Data to Manage Risks. GAO-18-586R. Washington, D.C.: August 15, 2018.

Military Personnel: Collecting Additional Data Could Enhance Pilot Retention Efforts. GAO-18-439. Washington, D.C.: June 21, 2018.

Military Personnel: DOD Needs to Reevaluate Fighter Pilot Workforce Requirements. GAO-18-113. Washington, D.C.: April 11, 2018.

Department of Defense: Actions Needed to Address Five Key Mission Challenges. GAO-17-369. Washington, D.C.: June 13, 2017.

Military Compensation: Additional Actions Are Needed to Better Manage Special and Incentive Pay Programs. GAO-17-39. Washington, D.C.: February 3, 2017.

Unmanned Aerial Systems: Air Force and Army Should Improve Strategic Human Capital Planning for Pilot Workforces. GAO-17-53. Washington, D.C.: January 31, 2017.

Air Force Training: Further Analysis and Planning Needed to Improve Effectiveness. GAO-16-864. Washington, D.C.: September 19, 2016.

Unmanned Aerial Systems: Further Actions Needed to Fully Address Air Force and Army Pilot Workforce Challenges. GAO-16-527T. Washington, D.C.: March 16, 2016.

Unmanned Aerial Systems: Actions Needed to Improve DOD Pilot Training. GAO-15-461. Washington, D.C.: May 14, 2015.

Air Force: Actions Needed to Strengthen Management of Unmanned Aerial System Pilots. GAO-14-316. Washington, D.C.: April 10, 2014.

Aviation Workforce: Current and Future Availability of Airline Pilots. GAO-14-232. Washington, D.C.: February 28, 2014.

Aviation Workforce: Current and Future Availability of Aviation Engineering and Maintenance Professionals. GAO-14-237. Washington, D.C.: February 28, 2014.

Military Cash Incentives: DOD Should Coordinate and Monitor Its Efforts to Achieve Cost-Effective Bonuses and Special Pays. GAO-11-631. Washington, D.C.: June 21, 2011.
Why GAO Did This Study

Air Force aircraft maintainers are responsible for ensuring that the Air Force's aircraft are operationally ready and safe for its aviators—duties critical to successfully executing its national security mission. With more than 100,000 maintainers across the Air Force's active and reserve components, according to Air Force officials, aircraft maintenance is the Air Force's largest enlisted career field—accounting for about a quarter of its active duty enlisted personnel. The conference report accompanying the National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to review the adequacy of the Air Force's aircraft maintainer workforce.

This report assesses the extent to which, from fiscal years 2010 through 2017, the Air Force (1) had aircraft maintainer staffing gaps, (2) experienced attrition of maintainers and took steps to help retain maintainers, and (3) met its annual technical school completion rate goals for maintainers. GAO analyzed aircraft maintainer staffing levels, loss and reenlistment rates, and technical school completion rates for fiscal years 2010 through 2017, the most recent data available; conducted five non-generalizable discussion groups with maintainers; and interviewed aviation industry, Department of Defense, and Air Force officials.

What GAO Found

The Air Force has reduced its overall aircraft maintainer staffing gap but continues to have a gap of experienced maintainers. The Air Force reduced the overall gap between actual maintainer staffing levels and authorized levels from 4,016 maintainers (out of 66,439 authorized active component positions) in fiscal year 2015 to 745 (out of 66,559 positions) in fiscal year 2017. However, in 7 of the last 8 fiscal years, the Air Force had staffing gaps of experienced maintainers—those who are most qualified to meet mission needs and are needed to train new maintainers. Maintainers complete technical school as 3-levels and initially lack the experience and proficiency needed to meet mission needs. Following years of on-the-job training, among other things, maintainers upgrade to the 5- and 7-levels. In fiscal year 2017, the Air Force had gaps of more than 2,000 5-level and 400 7-level maintainers, and a surplus of over 1,700 3-levels. Air Force officials anticipate that staffing gaps will continue off and on through fiscal year 2023.

Over the past 8 fiscal years, the Air Force has increasingly lost experienced aircraft maintainers, and it does not have goals and a strategy to help retain maintainers. While overall maintainer loss rates have remained generally stable, loss rates of 5-levels increased from 9 percent in fiscal year 2010 to 12 percent in fiscal years 2016 and 2017 (see figure). Air Force officials expect 7-level loss rates to also increase. Air Force officials stated that they need to retain more maintainers to help address experience gaps, but the Air Force has not developed annual retention goals for maintainers. In addition, while the Air Force has increased its use of retention bonuses since fiscal year 2015, according to Air Force officials, it does not have a strategy to improve retention. Without goals to measure progress and a retention strategy to guide efforts, the Air Force could face further challenges in managing its maintenance workforce, including ensuring there are enough experienced maintainers to meet mission needs.

The Air Force consistently met technical school completion rate goals for aircraft maintainers from fiscal years 2010 through 2017.
In fiscal year 2017, about 9,600 active component maintainers completed technical school, an increase from about 5,700 in fiscal year 2015. This increase in completions has helped to address overall staffing gaps but cannot immediately resolve experience imbalances, due to the time and training needed to reach the 5- and 7-levels.

What GAO Recommends

GAO recommends that the Air Force develop annual retention goals and a retention strategy for aircraft maintainers. The Air Force concurred with both recommendations.
Background Federal agencies’ personal property may include commonly used items, such as computers, office equipment, and furniture, and more specialized property reflective of their mission, such as scientific devices, fire control equipment, heavy machinery, precious metals, generators, and chemicals. Some items require special handling, such as hazardous materials, animals, and firearms. See figure 1 for examples of federal personal property. Federal agencies manage personal property while they are using it. Specifically, executive agencies are required by law to: maintain adequate inventory controls and accountability systems for property under their control; continually survey property under their control to identify excess; promptly report excess property to GSA and dispose of it in accordance with GSA regulations; and use existing agency property or obtain excess property from other federal agencies before purchasing new property. GSA assists agencies when they no longer need personal property and has established a government-wide personal-property disposal process in federal regulation. The process generally begins when an agency declares a personal property item as “excess”—that is, the agency determines it no longer needs the item to carry out its mission. Agencies are to make this determination only after ensuring the property is not needed elsewhere within the agency. Once property is declared excess, there are four potential property disposal methods: transfer to another federal agency or certain non-federal entities, donation, sale, and abandonment or destruction. Federal agencies and some non-federal entities have the priority to acquire excess property, through transfer. If none of these eligible entities have requested the property for transfer after 21 days, the property becomes “surplus”—that is, GSA determines that federal agencies no longer need the item to carry out their missions. Surplus property may be donated to eligible entities through a State Agency for Surplus Property, representing the state of the prospective donee. Property not donated within 5 days after the close of the 21-day screening period may be sold to the general public and, finally, unsold property may be abandoned or destroyed. See appendix II for an expanded description of the personal property disposal process. OMB is responsible for establishing government-wide management policies and requirements and provides guidance to agencies to implement them. OMB has issued guidance for specific types of personal property, such as for government aircraft and information technology systems. OMB also implemented the Freeze the Footprint and Reduce the Footprint initiatives, starting in 2012, to reduce the amount of domestic office and warehouse space needed by the federal government, in part, through consolidations and improved space utilization. As a result, federal agencies have reported achieving space reductions, and they have goals for additional reductions in the future. Although these reductions are a relatively small part of the federal government’s overall footprint, according to OMB, through this and other efforts agencies collectively reduced their office and warehouse space by about 25 million square feet from fiscal years 2012 through 20156. As federal agencies continue to reduce office and warehouse space, they will also likely have to manage or dispose of personal property, such as office furniture or stored property, from these spaces. 
Selected Agencies Had Personal Property Accountability and Inventory Control Processes but Most Did Not Have a Formal Process for Assessing Property for Continued Need

Agencies Inventory Their Most Valuable and Sensitive Property

Each of the five selected agencies we reviewed has policies and processes for carrying out its responsibilities to maintain adequate accountability systems and inventory controls for property under its control:

All five agencies have policies for regularly inventorying their personal property to physically locate and verify property tracked in their asset management systems. EPA, GSA OAS, and IRS policies require physical inventories of personal property once a year, while the Forest Service's policy requires inventories every other fiscal year and a 10 percent sample inventory in the alternate years. HUD policies require inventories every 2 years at its headquarters, but according to HUD officials, field locations conduct inventories annually.

All of the agencies also have an electronic asset-management system for maintaining information on personal property. Although each agency has its own system, and the type of information maintained varied by agency or type of property, generally each system generates a record for each property item that provides descriptive information about the item, such as manufacturer name, model number, serial number or other identifier, acquisition cost, condition, and current location.

We found that the five agencies use these policies and processes to track and inventory certain property determined by each agency to be "accountable." Accountable property is nonexpendable personal property with an expected useful life of 2 years or longer that an agency determines should be tracked in its property records, based on an item's acquisition cost and sensitivity. Each agency determines its own appropriate acquisition cost threshold: four of the agencies—EPA, Forest Service, HUD, and IRS—consider property with an original acquisition cost of $5,000 or greater to be accountable; GSA OAS's accountable threshold is $10,000 or greater. In addition, certain sensitive property—such as digital cameras, laptop computers with hard drives, and firearms—is considered accountable regardless of acquisition cost because it could be easily stolen or can store data or personal information. Table 1 provides a snapshot of accountable personal property items—including the reported original acquisition cost, amount, and examples—reported from 4 of the selected agencies' asset management systems in 2017.

The agencies in our review generally did not track in their asset management systems or formally inventory their remaining—or "non-accountable"—personal property that did not meet their definition of accountable property. According to agency officials we interviewed, they do not track or inventory low value items because: (1) the cost and manpower required to do so are too high; (2) certain property, such as office furniture, is less susceptible to theft; or (3) agencies believe they are not required by law to inventory low value items. While agencies are required to have systems of accounting and internal controls that provide effective control over, and accountability for, their assets, they generally have latitude in how they implement these procedures, including which property to track and inventory.
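The accountability test just described reduces to two conditions: the item meets the agency's acquisition cost threshold, or it is a sensitive item regardless of cost. The sketch below encodes that test; the thresholds are the ones reported above, but the function and its names are our own illustration, not any agency's actual logic.

```python
# Illustrative sketch of the accountability test described above: an item
# is tracked as accountable if its original acquisition cost meets the
# agency's threshold, or if it is sensitive property regardless of cost.
# Thresholds are from this report; the function itself is ours.
THRESHOLDS = {
    "EPA": 5_000,
    "Forest Service": 5_000,
    "HUD": 5_000,
    "IRS": 5_000,
    "GSA OAS": 10_000,
}

def is_accountable(agency, acquisition_cost, sensitive=False):
    return sensitive or acquisition_cost >= THRESHOLDS[agency]

print(is_accountable("EPA", 6_500))                # True: meets the $5,000 threshold
print(is_accountable("GSA OAS", 6_500))            # False: below the $10,000 threshold
print(is_accountable("IRS", 800, sensitive=True))  # True: sensitive item, e.g., a laptop
```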
Most Selected Agencies Did Not Have a Systematic Process for Assessing the Continued Need for Personal Property

While the five selected agencies had policies and processes for their property accountability and inventory control responsibilities, they largely did not have policies and processes for carrying out their responsibility, established in law, to continually survey property under their control to identify excess. According to officials at each of the selected agencies, the responsibility for identifying unneeded property generally lies with that agency's property custodians—designated officials who are assigned responsibility for the property—or the agency program or individual using the property. Four of the five selected agencies' policies do not require property custodians or other property users to assess property for continued need. Furthermore, these four agencies' policies do not include specific criteria for the property custodian or user to apply in assessing property for continued need. Only IRS's personal-property management policy specifies that the property custodian is responsible for identifying excess property and provides criteria to be applied in doing so, such as whether property is still needed in its location and the feasibility of transferring it to other locations, taking into account the property's condition and transportation charges. An official at one of the selected agencies identified several specific criteria that should be used to assess property for continued need, including the item's serviceability, whether it poses a safety hazard, and the feasibility of relocating it. However, the official acknowledged that these or any other criteria are not part of the agency's formal policy.

The personal property policy of an agency not included in our review—NASA—includes requirements and criteria to review NASA property for continued need in multiple ways. For example, it requires a high-level NASA official to conduct a walk-through inspection annually to identify idle or underused equipment that is no longer needed and report it as excess. It also requires, as part of an annual property inventory, that property that appears to be excess, worn out, or in obvious need of repair be noted as such and that guidance on identifying unneeded property be provided to personnel involved in conducting the inventories as well as employees assigned to use the property.

In addition to not having policies on identifying and assessing property for continued need, the agencies we reviewed also did not have a systematic process for doing so. Instead, when describing situations in which they declared property as excess, officials said they typically did so as a result of a "triggering event." The types of triggering events the officials cited include an office move or consolidation or a lifecycle replacement of laptops. For example, officials from field locations of three of these agencies reported declaring most of their existing furniture as excess as the result of an office relocation or renovation. Agency officials said they were unable to use their existing furniture and had to declare it excess because it did not conform to new space utilization standards. At another agency, officials were disposing of a large number of laptop computers that had been declared excess because they had been replaced by new computers. Officials at two agencies said an assessment of property for continued need is an assumed practice that is part of the inventory for accountable property.
However, an official from one of these agencies acknowledged that assessing need is not addressed in the written instructions provided to those conducting the inventory. Officials from two other agencies acknowledged that they continue to retain, in on-site storage, unneeded property that should be declared excess but had not pursued disposal because of competing higher-priority responsibilities.

Proactively assessing personal property for continued need instead of responding to a triggering event can help agencies achieve both effective and efficient operations by ensuring that only needed property is retained and unneeded property is identified and declared excess. Federal internal control standards require that agencies design and maintain internal control activities—such as policies and procedures—to identify risks arising from mission and mission-support operations, and to provide reasonable assurance that agencies are operating in an efficient manner that minimizes the waste of resources. Such a system also provides reasonable assurance that agency property is safeguarded against waste, loss, or unauthorized use. OMB staff and GSA officials agreed that assessing all types of property—accountable and non-accountable—for continued need is important and called for by internal control standards.

Because the agencies we reviewed did not have systematic processes for assessing the continued need for personal property, they may not be aware of potential risks of maintaining property that may no longer be needed for operational purposes. Furthermore, previous work by others has shown that inaction on unneeded or idle property can limit efficient use of the government's personal property, unnecessarily use an agency's resources, or miss opportunities for potential cost savings. For example:

The Department of Homeland Security's Inspector General found that the U.S. Coast Guard could not ensure that personal property was efficiently reutilized or properly disposed of to prevent unauthorized use or theft because the Coast Guard did not have adequate policies, procedures, and processes to identify and screen, reutilize, and dispose of excess personal property properly, including criteria for identifying such property.

The EPA's Inspector General estimated EPA could save $8.9 million in reduced warehouse costs through improved management of stored personal property.

GSA personal property asset management studies conducted in 2003 and 2005 found, among other things, that personal property is not being used to its fullest extent in some agencies and that no government-wide usage assessment or standard exists to detect whether property is no longer needed and can be reported as excess.

Without a triggering event, agencies may not be seeking out or identifying property that is no longer needed and declaring it excess as often as they should. Such unneeded property may be put to better use elsewhere within the agency or the federal government, or agencies may purchase or lease new property instead of using another agency's property that is unneeded but not reported as excess. In addition, agencies may be missing opportunities to realize cost savings by identifying and disposing of unneeded property, such as property stored in warehouses, to reduce or make better use of that space.
While the requirement for agencies to continually survey property under their control to identify excess is established in law, according to GSA officials, there are no government-wide regulations on managing personal property or fulfilling this specific requirement. According to GSA OGP officials, GSA does not have the authority to promulgate regulations or issue formal guidance on personal property that is in use by executive agencies. Furthermore, according to the officials, GSA is only authorized by law to prescribe regulations on excess and surplus personal property. OMB staff stated that they could issue a notification, such as a controller alert to agencies' chief financial officers, to reinforce the statutory requirement that agencies conduct assessments of personal property for continued need. OMB periodically issues such alerts to highlight emerging financial management issues for agencies and also issues guidance to agencies through bulletins, circulars, and memorandums. By issuing a controller alert or other guidance, OMB can help ensure that agencies are proactively taking steps to evaluate their property for continued need, including developing appropriate policies for doing so, and can thereby improve efforts to promote maximum use of excess personal property.

Selected Agencies Used GSA's Disposal Process to Dispose of Unneeded Property, Including Property from Space Reductions

Selected Agencies Used a Structured Disposal Process for Personal Property

Officials from the five agencies we reviewed reported that they followed GSA's automated process to dispose of property once they had made the determination it was no longer needed to support their agency's mission. As previously described, GSA regulations on disposing of property establish a specific process for all executive agencies to follow, and GSA has also issued guidance to help agencies dispose of property under this process. In particular, once an agency has determined that property is no longer needed within the agency, the agency is required to promptly report the property to GSA as excess, typically by entering information about it into GSAXcess, GSA's web-based system for facilitating personal property disposal. This method requires agency employees to manually enter information using data entry screens that include help screens and error messages. GSA encourages agencies to provide a complete description of the property and to include multiple photographs of it. Officials from the five agencies we reviewed reported no significant difficulties with entering information into GSAXcess; collectively, these agencies reported over 37,000 items as excess property from fiscal year 2012 through 2016. Figure 2 indicates the number of items each selected agency reported to GSA as excess during that period.

Once information entry is completed, the disposal process begins. If the property is not disposed of during one stage, it advances to the next stage. The disposal process is shown in figure 3. Agency officials we interviewed told us that responsibility for disposing of property is decentralized and typically occurs at the property's location, whether at an agency headquarters, regional office, or lower level.
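The reporting step described above centers on entering a descriptive record for each excess item. A minimal sketch of such a record appears below, built from the descriptive fields this report mentions elsewhere (manufacturer, model number, serial number, acquisition cost, condition, location, photographs); the field names and sample values are illustrative and are not GSAXcess's actual schema.

```python
# Minimal sketch of the kind of item record an agency assembles when
# reporting excess property. Field names and values are illustrative,
# NOT GSAXcess's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExcessPropertyRecord:
    description: str
    manufacturer: str
    model_number: str
    serial_number: str
    acquisition_cost: float
    condition: str
    location: str
    photos: List[str] = field(default_factory=list)  # multiple photos are encouraged

record = ExcessPropertyRecord(
    description="Modular office workstation",
    manufacturer="ExampleCo",   # hypothetical
    model_number="WS-200",      # hypothetical
    serial_number="A-12345",    # hypothetical
    acquisition_cost=1_200.00,
    condition="Usable",
    location="Denver, CO",
    photos=["front.jpg", "side.jpg"],
)
print(record.description, "-", record.condition)
```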
Because of the large federal government presence in the Washington, D.C., area, agency offices in that area may have access to resources to facilitate the disposal process that are unavailable elsewhere, such as transferring excess property to certain entities that complete some or all aspects of the disposal process for a fee. Two such entities are GSA's Personal Property Center in Springfield, Virginia, which takes full accountability and control of an agency's excess property for a fee and handles all the details of the disposal process, and USDA's Centralized Excess Property Operation in Beltsville, Maryland. According to USDA's Agriculture Property Management Regulations, property not needed by USDA or its bureau offices in the Washington, D.C., area must be transferred to this office for final disposal actions. It also provides these same services to some non-USDA agencies.

Agencies also use GSAXcess to search for and select available excess property. Agency officials told us that the system also sends disposition instructions to the property-holding agency when the property is to be transferred to other federal agencies, donated, or sold, and that the agencies follow these instructions. For example, when an agency requests an excess item in GSAXcess and GSA approves the request, the system notifies the requesting agency and the property-holding agency and provides contact information to arrange to complete the transaction. None of the selected agency officials reported difficulties completing a transfer or donation transaction. For property not transferred, donated, or sold, GSA notifies the agency that the property has no commercial value and can be abandoned or destroyed. All of our selected agencies reported trying to recycle such property.

Selected Agencies Reported Little Difficulty Disposing of Personal Property from Space Reduction Initiatives

Selected agency officials told us they disposed of property from space reduction efforts, such as Freeze the Footprint and Reduce the Footprint, the same way as other personal property—using GSA's disposal process. To meet space reduction goals, selected agencies are undertaking projects at dozens of locations. Projects have primarily involved leased space for offices and warehouses and have included office moves, consolidations, and closures. As federal agencies carry out these space reduction projects, they must also address any personal property in the project spaces.

Selected agencies reported several factors that affected their decisions about this property, which for three of the agencies was primarily office furniture. Four agencies reported needing less space than they previously occupied because of changes in agency missions or staffing levels. Furthermore, officials from GSA OAS and IRS noted that workplace trends, including teleworking and decreased staffing, reduced the space needed. Finally, agencies also reported that the office furniture itself was mostly unsuitable because it was old and because it could not be configured for use in more efficient office space designs. As a result, some selected agency locations that completed an office move or renovation project reported that most of their existing furniture was not needed in their new space. For example, in its Reduce the Footprint plan for fiscal years 2017 through 2021, HUD noted that many of its locations were designed and furnished when it had a much larger staffing level and reported that in 2016, its usable square feet per employee was 356.
Subsequently, HUD revised its space design standards, requiring future office spaces to adhere to a utilization rate of 175 square feet per employee or less. At the HUD project we visited, an official told us the furniture in use before the project was old and was generally too large to be used to achieve space design standards.

In 2017, HUD reduced its Denver regional office space by 30 percent. HUD's lease was expiring, and it needed less space because it had fewer employees in the office, in part due to increased telework. Adhering to new space utilization standards in its office and furniture design further reduced HUD's overall required space. An example of a new workstation is shown above. Before the project, the agency occupied about five floors of a commercial building. HUD renovated in place, one floor at a time, and replaced its existing office furniture with new furniture. Personal property at this office included primarily office furniture, such as desks and 25-year-old modular systems, and equipment, such as telephones. As each floor was completed and employees moved to new workstations, the property official on-site disposed of their old furniture and workstations by entering the information in GSAXcess. The official reported selling some of the excess furniture after completing the first floor but recycled or discarded excess furniture in subsequent rounds.

In some cases, agencies did not dispose of all the personal property after a space reduction project but instead were able to retain it for other uses within the agency. For example, IRS officials reported closing an office in Englewood, Colorado, and transferring its furniture to Ogden, Utah, for storage for an upcoming project. GSA OAS officials in Denver said that after a space reduction project in which GSA decreased the size of its regional office, it retained the unneeded furniture and office space for temporary use by other agencies.

For property that was declared excess following a space reduction project, agencies reported transferring, donating, and selling property to dispose of it, using GSA's process. For example, officials in GSA OAS, Forest Service, and IRS locations told us they transferred some excess property to other federal agencies. The Forest Service in Denver transferred some modular office furniture to the Bureau of Land Management and the U.S. Postal Service. The Forest Service and IRS also reported donating property, such as office furniture and equipment, through the State Agencies for Surplus Property program. Four agencies reported selling some of their property from a space reduction project. For example, HUD's regional office in Denver sold some of its excess office furniture, which dated to 1992, and recycled or discarded the remainder.

When disposing of property from a space reduction project, some agencies sought assistance from GSA. GSA's Office of Personal Property Management (GSA OPPM) assists agencies, when requested, in disposing of personal property, and officials at selected agency locations reported receiving assistance and training. In one example, GSA officials told us that a regional office of a selected agency needed to dispose of an office full of furniture and, in addition to using the disposal process, contacted GSA OPPM for additional assistance. Because of the large amount of property, GSA OPPM took steps to make other agencies in the area aware of the available property and facilitated access to allow agencies to view the property.
In another example, GSA OPPM officials met with officials from another agency in the planning stages of a relocation to answer questions and provided advice and guidance for disposing of personal property.

When the Forest Service's lease on its Denver-area office expired, the agency leased space in another location, requiring a move but reducing its office by over 21,000 square feet. The agency sought to conform to new space utilization standards, which required more efficiently designed furniture than its existing office furniture. Because the Forest Service did not reuse most of its old furniture in its new space, it no longer had a need for it. The Forest Service retained some of the furniture for use in other Forest Service offices within the region and declared the remainder as excess. Through GSAXcess, the Forest Service transferred some of its excess furniture to other federal agencies, such as the Bureau of Land Management and the U.S. Postal Service. The Forest Service sold some furniture at auction; broken items were recycled.

Agencies may dispose of large amounts of property during a space reduction project, but overall, agency officials reported few challenges in doing so. This may be in part because the effects of space reductions are distributed across an entire agency. Although selected agencies' average Reduce the Footprint space reduction goals ranged from 97,000 square feet to 662,000 square feet each fiscal year from 2016 to 2020, each agency's efforts consisted of dozens of geographically dispersed projects of various sizes to be completed over several years. For example, as of fiscal year 2016, EPA had 21 space reduction projects planned from fiscal years 2016 through 2021, with individual anticipated reductions ranging from less than 1,000 square feet to more than 140,000 square feet. At least one project is present in 8 of EPA's 10 regions.

Agencies' ability to pay for space reduction projects may also have affected the projects' outcomes. Two selected agencies said they delayed projects because of a lack of funding. Agencies may reduce costs over the long term because of lower rent for smaller spaces, but they may have to pay some expenses upfront, such as for moving, renovations, and new furniture.

Although officials from all five agencies told us they have been able to manage personal property disposals from space reductions, they identified factors that can affect the efficient use of the disposal process during a space reduction project and some strategies taken to address them:

Inventorying non-accountable property: As a space reduction project commenced at a location, most selected agencies reported that they did not have a complete list of the personal property affected by the project. As previously described, selected agencies do not maintain an itemized list of non-accountable personal property, and for four agencies, office furniture is generally non-accountable. During a space reduction project, property personnel had to develop some type of inventory to identify property that would be needed and property that should be disposed of. Selected agencies had various methods for conducting such an inventory. For example, officials from two agencies said they walked through the affected space and created a list of all the items. Officials from one agency said a contractor was hired for this purpose. Most agencies reported using the inventory they created to enter information on excess property into GSAXcess.
Officials at GSA’s OPPM offices in Philadelphia and Fort Worth said that they offer training and guidance to agencies in conducting inventories. Managing disposals within time frames: Agencies generally are not able to begin the disposal process until the property is no longer in use. For example, agency staff continue to use their old workspaces until they can move to new workspaces. Agencies also face deadlines, such as vacating space due to a lease expiration or commencement of renovation work. Officials from three agencies described challenges completing the disposal process—reporting excess personal property as well as completing transactions to transfer, donate, sell or abandon or destroy it—within required time frames. Some agency officials reported using different strategies to address this timing challenge. For example, one agency official was able to enter information about the excess property items into GSAXcess while employees were still using them. According to the official, this was possible because a note could be included in the property item’s description in GSAXcess, with the date when the property would be available. When the property was no longer in-use within the agency, the transfers or other transactions were completed. Additionally, an agency may conduct an on-site screening of its unneeded property to allow other federal agencies or authorized parties to physically view and identify any furniture they want. For example, GSA OPPM officials in Philadelphia conducted an on-site screening of unneeded office furniture resulting from the agency’s regional office relocation. Conclusions Federal agencies collectively have billions of dollars’ worth of personal property, ranging from office furniture to highly specialized equipment that, when in use, supports agency missions. However, the agencies in our review did not have policies and systematic processes for identifying unneeded property. Furthermore, other’s previous work has shown that agencies across the government may not be effectively assessing their property for continued need, leading to idle property that could be put to better use elsewhere within the agency or the federal government and potential unnecessary storage costs. Consequently, agencies may be retaining property that is no longer needed. GSA has recognized that opportunities may exist for agencies to more effectively manage property under their control, but according to GSA OGP officials, GSA’s authority is limited to agency property that has been declared excess or surplus. According to OMB staff, OMB has the authority to issue guidance, such as controller alerts, emphasizing agencies’ property management obligations, and thus, it is well-positioned to assist agencies to more effectively manage their property and to ensure unneeded property is made available to others, as appropriate. Recommendation for Executive Action The Director of OMB should provide guidance to executive agencies on managing their personal property, emphasizing that agencies’ policies or processes should reflect the requirement to continuously review and identify unneeded personal property. (Recommendation 1) Agency Comments We provided a draft of this report to OMB, EPA, the Forest Service, GSA, HUD, and IRS for comment. OMB stated that it did not have any comments on our draft report in an email and provided a technical clarification to the report, which we incorporated. GSA and IRS provided technical comments in an email, which we incorporated as appropriate. 
EPA, the Forest Service, and HUD each stated in an email that they did not have any comments on the draft report.

We are sending copies of this report to the appropriate congressional committees, the Director of the Office of Management and Budget, the Administrator of the Environmental Protection Agency, the Secretary of the U.S. Department of Agriculture, the Administrator of the General Services Administration, the Secretary of the Department of Housing and Urban Development, and the Secretary of the Department of the Treasury. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

The objectives of this report were to examine (1) how selected federal agencies assess whether personal property is needed and (2) how selected federal agencies dispose of unneeded personal property, and how, if at all, space reduction efforts have affected disposals. We excluded certain types of personal property, such as aircraft and vehicles, from our review because of our prior or ongoing work.

To address our objectives, we reviewed applicable federal statutes and regulations pertaining to personal property management and disposal, our prior work, and reports by federal agencies' Offices of Inspector General on personal property issues. In addition, to determine how selected federal agencies assess whether personal property is needed, we conducted background searches to inform our understanding of key practices for personal property and asset management through a search of databases containing peer-reviewed articles, government reports, general news, hearings and transcripts, and association and think tank papers. We also reviewed relevant asset management practices, such as ASTM standards and the General Services Administration's (GSA) Federal Asset Management Evaluation and Personal Property Asset Management Study.

In order to select agencies that may have had recent experiences with excess personal property, we selected 5 of the 24 agencies that were included in the Freeze the Footprint and Reduce the Footprint initiatives. We selected agencies based on their overall Freeze the Footprint results, in terms of the amount of square feet reduced; their Reduce the Footprint goals for reducing domestic office and warehouse space; and the amount of personal property declared excess over the last 5 years, as reported to GSA's GSAXcess system from fiscal years 2012 to 2016, to coincide with the Freeze the Footprint time frame. Specifically, we obtained information on the Freeze the Footprint results and Reduce the Footprint goals from the Office of Management and Budget's public website and from Performance.gov. We limited our scope to civilian federal agencies with personal property within the United States. Although we have previously reported that the overall accuracy of data that agencies reported on office and warehouse space reductions could be improved, we found that the data were generally reliable for our purposes.
After reviewing the data for any inconsistencies and discussing the information with selected agency officials to ensure that the reported numbers for the Reduce the Footprint initiative were current, we determined that the quality of the data was sufficient for our use in selecting agencies. In order to select agencies that were more likely to have relevant, recent experience with excess personal property from space reduction efforts, we ranked these agencies based on their Freeze the Footprint results, Reduce the Footprint goals, and the amount of declared excess personal property, and we eliminated the bottom third of the agencies. We selected GSA as our first agency due to its central role in excess personal property disposal and randomly selected four additional agencies from the remaining agencies. These agencies were the Environmental Protection Agency, the U.S. Department of Agriculture, the Department of Housing and Urban Development, and the Department of the Treasury. The organizational structure of two selected agencies, the Department of Agriculture and the Department of the Treasury, differs from that of the other three agencies in that they consist primarily of sub-agencies. Therefore, we selected the largest sub-agency of both departments—the Forest Service within the Department of Agriculture and the Internal Revenue Service within the Department of the Treasury.

We obtained information from the five selected federal agencies on the total value and number of items in their asset management systems in 2017 to understand the size and scope of the personal property assets they manage. Because we used the information to describe the scope of the agencies' property holdings, we did not verify the data. We also analyzed documents, such as the selected agencies' personal property management policies, along with policies from the National Aeronautics and Space Administration and the Department of Energy, to understand how they addressed requirements for managing personal property. We included these agencies' policies based on our review of prior work related to personal property. We interviewed officials from the selected agencies about their processes for managing personal property assets, such as their inventory procedures. However, we did not independently assess agencies' inventory practices. We also interviewed staff from the Office of Management and Budget (OMB) to discuss regulations and policies pertaining to personal property and OMB's role in personal property management.

To determine how selected federal agencies dispose of excess and surplus personal property and how space reduction efforts may have affected disposals, in addition to the above, we obtained information from each selected agency on its space reduction projects and interviewed officials about their roles and responsibilities in the agency's space reduction planning efforts and personal property disposal process. We also conducted site visits to Philadelphia, Pennsylvania, and Denver, Colorado, to meet with regional and local officials from each selected agency responsible for managing and disposing of personal property. These locations were chosen based on the number of our selected federal agencies present, the amount of excess personal property declared, and the existence of space reduction projects.
We discussed property accountability policies, overall personal property disposal processes, and how the disposal processes were affected by government-wide space savings initiatives, such as Freeze the Footprint and Reduce the Footprint, and any efforts to prepare for them, and we requested supporting documentation on the amount of property declared as excess and the disposition outcomes of that property. We did not independently verify the information that was provided, as property reported as excess from space reduction projects is not always tracked separately from property disposed of for other reasons. We reviewed documents and interviewed officials from GSA's Office of Personal Property Management (GSA OPPM) at GSA's headquarters, in Philadelphia, and in Fort Worth, Texas, to discuss their role in assisting agencies in disposing of personal property and to obtain their views on how personal property disposals have been affected by space reductions. Finally, we interviewed GSA's Office of Government-wide Policy (GSA OGP) officials about the Interagency Committee on Property Management and the Property Management Executive Council regarding their personal property and asset management efforts, and we met with officials and representatives from the U.S. Department of Agriculture's Centralized Excess Property Operation, the Users and Screeners Association–Federal Excess Personal Property, and the National Association of State Agencies for Surplus Property to discuss their roles in the reuse and disposal of federal personal property.

We conducted this performance audit from July 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: The Disposal Process for Federal Personal Property

The Federal Property and Administrative Services Act of 1949, as amended, requires executive agencies, in part, to promptly report excess property to the General Services Administration (GSA) and dispose of it in accordance with GSA regulations. Each executive agency is also required to fulfill requirements for personal property by using existing agency property or by obtaining excess property from other federal agencies before purchasing new property. GSA's disposal process, as laid out in federal regulation, incorporates and facilitates these requirements, providing a means for both disposing of and acquiring unneeded property: agencies with excess personal property can dispose of it, and other agencies, authorized non-federal entities, and, eventually, the general public can acquire this property.

Disposal before Declaring Property as Excess to GSA

After determining that a property item is no longer needed to complete its mission, an agency may have several options for proceeding before formally declaring the property as excess to GSA:

Immediately authorize abandonment or destruction of the property: Determine, in writing, that the property has no commercial value or that the estimated cost of its continued care and handling would exceed the estimated proceeds from its sale. If an agency makes such a determination, it may abandon or destroy the property without reporting it to GSA as excess.
In lieu of abandonment or destruction, an agency may donate excess personal property to a public body without going through GSA.

Directly transfer the property to another federal agency: Agencies usually become aware of available property through informal means, such as a contact at the disposing agency, according to GSA. GSA approval for such a transfer is not needed if the total original acquisition cost for each item does not exceed $10,000. If this cost is greater than $10,000, the acquiring agency must obtain prior approval from GSA. In either case, the acquiring agency must notify GSA of the transfer.

Directly transfer the property to an eligible recipient under a special authority: Special authorities are legal provisions that are designed to give excess assets to groups that may use them for a particular purpose, such as universities that can use the National Aeronautics and Space Administration's scientific equipment in their research. Some authorities exist to collectively support all federal agencies, and some support an agency-specific program. According to GSA, the primary government-wide programs are the Stevenson-Wydler Technology Innovation Act of 1980 and Executive Order 12999, also known as the Computers for Learning program. Recipients meeting the eligibility requirements of the special authority contact agencies to determine the availability of property, and the agency and recipient must complete the appropriate documentation to make a record of the transfer.

Disposal Process after Declaring Property as Excess to GSA

An agency initiates GSA's disposal process by formally declaring property as excess, either by completing and submitting a form to GSA or, more typically, by electronic entry of an item into GSAXcess, GSA's real-time, web-based site for facilitating the disposal process. The latter method requires agency employees to enter information about the excess property using data entry screens that include help screens and error messages. GSA encourages reporting agencies to provide a complete description of the property and to include multiple photographs of the property. The disposal process generally consists of four sequential stages in which personal property may be transferred to another agency or eligible recipient, donated, sold, or abandoned or destroyed, as described below. If the property is not disposed of during one stage, it advances to the next stage, though the holding agency generally retains physical custody of the property until it is disposed of. Table 2 illustrates actions a disposing agency and eligible property recipients take during each of the four stages of the disposal process after an agency declares property excess.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, the following individuals made important contributions to this report: David J. Wise (Director), Nancy Lueke (Assistant Director), Travis Thomson (Analyst-in-Charge), Lacey Coppage, Rosa Leung, Josh Ormond, Amy Rosewarne, Pamela Vines, and Elizabeth Wood.
Why GAO Did This Study

The federal government owns billions of dollars of personal property—such as office furniture, scientific equipment, and industrial machinery. By law, each agency is required to follow GSA's disposal process so that an agency's unneeded property can be used by other agencies or certain non-federal entities. Since 2012, agencies have reduced their office and warehouse space due to government-wide initiatives, a reduction that in turn has required agencies to dispose of some affected personal property.

GAO was asked to review how federal agencies identify and dispose of unneeded personal property. This report examines (1) how selected agencies assess whether personal property is needed and (2) how these agencies dispose of unneeded property and how, if at all, space reduction efforts have affected disposals. GAO reviewed federal statutes and regulations, and selected five agencies—EPA, Forest Service, GSA, HUD, and IRS—mainly based on space reduction results and goals. GAO reviewed these agencies' property disposal data for 2012 through 2016 and interviewed headquarters and field staff about their property management and disposal processes.

What GAO Found

The five agencies GAO reviewed—the Environmental Protection Agency (EPA), Forest Service, General Services Administration (GSA), Department of Housing and Urban Development (HUD), and Internal Revenue Service (IRS)—generally do not have policies or processes for identifying unneeded personal property, such as office furniture, on a proactive basis. Instead, officials from these agencies said they typically identified unneeded property as a result of a "triggering event," such as an office space reduction. Executive agencies are required by law to continuously review property under their control to identify unneeded personal property and then dispose of it promptly. Without such policies or processes, agencies may not be routinely identifying unneeded property that could be used elsewhere, and efforts to maximize federal personal property use and minimize unnecessary storage costs may not be effective.

GSA has issued regulations establishing a government-wide disposal process for unneeded personal property. However, according to GSA officials, the agency lacks the authority to promulgate regulations or formal guidance on management of in-use agency property, and there is no government-wide guidance to agencies on identifying unneeded personal property. Agencies are required to have internal control activities—such as policies and procedures—for reasonable assurance of efficient operations and minimal resource waste, and the Office of Management and Budget (OMB) provides guidance to agencies on implementing such activities. Guidance from OMB that emphasizes agencies' internal control responsibilities could help ensure that agencies are proactively and regularly identifying property that is no longer needed.

The selected agencies reported little difficulty in following GSA's personal property disposal process, reporting over 37,000 items as unneeded property in fiscal years 2012 through 2016. This property was disposed of through transfers to other agencies, donations to authorized recipients, sales, or discarding. When disposing of personal property from space reduction projects at locations GAO visited, agencies also reported using GSA's process (see figure). Overall, agencies said they have not experienced major challenges with disposing of personal property from space reduction efforts.
Agencies may have experienced few challenges in part because disposal projects were geographically dispersed and spread over several years.

What GAO Recommends
OMB should provide guidance to executive agencies on managing their personal property, emphasizing that agencies' policies or processes should reflect the requirement to continuously review and identify unneeded personal property. OMB did not comment on GAO's recommendation.
Background
Many consumer products—such as deodorants, shaving products, and hair care products—are differentiated to appeal specifically to men or women through differences in packaging, scent, or other product characteristics (see fig. 1). These differences related to gender can affect manufacturing and marketing costs that may contribute to price differences in products targeted to different genders. However, firms may also charge consumers different prices for the same (or very similar) goods and services even when there are no differences in costs to produce them. To maximize profits, firms use a variety of techniques to charge prices close to the highest price different consumers are willing to pay. Firms may attempt to get one segment of the consumer market to pay a higher price than another segment by slightly altering or differentiating the product. Based on the differentiated products, consumers self-select into different groups according to their preferences and what they are willing to pay. For example, some consumer goods have different versions of what is essentially the same product—except for differences in packaging or features, such as scent—with one version intended for women and another version intended for men. The two products may be priced differently because the firm expects that one gender will be willing to pay more for the product than the other based on preference for certain product attributes. Firms may also use some group characteristic, such as age or gender, to charge different prices because some groups may have differences in willingness or ability to pay. For example, a firm may offer discounted movie tickets to students or seniors, as they may have less disposable income. For the seller the cost of providing the movie is the same for any customer, but the seller is able to maximize its profits by offering tickets to different groups of customers at different prices. A firm’s ability to differentiate prices depends on multiple factors, such as the firm’s market power (so that competitors cannot put downward pressure on prices to eliminate the price differences), the presence of consumer segments with different demands and willingness to pay, and control over the sale of its product so it cannot be easily resold to exploit price differences. In addition, the extent to which consumers pay different prices for the same or similar goods can depend on other factors, such as consumers’:
- willingness to purchase an item they believe may be priced higher for their gender,
- ability to compare prices and product characteristics and choose a product based on its characteristics rather than its price,
- choices about whether to purchase a more expensive version of the product (e.g., a branded item versus a cheaper store brand),
- choices about where to purchase the item (i.e., when different retailers sell the same item at different prices), and
- use of coupons or promotions.
No federal law expressly prohibits businesses from charging different prices for the same or similar consumer goods and services targeted to men and women. However, consumer protection laws do prohibit sex discrimination in credit and real estate transactions. Specifically, the Equal Credit Opportunity Act (ECOA) prohibits creditors from discriminating against credit applicants based on sex or certain other characteristics and the Fair Housing Act (FHA) prohibits discrimination in the housing market on the basis of sex or certain other characteristics.
ECOA and FHA (collectively known as the fair lending laws) prohibit lenders from, among other things, refusing to extend credit or using different standards in determining whether to extend credit based on sex. Credit, such as a credit card account or mortgage loan, is generally made available and priced based on a number of risk factors, including credit score, income, and employment history. A borrower with a lower credit score is likely to pay a higher interest rate on a loan, reflecting the greater risk to the lender that the borrower could default on the loan. In addition to the interest rate, borrowing costs for consumers can also include fees and other costs charged by lenders or brokers. However, there may be differences in average outcomes for men and women—such as for availability of credit or interest rates—if there are differences related to gender in the factors that determine creditworthiness, such as income. BCFP, FTC, the federal prudential regulators, and DOJ have the authority to investigate alleged violations of ECOA and are primarily responsible for enforcing the act’s requirements, while HUD and DOJ share responsibility for enforcing the provisions of FHA. Further, BCFP and the prudential regulators oversee regulated entities for compliance with ECOA by, among other things, collecting complaints from the public and through routine inspections of the financial institutions they oversee. HUD and DOJ have the authority to bring enforcement actions for alleged violations of FHA.

Prices Differed Significantly for Selected Men’s and Women’s Personal Care Products, but We Could Not Attribute the Differences to Bias as Opposed to Other Factors
In 5 out of 10 product categories we analyzed, personal care products targeted to women sold at higher average prices than those targeted to men after controlling for certain observable factors. For 2 of the 10 product categories, men’s versions sold at higher average prices. While the factors we controlled for likely proxy for various costs and consumer preferences, we could not fully observe all underlying differences in costs and demand for products targeted to different genders. As a result, we could not determine the extent to which the gender-based price differences we observed may be attributed to gender bias as opposed to other factors.

For 5 of 10 Product Categories Analyzed, Women’s Products Sold at Higher Average Prices Than Men’s after Controlling for Some Observable Factors
Women’s versions of personal care products sold at a statistically significant higher average price than men’s versions for 5 of the 10 personal care product categories we analyzed—using two different price measures and after controlling for observable factors that could affect price, such as brands, product size or quantity, promotional expenses (see table 1), and other product-specific attributes (e.g., scent, special claims, form). Because women’s and men’s versions of the same product were frequently sold in different sizes, we compared prices using two price measures: average item price and average price per ounce or count of product. For 2 of the 10 product categories—shaving gel and nondisposable razors—men’s versions sold at a statistically significant higher price using both price measures. For one category (razor blades), women’s versions sold at a statistically significant higher average price per count, but there was no gender price difference using average item prices.
Additionally, for two product categories—disposable razors and mass-market perfumes—there were no statistically significant price differences between men’s and women’s products using either price measure. In addition to this analysis of retail price scanner data, we also manually collected advertised online prices for a limited selection of personal care products targeted to women and men from several online retailers. Some price comparisons of advertised online prices for men’s and women’s versions of a product were similar to comparisons of average prices paid based on the Nielsen retail price scanner data. For example, for three pairs of comparable underarm deodorants, the women’s deodorant was listed at a higher price per ounce on average than the men’s deodorant (see app. II). In addition, for one pair of shaving gel products we analyzed, the men’s shaving gel was listed at a higher price per ounce on average. However, for both pairs of nondisposable razors we analyzed, the women’s razors were listed at a higher average price per count than the men’s razors. This contrasted with the Nielsen data showing that men’s nondisposable razors sold at higher prices on average than women’s. An important limitation of our analysis of these advertised prices is that we were unable to determine the extent to which consumers actually paid these prices and in what volume the products were sold, and our results are not generalizable to the broader universe of prices for these products sold at other times or by other online retailers.

We Could Not Determine the Extent to Which Price Differences May Be Due to Market Factors as Opposed to Gender Bias
Though we found that the target gender for a product is a significant factor contributing to the price differences we identified, we do not have sufficient information to determine the extent to which these gender-related price differences were due to gender bias as opposed to other factors. Versions differentiated to appeal to men and women can result in different costs for the manufacturer. Our econometric analysis controlled for many observable factors related to costs, such as product size, promotional activity, and packaging type. We also controlled for many product attributes, such as forms, scents, and special claims that products make, to account for underlying manufacturing cost differences. In addition, we controlled for brands, which can reflect consumer preferences. However, we do not have firm-level data on all cost differences—for example, those related to advertising and packaging. As a result, we could not determine the extent to which the price differences we observed may be explained by remaining cost differences between men’s and women’s products. We also do not have the data to determine the extent to which men and women have different demands and willingness to pay for a product, which would be expected to affect the prices firms charge for differentiated products. For example, some academic experts we spoke with said that women may value some product attributes, such as design and scent, more than men do. If products differentiated to incorporate those attributes do not result in different costs, then differences in prices could be part of a firm’s pricing strategy based on the willingness of one gender to pay more than another. The conditions necessary for firms to be able to implement a strategy of price differentiation likely exist for the personal care products we analyzed.
First, our analysis suggests that due to industry concentration, there is limited market competition for the 10 personal care products we analyzed. With more market power, firms can more easily set different prices for different consumer segments. Second, firms have the ability to segment the market for personal care products by tailoring product characteristics related to gender, such as by labeling the product as women’s deodorant or men’s deodorant, or by altering scent or colors. Third, while men and women are able to freely purchase a product targeted to the opposite gender, certain factors may limit the extent to which this occurs. For example, some product differences such as scents may discourage one gender from buying products targeted to another gender. In addition, consumers may find it difficult and time-consuming to compare prices for similar men’s and women’s products because of the ways they are differentiated (such as product size and scents) and because they may be sold in different parts of a store.

Studies We Reviewed Found Limited Evidence of Price Differences for Men and Women for Mortgages, Small Business Credit, and Auto Purchases
We reviewed studies that compared prices for men and women in four markets where the product or service is not differentiated by gender: mortgages, small business credit, auto purchases, and auto repairs. First, we reviewed studies on mortgage and small business credit that analyzed interest rates and access to credit to identify any differences for men and women. Second, we reviewed studies that compared prices quoted to men and women in auto purchase and repair markets. However, several of these studies have important limitations, such as using nonrepresentative data samples, and the results are not generalizable.

Studies on Mortgages Found Mixed Evidence of Disparities in Borrowing Costs between Men and Women
Studies we reviewed found that women as a group pay higher interest rates on average than men, in part due to weaker credit characteristics. After controlling for borrower credit characteristics and other factors, three studies did not find statistically significant differences in interest rates between men and women for the same type of mortgage, while one study found that women paid higher mortgage rates for certain subprime loans. In addition, one study found that female borrowers defaulted less frequently on their loans than male borrowers with similar credit characteristics, suggesting that women as a group may pay higher mortgage rates than men relative to their default risk. While these studies attempted to control for factors other than gender or sex that could affect borrowing costs, several lacked important data on certain borrower risk characteristics. For example, several studies we reviewed rely on Home Mortgage Disclosure Act of 1975 (HMDA) data, which did not include data on risk factors such as borrower credit scores that could affect analysis of disparities between men and women. Also, several studies analyzed nonrepresentative samples of loans, such as subprime loans or loans originated more than 10 years ago, which limits the generalizability of the results (see table 2). Three of the studies we reviewed found that while women on average were charged higher interest rates on mortgage loans than men, this difference was not statistically significant after controlling for other factors.
For example, one study found that differences in mortgage interest rates between men and women became insignificant after controlling for differences in how men and women shop for mortgage rates. The authors used data from the 2004 Survey of Consumer Finances (SCF) to analyze the effect on interest rates of mortgage features, borrower characteristics such as gender, and market conditions. However, their analysis did not include data on some borrower credit characteristics such as credit score and debt-to-income ratio that could affect borrowing costs. Another study found that women were charged higher interest rates for subprime loans made in 2005, but once the authors controlled for observed risk characteristics there was no evidence of disparity in interest rates by gender of the borrower in the subprime market. However, the authors’ data did not include any fees paid at loan origination, which could affect the overall cost of borrowing. A third study that examined disparities between men and women in subprime loans found no significant evidence that gender affected the cost of borrowing within the subprime market, though it did find that women—particularly African American women—were more likely to have subprime loans. The authors found that, even after controlling for some financial characteristics and loan terms, single African American women were more likely than non-Hispanic white couples to have subprime loans. One study analyzed subprime loans made by one large lender from 2003 through 2005 and found that women paid more for subprime mortgages than men after controlling for some risk factors. This study found that women had higher average borrowing costs—as measured by annual percentage rate—than men, and controlling for credit characteristics such as credit scores and debt-to-income ratios did not fully explain the differences. However, the authors did not control for other factors that could also affect borrowing costs, such as differences in education, shopping behaviors, and geographic location. Additionally, a research paper found that female-only borrowers—that is, where the only borrower is a woman—default less than male-only borrowers with similar loans and credit characteristics. The authors found that female-only borrowers on average pay more for their mortgage loans because they generally have weaker credit characteristics, such as lower income, and also because a higher percentage of these mortgage loans are subprime. However, after controlling for credit characteristics such as credit score, loan term, and loan-to-value ratio, among others, the analysis showed that these weaker credit characteristics do not accurately predict how well women pay their mortgage loans. Since pricing is tied to credit characteristics and not performance, women may pay more relative to their actual risk than do similar men.

Studies on Small Business Credit Did Not Identify Gender Differences in Borrowing Costs but Found Mixed Evidence of Differences for Access to Credit
Studies we reviewed on small business loans generally did not find differences in interest rates, though some found differences in denial rates and other accessibility issues between female- and male-owned firms. Most of the studies we reviewed used data from the 1993, 1998, or 2003 Survey of Small Business Finances (SSBF), which could limit the applicability or relevance of their findings today.
A study that analyzed data from the 1993 SSBF did not find evidence that businesses owned by women paid more for credit than firms owned by white men. However, when the authors took into account market concentration and competition, they found that white female-owned firms experienced increased denial rates in less competitive markets. In addition, the study found that women may avoid applying for credit in those markets because of the fear of being denied. For example, almost half of all small business owners who needed credit reported that they did not apply for credit, and these rates were even higher for businesses owned by women and minorities. Other studies found that women may have less access to small business credit than men, in part because of higher denial rates and because they may not apply for credit out of fear of rejection. For example, one study found that women-owned firms have higher loan denial rates compared with men; however, this is mainly due to differences in business characteristics of female- and male-owned firms. The authors also found that even when denial rates are the same for small businesses with similar characteristics, women’s loan application rates are lower, suggesting that women may be discouraged from applying for credit by the higher overall denial rates for female-owned firms. Another study by one of the same authors examined the reasons why female borrowers may be discouraged from applying for a business loan compared with male business owners and found that it was mainly because they feared that their applications would be rejected. A third study by the same author found that women in general did not have less access to credit than men, though newer female-owned firms received significantly lower loan amounts than requested compared with their male-owned counterparts. Similarly, the study also found that women with few years of experience managing or owning a business received significantly lower loan amounts compared with men with similar years of experience. A fourth study looked at six different types of loans, including lines of credit, and found that white women were significantly more likely than white men to avoid applying for a loan because they assumed they would be denied. However, once the authors’ model controlled for education differences, all gender disparities in applying for credit disappeared, though white women were still less likely than white men to have loans.

Studies Found That Men and Women Paid or Were Quoted Different Prices for Auto Purchases and Auto Repairs
Studies we reviewed on auto purchases and repairs found that a seller’s expectation of what customers are willing to pay and how informed they seem can differ by gender, which can affect the price customers are quoted. However, these studies were published in 1995 and 2001, which may limit the applicability or relevance of their findings today. The 2001 study we reviewed on auto purchases found that though women paid higher prices than men for car purchases on average, these differences declined when cars were purchased online. The authors suggest that this may be because Internet consumers can effectively convey their level of price knowledge and therefore may seem better informed to sellers. They also suggest it could be because dealerships have less information about online consumers and their willingness to pay, which may limit the extent of price differentiation.
The 1995 study on auto purchases found that dealers quoted significantly lower prices to white males than to female or African American test buyers who used identical, scripted bargaining strategies, in part because dealers may have made assumptions about women’s willingness to bargain for lower prices. We also reviewed one study on auto repairs that found that women were quoted higher prices than men if they seemed uninformed about the cost of car repair when requesting a quote, but the price differences disappeared if the study participant mentioned an expected price. The study suggests that a potential explanation for this result could be that auto repair shops expect women to accept a price that is higher than the market average and men to accept a price below it.

Federal Agencies Have Identified Limited or No Consumer Concerns about Price Differences Based on Sex or Gender

Federal Agencies Monitor Consumer Complaints and Identified Limited Examples of Concerns of Price Differences Based on the Consumer’s Sex or Gender
BCFP and HUD have responsibilities to monitor consumer complaints in the consumer credit and housing markets, respectively. Additionally, FTC monitors complaints about the consumer credit and consumer goods markets. All three agencies play a role in potentially monitoring or addressing issues of gender-related price differences and have online complaint forms for submission of consumer complaints:
- BCFP collects and reviews consumer complaints about financial products and services and provides complaints and related data in its Consumer Complaint Database. In 2017 BCFP received approximately 320,200 consumer complaints. The products that generated the most complaints in 2017 were “Credit or consumer reporting,” “Debt collection,” and “Mortgage.” According to BCFP officials, BCFP also analyzes loan and demographics data collected through HMDA and other data sources to monitor and identify market trends. In addition, BCFP and the federal financial regulators examine fair lending practices of the institutions they regulate, and examinations by FDIC and NCUA have uncovered sex discrimination in credit products.
- FTC receives complaints that are stored in the Consumer Sentinel Network, a database of consumer complaints received by FTC, as well as those filed with other federal and state agencies and organizations, such as mass marketing fraud complaints from the Council of Better Business Bureaus. The complaints in the Consumer Sentinel Network focus on consumer fraud, identity theft, and other consumer protection matters, such as debt collection, and can include complaints related to consumer credit markets.
- HUD receives consumer complaints about potential FHA violations through its website, via its toll-free phone hotline, and in writing. HUD monitors those complaints through its online HUD Enforcement Management System. HUD investigates all complaints for which it has jurisdictional authority. HUD may monitor complaints to identify trends, but HUD officials stated that the agency does not generally monitor consumer credit and housing market data, absent a specific complaint. In cases where HUD has jurisdictional authority under FHA, HUD offers conciliation between the parties. If resolution is not reached, and HUD determines there is reasonable cause to believe a violation has occurred, the parties may elect to have the matter heard in U.S. District Court or at HUD.
In their oversight of federal antidiscrimination statutes, BCFP officials said they have not identified significant consumer concerns about price differences based on a consumer’s sex or gender. FTC and HUD officials identified some examples of concerns of this nature. For example, FTC has taken enforcement actions alleging unlawful race- and gender-related price differences. HUD has also identified several cases where pregnant women and their partners applied for a mortgage while the woman was on maternity leave, and the couple’s mortgage loan application was denied.

Our Analysis of Federal Agency Data Identified Few Consumer Complaints about Price Differences Based on Sex or Gender
BCFP, FTC, and HUD have received few consumer complaints about price differences related to sex or gender, according to our analysis of a sample of each agency’s 2012–2017 complaint data (see table 3). In separate samples of 100 gender-related complaints at BCFP, HUD, and FTC, we found that 0, 4, and 1 complaint, respectively, were related to price differences based on sex or gender. Three of the complaints from HUD also cited differences in price based on other protected classes (such as race or ethnicity). Half of the academic experts and consumer groups we interviewed told us that in some markets it is difficult for consumers to observe and compare prices paid by other consumers, such as when prices are not posted or can be negotiated (e.g., car sales). In such cases, consumers may not know if other consumers are paying a higher or lower price than the price quoted to them. Most academic experts also told us that when consumers are aware that price differences could exist, they may make different decisions when making purchases. Additionally, officials from BCFP noted that price differences related to gender may be difficult for consumers to identify, or that consumers may not know where to complain.

Agencies Provide Resources on Discrimination and Have Not Developed Other Consumer Education Efforts on Gender in Part Due to Limited Public Complaints
BCFP, FTC, and HUD provide general consumer education resources on discrimination (i.e., a consumer guide or website) and consumer awareness. Officials from BCFP and HUD said they have not identified a need to develop other consumer education resources specific to gender-related price differences. For example, BCFP’s print and online consumer education materials are intended to inform consumers of their rights and protections related to credit discrimination, which includes discrimination based on sex or gender. The three agencies’ consumer education materials also provide advice that could help consumers avoid paying higher prices regardless of their gender—such as home-buying resources and resources on comparison shopping. However, the agencies have not developed additional educational resources focused specifically on potential gender-related price differences, in part because few complaints on this topic have been collected in their databases, agency officials told us. FTC officials noted that the agency tries to focus its education efforts on topics that will have the greatest benefit to consumers, often determined by information it gathers through complaints and investigations. Representatives of five consumer groups and industry associations told us that they have received few complaints about gender-related price differences.
However, four consumer groups noted that low concern could be the result of consumers being unaware of price differences related to gender. For example, as indicated above, price differences related to gender may be difficult for consumers to identify when they cannot determine whether they are paying a higher price than others. Representatives of two retailing industry associations similarly stated that they have not heard concerns about price differences related to gender.

Some State and Local Governments Have Passed Laws to Address Concerns about Gender-related Price Differences
In response to consumer complaints or concerns about gender disparities in pricing, at least one state (California) and two municipalities (Miami-Dade County and New York City) have passed laws or ordinances to prohibit businesses from charging different prices for the same or similar goods or services solely based on gender (see table 4). In addition, two of these laws included requirements related to promoting price transparency. California enacted the Gender Tax Repeal Act of 1995, which prohibits businesses from charging different prices for the same or similar services based on a consumer’s gender. The law also requires certain businesses to display price information and disclose prices upon request, according to state officials with whom we spoke. Similarly, in 1997, Miami-Dade County passed the Gender Pricing Ordinance, which prohibits businesses from charging different prices based solely on a consumer’s gender (though businesses are permitted to charge different prices if the goods or services involve more time, difficulty, or cost). In the same year, it also passed an ordinance that prohibits dry cleaning businesses from charging different prices for similar services based on gender. This ordinance also requires those businesses to post all prices on a clear and conspicuous sign, according to county officials with whom we spoke. State and local officials we interviewed identified benefits and challenges associated with these laws. For example, California, New York City, and Miami-Dade County officials noted that these laws give them the ability to intervene to address pricing practices that may lead to discrimination based on gender. In addition, California state officials said that the state’s efforts to implement the Gender Tax Repeal Act helped to improve consumer awareness about gender price differences. However, officials from California and Miami-Dade County cited challenges associated with tracking relevant complaints. For example, Miami-Dade County’s online complaint form includes a narrative section but does not ask for the complainant’s gender. Consumers do not always identify their gender in the narrative or state that it was the reason for their treatment. Additionally, officials from California and Miami-Dade County stated that seeking out violations would be very resource-intensive, and they rely on residents to submit complaints about violations.

Agency Comments
We provided a draft of this report to BCFP, DOJ, FTC, and HUD. BCFP, FTC, and HUD provided technical comments on the report draft, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, BCFP, DOJ, FTC, HUD, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Nielsen Retail Price Data Analysis Methodology
We used a multivariate regression model to estimate the effect of the gender to which a product is targeted on the price of that product, while controlling for other factors that may also affect the product’s price. The factors that we controlled for were the product size, promotional and packaging costs, and other product characteristics discussed in detail later. We used scanner data from the Nielsen Company (Nielsen) for calendar year 2016 and analyzed the following 10 product categories: (1) underarm deodorants, (2) body deodorants, (3) shaving cream, (4) shaving gel, (5) disposable razors, (6) nondisposable razors, (7) razor blades, (8) designer perfumes, (9) mass-market perfumes, and (10) mass-market body sprays. We estimated the following regression model for each of our 10 product categories:

P = α + β*Male + λ*Size + θ*Owner + η*Promotion + μ*X + δ*Y + ε

The dependent variable P in the above equation represents price. For our analysis, we constructed two measures of price. The first is the item price, estimated as the total dollar sales of an item (each item is depicted by a unique Universal Product Code (UPC) in the Nielsen data), divided by the total units sold of that item. The second measure of price that we use is price per ounce or price per count. This is estimated as the item price divided by the total quantity of product, where quantity or size depicts the number of ounces (as in the case of fragrances) or the count of blades in razor blade packs. The total quantity of the product is the ounces or counts of one item multiplied by the number of items included in a specific product configuration. For example, a 2-pack of deodorant sticks where each deodorant stick is 2.7 ounces would be a total quantity of 5.4 ounces. The variable Male in the above equation is an indicator variable depicting whether the product is designated as a “men’s” product in the Nielsen data. It takes a value of “1” for men’s products and a value of “0” for women’s products. The coefficient for this variable, parameter β, therefore shows the price difference between a men’s and a women’s product. A negative value would imply a lower price for products designated as men’s products. The variable Size represents the most appropriate specification of the size of the product. Owner is a set of indicator variables representing all the brand owners selling a particular product. The brand of a product can be expected to have a substantial effect on prices for the kind of products we analyze because brands can be a proxy for quality for some consumers. However, we also found that firms often create gender-specific brands, so holding brands constant rendered most gender-based price comparisons infeasible. To overcome this, we hold owners instead of brands constant for our price comparison analysis. The variable Promotion represents the percentage of dollar sales that were sold on any type of promotion.
This variable proxies for promotional costs to some extent, based on the assumption that the greater the proportion of sales due to promotional activity, the greater the promotional costs. The variables X represent a set of indicator variables for packaging characteristics such as package delivery method (for example, roll-on or aerosol spray deodorants) or package shape (for example, bottle, tube, or can). We expect these characteristics to proxy for different costs associated with different packaging methods. The variables Y represent a set of indicator variables representing different product characteristics (for example, forms such as gel stick or smooth solid and claims such as “active cooling” or “anti-wetness” for underarm deodorants, and blade types such as “triple edge” and “flexible six” for razors). These product characteristics may proxy for some underlying manufacturing costs or even consumer preferences. Since firms may create gender-specific product attributes—scents like “sweet petals” and “pure sport” or razor head types and colors to differentiate products between genders—we did not always keep every product attribute constant when comparing prices. The idiosyncratic error term is represented by ε. All of our regressions are weighted, with the proportion of units sold for a particular item in that year as the weight. This is because, for personal care products, there are large differences in units sold of various product types and brands, and it is therefore not useful to compare simple unweighted average prices. For example, for one company the highest selling men’s deodorant stick sold almost 12 million units in 2016, and the highest selling women’s deodorant stick sold over 8 million units. The average units sold for underarm deodorants as a whole was just over 300,000 units, and 1,000 products out of a total of almost 3,000 products had fewer than 100 units sold in 2016. The linear model we used has the usual shortcomings of being subject to specification bias to the extent the relationship between price and each of the independent variables is not linear. The model also does not include complete data on costs, such as advertising and packaging, or consumers’ willingness to pay, both of which have an effect on the price differences. The model may thus also be subject to omitted variable bias. In addition, the model may have some endogeneity issues to the extent the product characteristics themselves are influenced by consumers’ willingness to pay for some of those product features. To reduce the impact of any model misspecifications or heteroscedasticity, we used the robust (or Huber-White sandwich) estimator. We estimated the regression model above for each of the 10 products separately and for each of the two measures of price. We used Nielsen’s in-store, retail price scanner data, which include information on total volume sold and dollar sales for items purchased at 228 retailers including grocery stores, drug stores, mass merchandisers (such as Target), dollar stores, club stores (such as Sam’s Club), and convenience stores. The data capture 82 percent of all U.S. sales. Nielsen also projects sales for the remaining noncooperating retailers, and that information is included in this dataset. We excluded some very small brands that did not have enough units sold from our regression analysis in order to avoid outliers. These brands usually had fewer than 50,000 units sold over the entire year, and for some products they represented less than 1 percent of all units sold.
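To make the estimation concrete, the following is a minimal sketch of the weighted regression described above, written in Python with pandas and statsmodels. The input file and the column names (dollar_sales, units_sold, total_ounces, male, promo_share, owner, package_type, form) are hypothetical stand-ins for the Nielsen fields, not a reproduction of our actual estimation code.

```python
# Minimal sketch of the weighted price regression described above; the file
# and column names are hypothetical stand-ins for the Nielsen fields.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nielsen_deodorant_2016.csv")  # hypothetical extract, one row per UPC

# Two price measures: average item price and average price per ounce.
df["item_price"] = df["dollar_sales"] / df["units_sold"]
df["price_per_oz"] = df["item_price"] / df["total_ounces"]

# Weight each item by its share of category units sold during the year.
df["weight"] = df["units_sold"] / df["units_sold"].sum()

# Weighted least squares; C() expands brand owner, packaging, and product
# attributes into sets of indicator variables, and HC1 gives robust
# (Huber-White sandwich) standard errors.
model = smf.wls(
    "price_per_oz ~ male + total_ounces + promo_share"
    " + C(owner) + C(package_type) + C(form)",
    data=df,
    weights=df["weight"],
).fit(cov_type="HC1")

# The coefficient on `male` is the estimated gender price difference;
# a negative value implies a lower price for men's versions.
print(model.params["male"], model.bse["male"], model.pvalues["male"])
```

Estimating the same specification with item_price as the dependent variable gives the second price measure; repeating both specifications across the 10 categories produces results of the kind summarized in table 5.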
We found that average retail prices paid were significantly higher for women’s products than for men’s in 5 out of 10 personal care product categories. In 2 categories, men’s versions sold at a significantly higher price. One category had mixed results based on the two price measures analyzed, and two others showed no significant gender price differences. A summary of our regression results is presented in table 5.

Appendix II: Collection of Online Prices for Selected Personal Care Products
We manually collected prices for 16 pairs of selected personal care products from the websites of four online retailers that also operated physical store locations. We selected comparable pairs of similar men’s and women’s products that were differentiated by product attributes, such as scent or color, and were sold at most or all of the four retailers. The products were selected based on several comparability factors such as brand, product claims, and number of blades in a razor. We collected prices manually between 1:00 p.m. and 7:00 p.m. (ET) during two 1-week periods, in January and March 2018. We collected listed prices and did not adjust the prices for any promotions that were available, such as online coupons or buy-one-get-one-free offers. Table 6 presents the results of our online price collection. These results have important limitations:
- The average prices shown are not generalizable to the broader universe of prices for these products sold at other times or by other online retailers.
- The data reflect prices advertised to consumers rather than the prices consumers actually paid.
- The data do not capture the volume of sales for each item for each retailer; in our analysis, we weighted all advertised prices equally across the retailers. As a result, differences we found within these advertised prices may not have translated into comparable differences in prices female and male consumers paid for these products online.
- The prices do not reflect any promotional discounts, volume discounts, or other discounts that may have been available to some or all consumers.

Appendix III: Objectives, Scope, and Methodology
This report examines (1) how prices compared for selected categories of consumer goods that are differentiated for men and women, and potential reasons for any significant price differences; (2) what is known about the extent to which men and women may pay different prices in, or experience different levels of access to, markets for credit and goods and services that are not differentiated based on gender; (3) the extent to which federal agencies have identified and taken steps to address any concerns about gender-related price differences; and (4) state and local government efforts to address concerns about gender-related price differences. To compare prices for selected goods that are differentiated for men and women, we purchased and analyzed Nielsen Company (Nielsen) data on retail prices paid for 10 personal care product categories for calendar year 2016. The product categories included underarm deodorants, body deodorants (typically sold as a spray), disposable razors, nondisposable razors, razor blades, shaving creams, shaving gels, and three categories of fragrances. We selected these categories of personal care products because they are commonly purchased consumer goods that were categorized by gender in the Nielsen data.
The women’s and men’s versions of the personal care products we selected are generally more similar in form, size, and packaging than are products in certain other consumer product categories that are also differentiated by gender, such as clothing. We used regression models to analyze data on retail prices paid for the 10 categories of personal care products differentiated for women and men. To assess the reliability of the Nielsen data, we reviewed relevant documentation and conducted interviews with Nielsen representatives to review steps they took to collect and ensure the reliability of the data. In addition, we electronically tested data fields for missing values, outliers, and obvious errors (checks of the kind sketched later in this appendix). We determined that these data were sufficiently reliable for our purposes. For more details on the methodology for, and limitations of, our analysis of these retail price data, see appendix I. We also manually collected listed prices for 16 pairs of selected personal care products from four different retailer websites over two 7-day periods in January and March 2018. For each pair, we selected comparable men’s and women’s products that were differentiated by product attributes, such as scent or color, and were commonly sold across retailers. For more details on our online price data collection and the limitations associated with interpreting the results, see appendix II. To examine what is known about the extent to which men and women may be offered different prices or access for the same goods or services, we reviewed academic literature identified through a literature search covering the last 25 years. To identify existing studies from peer-reviewed journals, we conducted searches using subject and keyword searches of various databases, such as EconLit, Scopus, ProQuest, and Social SciSearch. We also used a snowball search technique, meaning we reviewed relevant academic literature cited in our selected studies, to identify additional studies. We performed these searches and identified articles from December 2016 to April 2018. From these searches, we identified 21 studies that appeared in peer-reviewed journals or research institutions’ publications from 1995 through 2016 and were relevant to gender-related price differences for the same products. We reviewed and assessed each study’s evaluation methodology based on generally accepted social science standards. See the bibliography at the end of this report for a list of the 21 studies. We then summarized the research findings. A GAO economist read and assessed each study, using the same data collection instrument. The assessment focused on information such as the types of disparities examined, the research design and data sources used, and methods of data analysis. The assessment also focused on the quality of the data used in the studies as reported by the researchers and any limitations of data sources for the purposes for which they were used. A second GAO economist reviewed each completed data collection instrument to verify the accuracy of the information included. Through this process, we determined that the 21 studies we selected for our review met our criteria for methodological quality. We found the studies we reviewed to be reliable for purposes of determining what is known about price differences for the same products. However, these studies have important limitations, such as using nonrepresentative data samples, and the results are not generalizable.
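The electronic data-reliability testing described above can be illustrated in a few lines. The following is a minimal, illustrative sketch in Python with pandas; the file and field names are hypothetical stand-ins rather than Nielsen's actual schema, and the three-standard-deviation outlier threshold is an assumption for illustration.

```python
# Minimal sketch of electronic data-reliability checks: missing values,
# obvious errors, and outliers. File and field names are hypothetical.
import pandas as pd

df = pd.read_csv("nielsen_scanner_2016.csv")  # hypothetical extract

# Missing values in key fields.
key_fields = ["upc", "category", "dollar_sales", "units_sold", "total_ounces"]
print(df[key_fields].isna().sum())

# Obvious errors: nonpositive sales, units, or sizes.
bad = df[(df["dollar_sales"] <= 0) | (df["units_sold"] <= 0) | (df["total_ounces"] <= 0)]
print(f"{len(bad)} records with nonpositive values")

# Outliers: item prices more than 3 standard deviations from the mean
# of their product category (illustrative threshold).
df["item_price"] = df["dollar_sales"] / df["units_sold"]
grouped = df.groupby("category")["item_price"]
z = (df["item_price"] - grouped.transform("mean")) / grouped.transform("std")
print(df.loc[z.abs() > 3, ["upc", "category", "item_price"]])
```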
To examine the federal role in overseeing gender-related price differences, we reviewed relevant federal statutes and agency guidance, and interviewed officials from the Federal Trade Commission (FTC), Bureau of Consumer Financial Protection (BCFP), the Department of Housing and Urban Development (HUD), and the Department of Justice (DOJ). To help identify the extent of concerns about gender-related price differences, we interviewed representatives from eight consumer groups, three industry associations, and four academic experts. Additionally, we reviewed a sample of consumer complaints from databases managed by BCFP, FTC, and HUD (the Consumer Complaint Database, Consumer Sentinel Network, and Enforcement Management System, respectively). Complaints were submitted by consumers across the United States about various financial products, housing grievances, and other consumer protection concerns. To identify our universe of gender-related consumer complaints in the BCFP and FTC databases, we used the following search terms that targeted sex or gender discrimination: discriminat, unfair, treat, decept, abus, female, woman, women, man, men, male, gender, and sex. HUD’s consumer complaint database is categorized by protected class (e.g., race, sex, national origin), so we did not need to use search terms to identify gender-related complaints. For the years 2012 through 2017, we identified 6,117 BCFP consumer complaint narratives; 10,472 FTC consumer complaint narratives; and 5,421 HUD consumer complaint narratives that were relevant to our scope. We then drew a stratified random probability sample of 100 gender-related consumer complaints from each database. To determine which complaints in our samples were about price differences related to gender or sex, two team members read through each complaint narrative and coded whether or not the complainant’s narrative indicated that they felt that they paid or were charged more because of their gender or sex. A third team member conducted a final review of the results and made a final determination in cases where the first two team members’ assessments differed. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. We followed a probability procedure based on random selections, and our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (with a margin of error of 5.9 percent). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. We assessed the reliability of these data by reviewing documentation and interviewing agency officials about the databases used to collect these complaints. We determined that these data were sufficiently reliable for our purposes of identifying complaints of gender-related price differences. To explore state and local efforts to address concerns about gender-related price differences, we conducted a literature search and identified three state or local laws or ordinances that specifically address gender-related price differences: in California; Miami-Dade County, Florida; and New York City, New York.
We reviewed these laws and ordinances and interviewed officials from these jurisdictions to discuss motivations for, oversight of, and the impact of these laws. We conducted this performance audit from October 2016 to August 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix IV: Descriptive Statistics of Nielsen Retail Price Data
For each of the 10 personal care product categories we analyzed, we compared the overall average prices for women’s products and men’s products using two measures of average price: average item price and average price per ounce or count. While the second price measure adjusts the average price for quantity of product, these comparisons did not take into account the effect on price of differences in product brand, packaging, and other characteristics. As shown in table 7, adjusting the average item price to account for differences in product quantity (ounces or count) significantly affected the size and magnitude of gender price differences for several product categories. This is because men’s products in the dataset were frequently larger in size or count compared with women’s products in the same category. For example, women’s disposable razors sold for 11 percent less than those targeted to men when we compared average item prices. However, when we compared average price per count of razors, women’s disposable razors sold for 19 percent more on average than men’s. This is because women’s disposable razors had on average about one fewer razor per package. In 5 out of 10 product categories, women’s versions of the product on average sold for a higher price per ounce or count than men’s, and these differences were statistically significant at the 95 percent confidence level for four products and at the 90 percent level for one product. Information about sales and relative sizes of different products targeted to men and women is presented in table 8 below.

Appendix V: Selected Federal Agency Consumer Complaint Processes
This appendix provides additional details about the consumer complaint processes at the Bureau of Consumer Financial Protection (BCFP), Federal Trade Commission (FTC), and Department of Housing and Urban Development (HUD). Consumers with a complaint about unfair treatment related to gender could submit a complaint to one of these agencies. BCFP and FTC monitor consumer complaints related to violations under the Equal Credit Opportunity Act, while HUD and the Department of Justice (DOJ) investigate housing discrimination complaints under the Fair Housing Act. These complaints could be about price differences because of gender.

Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
Alicia Puente Cackley, (202) 512-8678 or cackleya@gao.gov.
Staff Acknowledgments
In addition to the contact named above, John Fisher (Assistant Director), Jeff Harner (Analyst in Charge), Vida Awumey, Bethany Benitez, Namita Bhatia-Sabharwal, Kelsey Kreider, and Kelsey Sagawa made key contributions to this report. Also contributing to this report were Abigail Brown, Michael Hoffman, Jill Lacey, Oliver Richard, Tovah Rom, and Paul Schmidt.
Appendix VII: Bibliography
We reviewed literature to identify what is known about the extent to which female and male consumers may face different prices or access in markets for credit and goods and services that are not differentiated based on gender. This bibliography contains citations for the 20 studies and articles that we reviewed that compared prices or access for female and male consumers in markets where the product is not differentiated by gender (mortgages, small business credit, auto purchases, and auto repairs).

Asiedu, Elizabeth, James A. Freeman, and Akwasi Nti-Addae. “Access to Credit by Small Businesses: How Relevant Are Race, Ethnicity, and Gender?” The American Economic Review, vol. 102, no. 3 (2012): 532-537.
Ayres, Ian and Peter Siegelman. “Race and Gender Discrimination in Bargaining for a New Car.” The American Economic Review, vol. 85, no. 3 (1995): 304-321.
Blanchard, Lloyd, Bo Zhao, and John Yinger. “Do lenders discriminate against minority and woman entrepreneurs?” Journal of Urban Economics, vol. 63 (2008): 467-497.
Blanchflower, David G., Phillip B. Levine, and David J. Zimmerman. “Discrimination in the Small-Business Credit Market.” The Review of Economics and Statistics, vol. 85, no. 4 (2003): 930-943.
Busse, Meghan R., Ayelet Israeli, and Florian Zettelmeyer. “Repairing the Damage: The Effect of Price Expectations on Auto Repair Price Quotes.” National Bureau of Economic Research, Working Paper 19154 (2013).
Cavalluzzo, Ken S., Linda C. Cavalluzzo, and John D. Wolken. “Competition, Small Business Financing, and Discrimination: Evidence from a New Survey.” The Journal of Business, vol. 75, no. 4 (2002): 641-679.
Cheng, Ping, Zhenguo Lin, and Yingchun Liu. “Do Women Pay More for Mortgages?” The Journal of Real Estate Finance and Economics, vol. 43 (2011): 423-440.
Cheng, Ping, Zhenguo Lin, and Yingchun Liu. “Racial Discrepancy in Mortgage Interest Rates.” The Journal of Real Estate Finance and Economics, vol. 51 (2015): 101-120.
Cole, Rebel, and Tatyana Sokolyk. “Who Needs Credit and Who Gets Credit? Evidence from the Surveys of Small Business Finances.” Journal of Financial Stability, vol. 24 (2016): 40-60.
Coleman, Susan. “Access to Debt Capital for Women- and Minority-Owned Small Firms: Does Educational Attainment Have an Impact?” Journal of Developmental Entrepreneurship, vol. 9, no. 2 (2004): 127-143.
Duesterhaus, Megan, Liz Grauerholz, Rebecca Weichsel, and Nicholas A. Guittar. “The Cost of Doing Femininity: Gendered Disparities in Pricing of Personal Care Products and Services.” Gender Issues, vol. 28 (2011): 175-191.
Goodman, Laurie, Jun Zhu, and Bing Bai. “Women Are Better than Men at Paying Their Mortgages.” Urban Institute, Research Report (2016).
Haughwout, Andrew, et al. “Subprime Mortgage Pricing: The Impact of Race, Ethnicity, and Gender on the Cost of Borrowing.” Brookings-Wharton Papers on Urban Affairs (2009): 33-63.
Mijid, Naranchimeg. “Gender differences in Type 1 credit rationing of small businesses in the US.” Cogent Economics & Finance, vol. 3 (2015).
Mijid, Naranchimeg. “Why are female small business owners in the United States less likely to apply for bank loans than their male counterparts?” Journal of Small Business & Entrepreneurship, vol. 27, no. 2 (2015): 229-249.
Mijid, Naranchimeg and Alexandra Bernasek. “Gender and the credit rationing of small businesses.” The Social Science Journal, vol. 50 (2013): 55-65.
Morton, Fiona Scott, Florian Zettelmeyer, and Jorge Silva-Risso.
“Consumer Information and Price Discrimination: Does the Internet Affect the Pricing of New Cars to Women and Minorities?” National Bureau of Economic Research, Working Paper 8668 (2001).
O’Connor, Sally. “The Impact of Gender in the Mortgage Credit Market.” University of Wisconsin-Milwaukee Doctoral Dissertation (1996).
Van Rensselaer, Kristy N., et al. “Mortgage Pricing and Gender: A Study of New Century Financial Corporation.” Academy of Accounting and Financial Studies Journal, vol. 18, no. 4 (2014): 95-110.
Wyly, Elvin and C.S. Ponder. “Gender, age, and race in subprime America.” Housing Policy Debate, vol. 21, no. 4 (2011): 529-564.
Zimmerman Treichel, Monica and Jonathan A. Scott. “Women-Owned Businesses and Access to Bank Credit: Evidence from Three Surveys Since 1987.” Venture Capital, vol. 8, no. 1 (2006): 51-67.
Why GAO Did This Study
Gender-related price differences occur when consumers are charged different prices for the same or similar goods and services because of factors related to gender. While variation in costs and consumer demand may give rise to such price differences, some policymakers have raised concerns that gender bias may also be a factor. While the Equal Credit Opportunity Act and Fair Housing Act prohibit discrimination based on sex in credit and housing transactions, no federal law prohibits businesses from charging consumers different prices for the same or similar goods targeted to different genders. GAO was asked to review gender-related price differences for consumer goods and services sold in the United States. This report examines, among other things, (1) how prices compared for selected goods and services marketed to men and women, and potential reasons for any price differences; (2) what is known about price differences for men and women for products not differentiated by gender, such as mortgages; and (3) the extent to which federal agencies have identified and addressed any concerns about gender-related price differences. To examine these issues, GAO analyzed retail price data, reviewed relevant academic studies, analyzed federal consumer complaint data, and interviewed federal agency officials, industry experts, and academics.

What GAO Found
Firms differentiate many consumer products to appeal separately to men and women by slightly altering product attributes like color or scent. Products differentiated by gender may sell for different prices if men and women have different demands or willingness to pay for these product attributes. Of 10 personal care product categories (e.g., deodorants and shaving products) that GAO analyzed, average retail prices paid were significantly higher for women's products than for men's in 5 categories. In 2 categories—shaving gel and nondisposable razors—men's versions sold at a significantly higher price. One category—razor blades—had mixed results based on two price measures analyzed, and two others—disposable razors and mass-market perfumes—showed no significant gender price differences. GAO found that the target gender for a product is a significant factor contributing to price differences identified, but GAO did not have sufficient information to determine the extent to which these gender-related price differences were due to gender bias as opposed to other factors, such as different advertising costs. Though the analysis controlled for several observable product attributes, such as product size and packaging type, all underlying differences in costs and demand for products targeted to different genders could not be fully observed. Studies GAO reviewed found limited evidence of gender price differences for four products or services not differentiated by gender—mortgages, small business credit, auto purchases, and auto repairs. For example, with regard to mortgages, women as a group paid higher average mortgage rates than men, in part due to weaker credit characteristics, such as lower average income. However, after controlling for borrower credit characteristics and other factors, three studies did not find statistically significant differences in borrowing costs between men and women, while one found women paid higher rates for certain subprime loans.
In addition, one study found that female borrowers defaulted less frequently than male borrowers with similar credit characteristics, and the study suggested that women may pay higher mortgage rates than men relative to their default risk. While these studies controlled for factors other than gender that could affect borrowing costs, several lacked important data on certain borrower risk characteristics, such as credit scores, which could affect analysis of gender disparities. Also, several studies analyzed small samples of subprime loans that were originated in 2005 or earlier, which limits the generalizability of the results. In their oversight of federal antidiscrimination statutes, the Bureau of Consumer Financial Protection, Federal Trade Commission, and Department of Housing and Urban Development have identified limited consumer concerns about gender-related pricing differences. GAO's analysis of complaint data the three agencies received from 2012 through 2017 found few consumer complaints about gender-related price differences. The agencies provide general consumer education resources on discrimination and consumer awareness. However, given the limited consumer concern, they have not identified a need to incorporate additional materials specific to gender-related price differences into their existing consumer education resources.
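Comparisons like GAO's retail price analysis are commonly framed as a hedonic regression: price is regressed on a target-gender indicator plus attribute controls, so the gender coefficient captures whatever gap the controls cannot explain. The sketch below is purely illustrative; it is not GAO's actual model or data, and every column name and value in it is hypothetical.

```python
# Purely illustrative hedonic price regression; hypothetical data, not GAO's.
# The coefficient on target_women estimates the average log-price gap between
# women's and men's versions after controlling for size and packaging.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "price":        [4.99, 5.49, 6.19, 5.79, 3.99, 4.29, 4.59, 5.09],
    "target_women": [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = marketed to women
    "size_oz":      [2.6, 3.0, 2.6, 3.0, 2.6, 3.0, 2.6, 3.0],
    "packaging":    ["stick", "gel", "gel", "stick"] * 2,
})

# Using log price lets the gender coefficient read as an approximate
# percentage difference in price.
df["log_price"] = np.log(df["price"])

model = smf.ols("log_price ~ target_women + size_oz + C(packaging)", data=df).fit()
print(model.params)
```

A statistically significant coefficient on the gender indicator would signal a price difference that the observed attributes do not explain, which is the logic behind the finding that target gender was a significant factor contributing to the price differences identified.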
DHS Has Made Progress Addressing Past Challenges, but Some Actions Are Still in Progress Our past work has identified progress and challenges in a number of areas related to DHS's management of the CFATS program, including (1) identifying high-risk chemical facilities; (2) assessing risk and prioritizing facilities; (3) reviewing and approving facility security plans; (4) conducting facility compliance inspections; and (5) conducting stakeholder outreach and gathering feedback. DHS has made a number of programmatic changes to CFATS in recent years that may also affect its progress in addressing our open recommendations; these changes are included as part of our ongoing review of the program. Identifying High-Risk Chemical Facilities In May 2014, we found that more than 1,300 facilities had reported to DHS that they held ammonium nitrate. However, based on our review of state data and records, there were more facilities with ammonium nitrate holdings than those that had reported to DHS under the CFATS program. Thus, we concluded that some facilities that were required to report may have failed to do so. We recommended that DHS work with other agencies, including the Environmental Protection Agency (EPA), to develop and implement methods of improving data sharing among agencies and with states as members of a Chemical Facility Safety and Security Working Group. DHS agreed with our recommendation and has since addressed it. Specifically, DHS compared its data with data from other federal agencies, such as EPA, as well as member states from the Chemical Facility Safety and Security Working Group to identify potentially noncompliant facilities. As a result of this effort, in July 2015, DHS officials reported that they had identified about 1,000 additional facilities that should have reported information to comply with CFATS and subsequently contacted these facilities to ensure compliance. DHS officials told us that they continue to engage with states to identify potentially noncompliant facilities. For example, as of June 2018, DHS officials stated they have received 43 lists of potentially noncompliant facilities from 34 state governments, which are in various stages of review by DHS. DHS officials also told us that they recently hired an individual to serve as the lead staff member responsible for overseeing this effort.
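The cross-agency comparison described above is, at bottom, a set comparison between facility lists. The sketch below only illustrates that idea; the facility identifiers are made up, and real matching would also have to reconcile facility names, addresses, and reporting thresholds.

```python
# Illustrative cross-list comparison; all identifiers are hypothetical,
# not actual DHS, EPA, or state data. Facilities that appear on a partner
# agency's list of chemical holders but not among CFATS reporters are
# candidates for follow-up as potentially noncompliant.
cfats_reporters = {"FAC-001", "FAC-002", "FAC-005"}
epa_holders     = {"FAC-001", "FAC-003", "FAC-005", "FAC-007"}
state_holders   = {"FAC-002", "FAC-003", "FAC-008"}

potentially_noncompliant = (epa_holders | state_holders) - cfats_reporters
print(sorted(potentially_noncompliant))  # ['FAC-003', 'FAC-007', 'FAC-008']
```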
DHS has also taken action to strengthen the accuracy of data it uses to identify high-risk facilities. In July 2015, we found that DHS used self-reported and unverified data to determine the risk categorization for facilities that held toxic chemicals that could threaten surrounding communities if released. At the time, DHS required that facilities self-report the Distance of Concern—an area in which exposure to a toxic chemical cloud could cause serious injury or fatalities from short-term exposure—as part of its Top-Screen. We estimated that more than 2,700 facilities with a toxic release threat had misreported the Distance of Concern and therefore recommended that DHS (1) develop a plan to implement a new Top-Screen to address errors in the Distance of Concern submitted by facilities, and (2) identify potentially miscategorized facilities that could cause the greatest harm and verify that the Distance of Concern these facilities report is accurate. DHS has fully addressed both of these recommendations. Specifically, DHS implemented an updated Top-Screen in October 2016 and now collects data from facilities and calculates the Distance of Concern itself, rather than relying on the facilities' calculation. In response to our second recommendation, in November 2016, DHS officials stated they completed an assessment of all Top-Screens that reported threshold quantities of toxic release chemicals of interest and identified 158 facilities with the potential to cause the greatest harm. As of May 2017, according to officials from DHS's Infrastructure Security Compliance Division (ISCD), 156 of the 158 facilities had submitted updated Top-Screens and 145 of the 156 Top-Screens had undergone a quality assurance review process. Assessing Risk and Prioritizing Facilities DHS has also taken actions to better assess regulated facilities' risks in order to place the facilities into the appropriate risk tier. In April 2013, we reported that DHS's risk assessment approach did not consider all of the elements of threat, vulnerability, and consequence associated with a terrorist attack involving certain chemicals. Our work showed that DHS's risk assessment was based primarily on consequences from human casualties, but did not consider economic consequences, as called for by the National Infrastructure Protection Plan (NIPP) and the CFATS regulation. We also found that (1) DHS's approach was not consistent with the NIPP because it treated every facility as equally vulnerable to a terrorist attack regardless of location or on-site security and (2) DHS was not using threat data for 90 percent of the tiered facilities—those tiered for the risk of theft or diversion—and was using 5-year-old threat data for the remaining 10 percent of those facilities that were tiered for the risks of release or sabotage. We recommended that DHS enhance its risk assessment approach to incorporate all elements of risk and conduct a peer review after doing so. DHS agreed with our recommendations and has made progress towards addressing them. Specifically, with regard to our recommendation that DHS enhance its risk assessment approach to incorporate all elements of risk, DHS worked with Sandia National Laboratories to develop a model to estimate the economic consequences of a chemical attack. In addition, DHS worked with Oak Ridge National Laboratory to devise a new tiering methodology, called the Second Generation Risk Engine. In so doing, DHS revised the CFATS threat, vulnerability, and consequence scoring methods to better cover the range of CFATS security issues. Additionally, with regard to our recommendation that DHS conduct a peer review after enhancing its risk assessment approach, DHS conducted peer reviews and technical reviews with government organizations and facility owners and operators, and worked with Sandia National Laboratories to verify and validate the new tiering approach. We are currently reviewing the reports and data that DHS has provided about its new tiering methodology as part of our ongoing work and will report on the results of this work later this summer. To further enhance its risk assessment approach, in fall 2016, DHS also revised its Chemical Security Assessment Tool (CSAT), which supports DHS efforts to gather information from facilities to assess their risk. According to DHS officials, the new tool—called CSAT 2.0—is intended to eliminate duplication and confusion associated with DHS's original CSAT.
DHS officials told us that they have improved the tool by revising some questions in the original CSAT to make them easier to understand; eliminating some questions; and pre-populating data from one part of the tool to another so that users do not have to retype the same information multiple times. DHS officials also told us that the facilities that have used CSAT 2.0 have provided favorable feedback that the new tool is more efficient and less burdensome than the original CSAT. Finally, DHS officials told us that as of June 2018, DHS has completed all notifications and has processed tiering results for all but 226 facilities. DHS officials stated they are currently working to identify correct points of contact to update registration information for these remaining facilities. We are currently assessing DHS's efforts to assess risk and prioritize facilities as part of our ongoing work and will report on the results of this work in our report later this summer. Reviewing and Approving Facility Site Security Plans DHS has also made progress reviewing and approving facility site security plans by reducing the time it takes to review these plans and eliminating the backlog of plans awaiting review. In April 2013, we reported that DHS revised its procedures for reviewing facilities' security plans to address DHS managers' concerns that the original process was slow, overly complicated, and caused bottlenecks in approving plans. We estimated that it could take DHS another 7 to 9 years to review the approximately 3,120 plans in its queue at that time. We also estimated that, given the additional time needed to do compliance inspections, the CFATS program would likely be implemented in 8 to 10 years. We did not make any recommendations for DHS to improve its procedures for reviewing facilities' security plans because DHS officials reported that they were exploring ways to expedite the process, such as reprioritizing resources and streamlining inspection requirements. In July 2015, we reported that DHS had made substantial progress in addressing the backlog—estimating that it could take between 9 and 12 months for DHS to review and approve security plans for the approximately 900 remaining facilities. DHS officials attributed the increased approval rate to efficiencies in DHS's review process, updated guidance, and a new case management system. Subsequently, DHS reported in its December 2016 semi-annual report to Congress that it had eliminated its approval backlog. Finally, we found in our 2017 review that DHS also took action to implement an Expedited Approval Program (EAP). The CFATS Act of 2014 required that DHS create the EAP as another option that tier 3 and tier 4 chemical facilities may use to develop and submit security plans to DHS. Under the program, facilities may develop a security plan based on specific standards published by DHS (as opposed to the more flexible performance standards used in the standard, non-expedited process). DHS issued guidance intended to help facilities prepare and submit their EAP security plans to DHS, which includes an example that identifies prescriptive security measures that facilities are to have in place. According to committee report language, the EAP was expected to reduce the regulatory burden on smaller chemical companies, which may lack the compliance infrastructure and the resources of large chemical facilities, and help DHS to process security plans more quickly.
If a tier 3 or 4 facility chooses to use the expedited option, DHS is to review the plan to determine if it is facially deficient, pursuant to the reporting requirements of the CFATS Act of 2014. If DHS approves the EAP site security plan, it is to subsequently conduct a compliance inspection. In 2017, we found that DHS had implemented the EAP and had reported to Congress on the program, as required by the CFATS Act of 2014. In addition, as of June 2018, according to DHS officials, only 18 of the 3,152 facilities eligible to use the EAP had opted to use it. DHS officials we interviewed attributed the low participation to several possible factors, including the following: DHS implemented the expedited program after most eligible facilities had already submitted standard (non-expedited) security plans to DHS; facilities may consider the expedited program's security measures to be too strict and prescriptive, not providing facilities the flexibility of the standard process; and the lack of an authorization inspection may discourage some facilities from using the expedited program because this inspection provides useful information about a facility's security. We also found in 2017 that recent changes made to the CFATS program could affect future use of the expedited program. As discussed previously, DHS has revised its methodology for determining the level of each facility's security risk, which could affect a facility's eligibility to participate in the EAP. DHS continues to apply the revised methodology to facilities regulated under the CFATS program, but it is too early to assess the impact on participation in the EAP. Inspecting Facilities and Ensuring Consistent Compliance In our July 2015 report, we found that DHS began conducting compliance inspections in September 2013, and by April 2015, had conducted inspections of 83 of the 1,727 facilities that had approved security plans. Our analysis showed that nearly half of the facilities were not fully compliant with their approved site security plans and that DHS had not used its authority to issue penalties because DHS officials found it more productive to work with facilities to bring them into compliance. We also found that DHS did not have documented processes and procedures for managing the compliance of facilities that had not implemented planned measures by the deadlines outlined in their plans. We recommended that DHS document processes and procedures for managing compliance to provide more reasonable assurance that facilities implement planned measures and address security gaps. DHS agreed and has taken steps toward implementing this recommendation. DHS updated its CFATS Enforcement Standard Operating Procedure (SOP) and has made progress on the new CFATS Inspections SOP. Once completed, these two documents collectively are expected to formally document the processes and procedures currently being used to track noncompliant facilities and ensure they implement planned measures as outlined in their approved site security plans, according to ISCD officials. DHS officials stated they expect to finalize these procedures by the end of fiscal year 2018. We are examining compliance inspections as part of our ongoing work and will report on the results of our work in our report later this summer.
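Tracking whether facilities implement planned measures by the deadlines in their approved plans is, mechanically, a comparison of commitments against dates. The sketch below illustrates that bookkeeping with hypothetical facilities, measures, and dates; it is not DHS's system or data.

```python
# Illustrative tracker for planned security measures; every facility,
# measure, and date here is hypothetical, not DHS data.
from datetime import date

planned_measures = [
    {"facility": "Facility A", "measure": "perimeter fencing", "due": date(2018, 3, 1),  "implemented": True},
    {"facility": "Facility A", "measure": "badge access",      "due": date(2018, 5, 15), "implemented": False},
    {"facility": "Facility B", "measure": "camera coverage",   "due": date(2018, 4, 30), "implemented": False},
]

def overdue(measures, as_of):
    """Return planned measures that are past due and not yet implemented."""
    return [m for m in measures if not m["implemented"] and m["due"] < as_of]

for m in overdue(planned_measures, as_of=date(2018, 6, 1)):
    print(f"{m['facility']}: '{m['measure']}' overdue since {m['due']:%Y-%m-%d}")
```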
Stakeholder Outreach and Feedback In April 2013, we reported that DHS took various actions to work with facility owners and operators, including increasing the number of visits to facilities to discuss enhancing security plans, but that some trade associations had mixed views on the effectiveness of DHS's outreach. We found that DHS solicited informal feedback from facility owners and operators in its efforts to communicate and work with them, but did not have an approach for obtaining systematic feedback on its outreach activities. We recommended that DHS take action to solicit and document feedback on facility outreach consistent with DHS efforts to develop a strategic communication plan. DHS agreed and implemented this recommendation by developing a questionnaire to solicit feedback on outreach with industry stakeholders and began using the questionnaire in October 2016. Chairman Shimkus, Ranking Member Tonko, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff members have any questions about this testimony, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this work include John Mortin, Assistant Director; Brandon Jones, Analyst-in-Charge; and Michael Lennington, Ben Emmel, and Hugh Paquette. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Thousands of facilities have hazardous chemicals that could be targeted or used to inflict mass casualties or harm surrounding populations in the United States. In accordance with the DHS Appropriations Act, 2007, DHS established the CFATS program in 2007 to, among other things, identify and assess the security risk posed by chemical facilities. DHS inspects high-risk facilities after it approves facility security plans to ensure that the facilities are implementing required security measures and procedures. This statement summarizes progress and challenges related to DHS's CFATS program management. This statement is based on prior products GAO issued from July 2012 through June 2017, along with updates conducted in June 2018 on DHS actions to address prior GAO recommendations. To conduct the prior work, GAO reviewed relevant laws, regulations, and DHS policies for administering the CFATS program, how DHS assesses risk, and data on high-risk chemical facilities. GAO also interviewed DHS officials and reviewed information on DHS actions to implement its prior recommendations. What GAO Found The Department of Homeland Security (DHS) has made progress addressing challenges that GAO's past work identified in managing the Chemical Facility Anti-Terrorism Standards (CFATS) program. The following summarizes progress made and challenges remaining in key aspects of the program. Identifying high-risk chemical facilities. In July 2015, GAO reported that DHS used self-reported and unverified data to determine the risk of facilities holding toxic chemicals that could threaten surrounding communities if released. GAO recommended that DHS better verify the accuracy of facility-reported data. DHS implemented this recommendation by revising its methodology so it now calculates the risk of toxic release itself, rather than relying on facilities to do so. Assessing risk and prioritizing facilities. In April 2013, GAO reported weaknesses in multiple aspects of DHS's risk assessment and prioritization approach. GAO made two recommendations for DHS to review and improve this process, including that DHS enhance its risk assessment approach to incorporate all of the elements of consequence, threat, and vulnerability associated with a terrorist attack involving certain chemicals. DHS launched a new risk assessment methodology in October 2016 and is currently gathering new or updated data from about 27,000 facilities to (1) determine which facilities should be categorized as high-risk because of the threat of sabotage, theft or diversion, or a toxic release and (2) assign those facilities deemed high risk to one of four risk-based tiers. GAO has ongoing work assessing these efforts and will report later this summer on the extent to which they fully address prior recommendations. Reviewing and approving facilities' site security plans. DHS is to review security plans and visit facilities to ensure their security measures meet DHS standards. In April 2013, GAO reported a 7- to 9-year backlog for these reviews and visits. In July 2015, GAO reported that DHS had made substantial progress in addressing the backlog—estimating that it could take between 9 and 12 months for DHS to review and approve security plans for the approximately 900 remaining facilities. DHS has since taken additional action to expedite these activities and has eliminated this backlog. Inspecting facilities and ensuring compliance.
In July 2015, GAO reported that DHS conducted compliance inspections at 83 of the 1,727 facilities with approved security plans. GAO found that nearly half of the inspected facilities were not fully compliant with their approved security plans and that DHS did not have documented procedures for managing facilities' compliance. GAO recommended that DHS document procedures for managing compliance. As a result, DHS has developed an enforcement procedure and a draft compliance inspection procedure and expects to finalize the compliance inspection procedure by the end of fiscal year 2018. What GAO Recommends GAO has made various recommendations to strengthen DHS's management of the CFATS program, with which DHS has generally agreed. DHS has implemented or described planned actions to address most of these recommendations.
Background USMS mission areas include fugitive apprehension, witness protection, and federal prisoner transportation, among others. There are 94 U.S. Marshals—one for each federal judicial district—who are presidentially appointed and direct agency operations in each district. U.S. Marshals generally operate autonomously from headquarters offices and divisions. USMS's current workforce consists of approximately 3,709 Deputy U.S. Marshals and Criminal Investigators and approximately 1,435 Detention Enforcement Officers and administrative employees. In general, a cadre of Deputy U.S. Marshals in each district collectively conducts various activities associated with the USMS mission areas. In addition, Deputy U.S. Marshals and Criminal Investigators who are assigned to headquarters operational divisions are located in district offices and work collectively with district employees across the 94 districts to carry out division functions. Deputy U.S. Marshals are categorized into two federal government occupational series—0082 and 1811. USMS typically hires entry-level Deputy U.S. Marshals in the 0082 series at the GS-5 or GS-7 level. At the GS-11 level, deputies automatically convert to the 1811 series and receive noncompetitive career ladder promotions through GS-12 if they complete the required waiting period for advancement to the next grade level and maintain an acceptable level of performance. For GS-13 and above, deputies must compete for promotions through the operational merit promotion process. USMS's Human Resources Division (HRD) is responsible for issuing and implementing policy guidelines, revisions, and supplements in accordance with appropriate regulations and merit system principles. HRD also periodically assesses the effectiveness of merit promotion policy, assists in filling division and district vacancies, and reports officials who inappropriately discriminate against candidates, as well as candidates who engage in improper behavior, such as willful exaggeration, misstatements, or other abuses of the application process. USMS's Office of Professional Responsibility (OPR) oversees the internal compliance review of USMS staff, division, and district offices, which assesses compliance with DOJ and USMS policies and procedures and ensures the integrity of the agency's internal controls. Federal Guidelines on Merit Promotion Policy Congress passed the Pendleton Act in 1883, establishing that federal employment should be based on merit. The nine merit system principles established by the Pendleton Act were later codified as part of the Civil Service Reform Act of 1978. The first merit principle indicates that federal personnel management should be implemented consistent with certain merit system principles, including that selection and advancement should be determined solely on the basis of relative ability, knowledge, and skills, after fair and open competition which assures that all receive equal opportunity. Title 5 of the United States Code contains the government-wide personnel management laws and related provisions generally applicable to federal employment. While title 5 of the United States Code generally outlines the rules agencies must follow to make appointments in the competitive service, excepted service, and the senior executive service, agencies have significant discretion to design and implement internal merit promotion policies and processes.
Title 5 also states that federal personnel management should be implemented consistent with merit system principles that protect federal employees against “personal favoritism.” According to MSPB, personal favoritism occurs when a supervisor or selecting official grants an advantage to one employee or candidate but not another similarly situated employee or candidate based on friendship or other affinity rather than a legitimate merit-based reason. Favoritism is distinct from discrimination on legally protected bases and is frequently more difficult to clearly identify when it occurs. OPM is responsible for overseeing policies created to support federal human resources departments, as well as for ensuring that these policies are properly implemented and continue to be correctly carried out. OPM delegates many personnel decisions to federal agencies, but is responsible for establishing and maintaining an oversight program ensuring that the personnel management functions it delegates to agencies are in accordance with merit system principles and the standards established by OPM for conducting those functions. OPM has also established minimum qualification requirements for hiring or promoting individual employees under the competitive process. In addition, OPM allows agencies to make minimum qualification requirements more specific by adding selective placement factors. According to OPM, selective placement factors identify any qualifications that are important for the job and are required when an individual starts the job. Candidates who do not meet selective placement factors are ineligible for further consideration. OPM generally allows agencies to establish selective placement factors for any position without prior OPM approval, but requires agencies to establish and document selective placement factors through the job analysis process. OPM guidance also states that selective placement factors have four characteristics: they require extensive training or experience to develop; are essential for successful performance on the job (i.e., if individuals do not have the selective factor, they cannot perform the job); almost always are geared toward a specific technical competency; and cannot be learned on the job in a reasonable amount of time. USMS Has Aligned its Merit Promotion Policy with Federal Guidelines and Developed a Corresponding Process USMS Merit Promotion Policy Is Aligned with Federal Human Capital Guidelines We determined that the USMS merit promotion policy aligns with relevant provisions of title 5 of the United States Code and title 5 of the Code of Federal Regulations. Specifically, the most recent version of the USMS Merit Promotion Plan, which was revised in November 2016, outlines the mechanisms for affording merit staffing and promotional opportunities to competitive status candidates for GS-13, GS-14, and GS-15 1811 operational law enforcement positions. The plan states that it is the policy of the USMS to maintain a sound staffing program that will ensure that USMS fills positions from among the best qualified candidates and that the selection, assignment, and promotion of employees are on the basis of job-related criteria. The Merit Promotion Plan cites parts of title 5 of the Code of Federal Regulations as the governing authority under which the plan was developed and aligns with key provisions of title 5 of the United States Code and title 5 of the Code of Federal Regulations.
Agencies must design and administer merit promotion programs to ensure a systematic means of selection for promotion based on merit. These programs must conform to five requirements outlined in title 5 of the Code of Federal Regulations. Table 1 describes the five requirements and how key provisions in the USMS Merit Promotion Plan align with these requirements. USMS Has Developed a Promotion Process Based on Its Merit Promotion Plan USMS has developed a multi-step process based on the USMS Merit Promotion Plan to assess and select eligible candidates for promotion. To be considered eligible for promotion to GS-13, GS-14, or GS-15 law enforcement positions, candidates must (1) serve one year in an operational position at the next lower grade than the position desired; (2) take the most recent USMS merit promotion examination, which is administered every two years; and (3) submit required documents, including the promotion application package, during an annual open season submission process. Once candidates have met these prerequisites, they may apply to individual position vacancy announcements, which are advertised electronically to all USMS employees. Figure 1 depicts the multiple steps in the USMS merit promotion process. Table 2 provides a detailed description of the multiple steps in the USMS merit promotion process. USMS Is Taking Steps to Improve Monitoring of Its Merit Promotion Process, but Lacks Documented Guidance to Ensure Consistent Compliance with Merit Promotion Policy Although USMS Does Not Monitor Key Aspects of Its Merit Promotion Process, It Is Taking Steps to Improve USMS does not monitor the implementation of the scoring component of its rating process or compliance with its temporary promotion policy, but is taking steps to improve these aspects. We found that raters may directly compete with candidates whose merit promotion packages they score. For example, for an open GS-13 position, a GS-12 employee may be promoted into the position or a GS-13 employee may be laterally reassigned to the position. Employees seeking a lateral reassignment to another district or division are not required to submit a merit promotion application package during the open season, but instead submit documentation to the merit promotion staff to confirm their eligibility for a lateral reassignment. Thus, a GS-13 employee who serves as a rater may directly compete, as a lateral candidate, with a GS-12 employee seeking a promotion to the same position. Some USMS employees in our discussion groups expressed the view that the rating process is biased due to this potential conflict of interest. Specifically, seven employees across multiple districts, including four who had served as raters, expressed the view that raters may have personal incentives to score strong candidates lower because they may compete with these candidates for the same positions. The Office of Management and Budget's (OMB) Circular No. A-123, Management's Responsibility for Internal Control (A-123), explains that an agency should have processes in place to detect and mitigate potential employee conflicts of interest to demonstrate a commitment to integrity and ethical values. We found that USMS does not have a process in place to eliminate potential rater conflicts of interest. USMS stated that it would be difficult to detect situations where raters who might be seeking a lateral reassignment would be scoring a potential competitor, but acknowledged that to the extent this is occurring, it would be a conflict of interest.
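Although USMS described such situations as difficult to detect, the check itself is mechanical once rater assignments and lateral-eligibility records exist in structured form. The following sketch is a hypothetical illustration; the record layout and all names are ours, not USMS's.

```python
# Illustrative conflict-of-interest screen; all records are hypothetical.
# Flags raters who are themselves eligible lateral candidates for the same
# position whose promotion applications they are scoring.
rater_assignments = [
    {"rater": "Employee 17", "vacancy": "GS-13 District 4"},
    {"rater": "Employee 23", "vacancy": "GS-13 District 9"},
]
lateral_eligibles = {
    "GS-13 District 4": {"Employee 17", "Employee 31"},
    "GS-13 District 9": {"Employee 40"},
}

conflicts = [
    a for a in rater_assignments
    if a["rater"] in lateral_eligibles.get(a["vacancy"], set())
]
print(conflicts)  # Employee 17 rates a vacancy they could laterally fill.
```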
USMS also does not monitor the implementation of the rating component of its process to ensure that raters comply with a key merit promotion process requirement. Specifically, USMS guidance states that raters are expected to decline to score a candidate's application if there is a conflict of interest with the candidate, for example, a former employee or supervisor relationship or a close personal relationship. USMS officials explained that using two raters to score each merit promotion application is intended to mitigate personal bias. However, during our discussion groups, 4 employees who had served as raters said they had directly observed raters scoring applications for employees with whom there existed possible conflicts of interest. Additionally, 18 employees in our discussion groups told us they had heard from colleagues who served on rating panels that raters have used personal knowledge of candidates to influence their scoring. Another 16 employees expressed a related concern that raters can see the names of the applicants they are scoring. According to HRD officials, they relied on raters to decline to score applications of candidates about whom they may have personal knowledge and to use only the information in the package to determine candidate scores. Although USMS does not monitor the implementation of key aspects of its rating process to mitigate potential rater conflicts of interest or bias, USMS has begun to implement changes that could address these deficiencies. In February 2017, during the course of our review, USMS announced a planned change to the process the agency uses to assess the experience component of candidate applications. Under the existing process, USMS raters collectively score the experience narrative component, which helps determine the overall merit promotion score. The planned change entails having a third-party contractor, rather than USMS employees, determine candidates' competency scores using a scenario-based competency assessment. As part of the new process, USMS also updated the scoring rubric based on the new competency assessment, which includes the elimination of the experience category (see table 3). USMS started to implement this change during the summer 2017 promotion cycle for GS-13 promotions. USMS plans to evaluate the effectiveness of the new process during the fall of 2017 and determine whether it is ready to be implemented for GS-14 and GS-15 promotions during the next promotion cycle. If USMS effectively implements these planned changes, these actions could address the deficiencies we identified by reducing the potential for rater conflict of interest and bias because independent, third-party raters will assess candidate qualifications, rather than USMS employees evaluating their colleagues. We reviewed USMS compliance with federal guidelines for noncompetitive temporary promotions and found, in a few instances, that USMS violated federal guidelines and its merit promotion policy by extending some noncompetitive temporary promotions beyond the regulatory limit of 120 days. According to USMS officials, they typically use temporary promotions to fill open positions between merit promotion cycles. A temporary promotion may also be used to temporarily promote a GS-14 employee to the Chief Deputy position in the event a U.S. Marshal resigns and the Chief Deputy becomes the acting U.S. Marshal.
According to title 5 of the Code of Federal Regulations and the USMS Merit Promotion Plan, individual employees may receive noncompetitive temporary promotions or details to a higher-graded position, or a position with known promotion potential, if the total time spent in any noncompetitive position is 120 days or less within a 12-month timeframe. USMS may also fill open positions between cycles using another type of temporary promotion for up to one year; however, employees are required to compete for temporary promotions beyond 120 days through the merit promotion process. These requirements help USMS use a systematic process of selection according to merit. We analyzed all 844 noncompetitive temporary promotion selections (of 120 days or less) from October 2015 through February 2017 and found 9 instances in which USMS exceeded the regulatory limit of 120 days for individual employees. These 9 instances exceeded the regulatory limit by approximately 30 days on average, ranging from 5 days to 103 days. USMS officials acknowledged that because they manually enter the noncompetitive temporary promotion end dates into the system that contains the temporary promotions data, they have made errors in reviewing these dates, such as incorrectly adding dates for candidates who have received multiple noncompetitive temporary promotions that exceeded a 12-month timeframe. According to HRD, this system has internal checks and controls to ensure an employee's temporary promotion does not go beyond the not-to-exceed date. For example, the system does not allow an employee who received a noncompetitive temporary promotion to a higher grade level to continue to be paid at the higher level beyond the date the temporary promotion is set to expire unless HRD processes an action to extend the promotion. Otherwise, to ensure the employee continues to be paid, HRD must process an action to revert the employee back to their original grade level. USMS officials explained that they must manually review instances in which employees receive multiple noncompetitive temporary promotions within a year to ensure the total time spent serving in these positions does not exceed 120 days during any 12-month period (a simplified sketch of this rolling-window check appears below). Despite having identified relatively infrequent instances of noncompliance, we note that agencies are required to comply with federal regulations. As a result of our review, USMS took immediate steps to strengthen its internal controls to ensure its compliance with these temporary promotion regulations. Specifically, USMS reported to us that it developed a spreadsheet to help staffing specialists correctly calculate the number of days an employee is eligible for a temporary promotion. Moreover, USMS has developed training on how to use the new tool and on the federal regulations that guide temporary promotions, which it plans to provide to staffing specialists in October 2017. Finally, USMS plans to incorporate a regular review of temporary promotion actions into the HRD standard operating procedure.
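As a minimal sketch of that rolling-window check, the following assumes hypothetical promotion dates and a simple half-open day count; USMS's actual spreadsheet and the underlying personnel rules may differ in detail.

```python
# Illustrative 120-day check for noncompetitive temporary promotions;
# all dates are hypothetical, not USMS records. The rule: total
# noncompetitive temporary promotion time must not exceed 120 days
# in any 12-month period.
from datetime import date, timedelta

WINDOW = timedelta(days=365)
LIMIT = 120

def days_in_window(intervals, start):
    """Total promotion days falling inside [start, start + 365 days)."""
    end = start + WINDOW
    total = 0
    for s, e in intervals:
        overlap = (min(e, end) - max(s, start)).days
        total += max(0, overlap)
    return total

def exceeds_limit(intervals):
    """Check every 12-month window anchored at an interval boundary."""
    candidates = [s for s, _ in intervals] + [e - WINDOW for _, e in intervals]
    return any(days_in_window(intervals, c) > LIMIT for c in candidates)

# One employee's noncompetitive temporary promotions as (start, end) pairs.
promos = [(date(2016, 1, 4), date(2016, 3, 4)),    # 60 days
          (date(2016, 6, 1), date(2016, 9, 9))]    # 100 days
print(exceeds_limit(promos))  # True: 160 days within one 12-month window
```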
USMS Lacks Documented Guidance on Rater Scoring USMS provides verbal guidance to instruct raters on how to score the experience category of merit promotion packages, which may result in inconsistent application of the guidance. USMS Merit Promotion Procedures generally state that raters assign a numerical grade to each experience category—such as problem-solving or leadership—by comparing how the experience described in the application relates to the established benchmarks. The benchmarks, which are provided to raters, contain descriptions of relevant experience that are designed to guide the raters as they assign scores for specific knowledge, skills, and abilities, such as supervising staff and working with databases. At the beginning of the scoring process, each rating panel receives verbal guidance from merit promotion staff, which entails using actual candidate applications as examples and verbally discussing how to use professional judgment to apply the benchmarks. Some employees in our discussion groups expressed the opinion that the guidance provided to raters to score candidate experience narratives is unclear, which results in inconsistent scoring. Specifically, during our discussion groups, 39 employees across multiple districts, including 7 who had served as raters, stated that raters often had different interpretations of HRD's expectations for how to apply the benchmarks. For example, they stated that some raters determined scores based on whether a candidate's narrative contained the specific language in the benchmark. Other raters, by contrast, determined scores based on whether the candidate met the intent of the benchmark, regardless of whether the candidate included the specific language in the benchmark. As a result, employees in our discussion groups explained that highly qualified candidates with relevant management and supervisory experience may receive a low experience score if a rater determines that the candidate did not use the exact language appearing in the benchmarks. Furthermore, 70 of 85 employees (82 percent) expressed the view that inconsistent scoring of similarly qualified candidates creates the perception that the rating process is unfairly subjective. Specifically, they asserted that comparable candidates with similar types of experience have received vastly different scores depending on which raters scored their applications. Two employees in different districts also said that they re-submitted the same experience narrative as the prior year and received a significantly different score each year. Additionally, approximately 20 employees contended that raters may be influenced by their own professional experiences. For example, raters who have operational experiences that are different from candidates' experiences may not sufficiently understand the duties or professional experiences described by candidates. Consequently, they argued, these raters may be limited in their ability to fairly rate some candidates' experiences. Although USMS is implementing a new competency assessment process for GS-13 merit promotions, it is not clear at this time whether the new process will address concerns about inconsistent rater scoring because the agency plans to use new benchmarks that were developed by a third-party contractor in collaboration with USMS subject matter experts to determine candidate scores. According to USMS officials, the new process will entail professionally trained assessors using evaluation guidelines to assess how well USMS promotion candidates respond to scenario-based questions. In collaboration with the contractor, USMS also developed evaluation guidelines that include plans for monitoring quality assurance over the rating process. For example, according to USMS officials, the third-party contractor will conduct random spot checks to assess the consistency with which raters apply the new benchmarks and will provide USMS a report on the results of the quality assurance monitoring.
However, given that USMS implemented these changes near the end of our review, we did not assess the implementation of the new process or the related quality assurance monitoring. Furthermore, until USMS determines a timeframe for implementing the new competency assessment at the GS-14 and GS-15 levels, the current rating process will remain in effect. Standards for Internal Control in the Federal Government call for agency management to determine the consistency with which controls are applied. The standards further state that management should document policies in the appropriate level of detail to allow management to effectively monitor the control activity. While USMS provides raters with benchmarks and verbal guidance on how to apply the benchmarks when scoring applications, USMS has not documented guidance for raters. Six employees who had served as raters said the rating guidance provided was insufficient or could be improved. By developing clear and specific documented guidance on how raters should interpret and apply the benchmark guidelines, USMS could minimize rater subjectivity and scoring inconsistency for both the current rating process and the forthcoming competency-based assessment. USMS Has Taken Limited Steps to Understand or Address Employee Concerns about the Merit Promotion Process USMS Employees Have Expressed Negative Views and Concerns about the USMS Merit Promotion Process According to an OPM report summarizing 2016 Federal Employee Viewpoint Survey (FEVS) data, about one-third of USMS employees who answered the survey indicated that they agree that promotions are based on merit. Specifically, in response to the survey statement, “promotions in my work unit are based on merit,” an estimated 41 percent of USMS respondents strongly disagreed or disagreed with the statement, while 34 percent strongly agreed or agreed, and 25 percent neither agreed nor disagreed. Based on our review of an agency report examining district- and division-level USMS 2016 FEVS scores, scores varied greatly among those employees who responded to the FEVS. For example, across the 10 districts with the lowest reported ratings in 2016, we found that 63 percent to 78 percent of respondents disagreed that promotions are based on merit. By comparison, across the 10 districts with the highest reported satisfaction ratings in 2016, 7 percent to 16 percent of respondents disagreed that promotions are based on merit. Most of the USMS employees at four district locations who met with us and answered our questions viewed the merit promotion process unfavorably, citing concerns primarily related to favoritism in the process. For example, 57 of 82 employees (70 percent) indicated that they had low or no trust that the merit promotion process is fair and based on merit. Employees in lower grade levels expressed a greater degree of mistrust than did those in higher grades (see table 4). Specifically, 45 of 53 GS-12 employees (85 percent) indicated that they had low or no trust in the merit promotion process, while just less than half of GS-13 employees (10 of 22) and relatively few GS-14 employees (2 of 7) said they had low or no trust in the merit promotion process. While most employees (51 of 70, or 73 percent) answered that sometimes qualified candidates get promoted, several explained during our discussion groups that they believe the promotion of less qualified—or unqualified—employees occurs frequently enough to affect morale.
Further, 47 of 84 employees (56 percent) noted that morale has deteriorated as a result of merit promotion processes or selections. Finally, most of the employees (66 of 85, or 78 percent) answered that USMS has not taken any steps to understand or improve employee morale or said they were unsure whether any steps had been taken. In addition, USMS employees we talked with during our discussion groups expressed concerns about the USMS merit promotion process. The prevalent themes that emerged during these groups were concerns that (1) promotions are based on favoritism, (2) the promotion process lacks transparency, and (3) promotion guidance is unclear and promotion candidates do not receive feedback. Concerns that Promotions Are Based on Favoritism Employees in our discussion groups expressed the view that many promotion decisions are based on personal relationships over individual merit. Notably, 51 of 85 employees in our discussion groups cited examples of qualified candidates who were passed over for promotion in favor of candidates they believed were less qualified, due to favoritism. From their perspective, there have been instances where candidates with high promotion package scores and good reputations as supervisors have not been promoted, while lower scoring candidates with poor reputations as supervisors who have personal relationships with decision-makers have been promoted. Further, 36 employees in our discussion groups said they believed that career-enhancing opportunities, such as temporary promotions, which improve employees' promotion potential by providing them with directly related experience in positions for which they may be competing, are often provided unfairly to employees based on personal relationships. Employees in our discussion groups also expressed the view that some employees receive more guidance on their applications from supervisors than do others, which they attributed to favoritism. As part of the merit promotion process, supervisors are required to verify the experience statements submitted by candidates. We found that among the limited number of supervisors with whom we met, there were varying interpretations of their responsibility in meeting this requirement. Specifically, 1 supervisor viewed his role as strictly verifying the experience and providing no further input. However, 7 other supervisors viewed their role as providing guidance and mentorship to employees by offering advice for improving candidate applications. Finally, 5 additional supervisors said they provided additional guidance to employees only when specifically requested. Of the 85 employees in our discussion groups, 28 indicated that they believed supervisors helped certain candidates develop their merit promotion packages, which provides an unfair advantage over candidates who do not receive such guidance. Additionally, nine employees raised concerns that USMS has sometimes expanded certificate of eligibles lists inconsistent with USMS policy to include preselected, favored candidates. According to the USMS Merit Promotion Plan, if there are more than five candidates applying for a position, at least the top five scoring candidates will generally be included on the list and subsequently referred for candidate selection. In some circumstances, more than five eligible candidates are allowed to be placed on the list. For example, if there is a tie for the last position on the list, all candidates with that score will be included. Additionally, candidates with a score within one point of the fifth highest-scoring candidate would also be included on the list. Finally, if there are multiple vacancies for the same position (same series, grade, title, and location), one additional name for each vacancy may be added to the list.
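Under a simplified reading of those rules, the referral cutoff can be computed directly from ranked scores. The sketch below uses hypothetical names and scores and does not capture every nuance of the actual plan, such as exactly how extra vacancies interact with the one-point rule.

```python
# Illustrative certificate-of-eligibles cutoff; hypothetical scores, not
# USMS data. Simplified rules: top five scores, plus anyone within one
# point of the last referred score (which also covers exact ties), plus
# one extra name per additional vacancy.
def certificate(scored, vacancies=1):
    """scored: list of (name, score); returns names referred for selection."""
    ranked = sorted(scored, key=lambda ns: ns[1], reverse=True)
    depth = 5 + (vacancies - 1)          # one extra name per extra vacancy
    if len(ranked) <= depth:
        return [name for name, _ in ranked]
    cutoff = ranked[depth - 1][1] - 1    # within one point of the last slot
    return [name for name, score in ranked if score >= cutoff]

scored = [("A", 95.0), ("B", 94.5), ("C", 94.5), ("D", 93.0),
          ("E", 92.8), ("F", 92.1), ("G", 90.0)]
print(certificate(scored))  # A-F: F is within one point of E's 92.8; G is not
```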
To examine USMS compliance with this policy, we analyzed certificate of eligibles lists and the corresponding candidate scores for fiscal years 2015 and 2016. For fiscal year 2015, we examined all 213 position vacancies and found 2 instances where additional candidates were included on the list inconsistent with USMS's established policy. Specifically, these 2 lists contained the names of candidates with scores that were more than one point below the fifth highest-scoring candidate, and in 1 of these 2 instances, the candidate was promoted. For fiscal year 2016, we examined all 224 position vacancies and did not find any inconsistencies with USMS's established policy. Whistleblowers who raised concerns about improper promotion practices to Congress had alleged that USMS managers used selective placement factors to limit competition for certain positions or to tailor vacancy announcements for preselected, favored candidates. Similarly, five employees in our discussion groups expressed the view that USMS used selective placement factors to limit competition or pre-select certain candidates. In this regard, we reviewed USMS compliance with OPM requirements for the use of selective placement factors. Specifically, OPM requires that agencies document the justification for using selective placement factors through a job analysis process. We reviewed all job vacancy announcements for fiscal year 2015, fiscal year 2016, and part of fiscal year 2017 (October 2016 through April 2017) to determine if a job analysis had been performed when selective placement factors were included in the announcement. In fiscal year 2015, there were 213 vacancy announcement positions, and 12 contained selective placement factors. We found USMS had not completed a job analysis justification for any of these 12 announcements. In fiscal year 2016, there were 224 vacancy announcements, and 15 contained selective placement factors. USMS completed a job analysis justification for all 15. For part of fiscal year 2017, there were 171 vacancy announcements, and 23 contained selective placement factors, each of which had a justification. HRD officials acknowledged that in the past they did not consistently document the agency's use of selective placement factors by conducting job analysis justifications, as required by OPM, but said they have consistently complied with this requirement since April 2016. Concerns that the Promotion Process Lacks Transparency Employees in our discussion groups also expressed the view that poor communication and limited transparency about the merit promotion process and certain management decisions further contribute to employees' negative perceptions of the merit promotion process. For example, among the 85 employees in our discussion groups: Sixty-three employees expressed the view that the merit promotion process lacks transparency because HRD does not effectively communicate with employees about procedural steps or process changes, contributing to a lack of understanding about the process. Forty-eight employees expressed the view that they have a limited understanding of the rating and ranking process or that there is no mechanism to dispute or appeal their score if they do not believe they were fairly rated.
Nineteen employees stated that HRD does not provide information about policy or process changes until the changes have been implemented and that they initially learn about forthcoming process changes through other employees and hearsay, causing confusion and frustration. Twenty-five employees expressed the perspective that USMS management cancels vacancy announcements when preselected or favored candidates do not appear on the certificate of eligibles list. According to USMS officials, the agency cancels an announcement when the announcement posting was made in error (i.e., the position was not actually available) or when they need to reassign an employee to a different location. We found vacancy cancellations were infrequent—9 of 437 announcements—during fiscal years 2015 and 2016; however, we noted that USMS canceled 5 of the 9 announcements after final certificates of eligibles were issued, which may have contributed to employees' concerns. Concerns that the Promotion Process Is Unclear and Promotion Candidates Lack Feedback Another prevalent theme that emerged during our discussion groups was that the merit promotion process is unclear and that employees do not receive feedback when they are not promoted. Notably, among the 85 employees in our discussion groups: Forty-six employees described the merit promotion process as unclear. Fifty-nine employees stated that the merit promotion application package does not reflect their qualifications to perform specific jobs or their readiness to be promoted. Thirty-seven employees told us they are not notified of key steps in the merit promotion process, such as whether they make the certificate of eligibles list. Thirty-eight employees stated that because they are not provided feedback when they are not selected for a promotion, they do not have a clear understanding of how the USMS promotion process assesses the extent to which candidates are ready for promotion. HRD officials explained that while there is no formal mechanism for providing specific feedback, they may provide general feedback about the process to candidates who proactively request it. However, as part of the promotion process, HRD officials do not provide employees with specific feedback about their performance or readiness for promotion. HRD officials also noted that as part of the new competency-based assessment process, candidates will receive detailed instructions and guidance on how they will be assessed for each competency. HRD officials acknowledged that informing candidates about key merit promotion steps, such as making the certificate of eligibles, would help improve transparency and employee morale. They further explained that while they do not directly inform candidates about making the certificate of eligibles, in 2016, during the course of our review, they began posting the cutoff scores for each job, so candidates are now able to determine whether they made the certificate of eligibles by comparing their final score to the cutoff score for each position. Federal guidance notes that perceptions of favoritism, particularly when combined with unclear guidance, a lack of transparency, and limited feedback, negatively affect employee morale. According to MSPB, perceptions of favoritism are damaging to employee morale regardless of their basis in fact, because employees' perceptions are their reality.
Moreover, MSPB noted that honest feedback from selecting officials can help employees improve their readiness for future opportunities and can provide transparency that decreases perceptions of favoritism. The MSPB report further noted that to achieve the goals of fair and effective management of the federal workforce, organizations must establish clear expectations for supervisors, and supervisors must be aware of employees' perceptions and exercise sound judgment when making a variety of decisions such as promotion selections, work assignments, training, performance management, and providing workplace flexibilities. In addition, Standards for Internal Control in the Federal Government state that management should communicate quality information down and across reporting lines to enable personnel to perform key roles in achieving objectives, addressing risks, and supporting the internal control system. Providing specific and consistent information to employees about key steps in the merit promotion process and internal management decisions, and constructive feedback to employees on the results of the promotion process, including employee readiness for promotion, would improve transparency and help mitigate employee perceptions of favoritism that have negatively affected employee morale. USMS Has Taken Limited Steps to Understand and Address Employee Concerns about the Merit Promotion Process USMS has taken limited steps to understand and address employee concerns about its merit promotion process. Specifically, after analyzing the results of the 2016 FEVS responses, USMS headquarters staff acknowledged employees' negative perceptions of the merit promotion process as an internal agency challenge. In an update provided to DOJ on plans for addressing employee engagement challenges identified in the FEVS, USMS reported that the primary employee engagement challenges are the geographical dispersal and management structure of district offices (since USMS districts are led by political appointees, who have different management styles). To address this challenge, USMS disseminated an agency-wide memorandum emphasizing to all employees that each employee and manager has an individual responsibility to take action to improve engagement at the local level. Also, USMS encouraged local managers to evaluate their FEVS results and formulate an action plan that fits their individual district or division. USMS does not track the extent to which districts and divisions complete action plans and does not require district or division offices to submit their action plans to HRD. We found that none of the four districts we visited had developed a written action plan in response to the 2016 FEVS results. At three of these districts, the Chief Deputy U.S. Marshals indicated to us that no steps were being taken to develop an action plan because they did not consider it a required or necessary step. However, the Chief Deputy U.S. Marshal in one district explained that while he did not document an action plan, he took steps to better understand employee engagement challenges identified in the FEVS for his district. Specifically, he facilitated small discussion groups to better understand low employee agreement with two FEVS survey statements, including “promotions in my work unit are based on merit.” During these discussions, he said that he aimed to clarify areas where employees' negative perspectives were based on a lack of understanding about the merit promotion process.
While USMS has taken some positive steps, having a better understanding of the basis for these concerns, and how to address them, will likely require that USMS take additional steps. Most of the employees we interviewed said they were unaware of whether USMS had taken any steps to understand or improve employee morale related to merit promotions, and some feared raising concerns to management. Specifically, 25 of 85 (29 percent) employees in our discussion groups said no steps were taken to understand or improve employee morale, while an additional 41 employees (48 percent) were unsure whether any steps had been taken. Further, 24 of 85 employees in our discussion groups expressed fears of raising concerns to USMS district or headquarters management, citing allegations of district management intimidating or retaliating against employees who raise issues, such as not selecting those employees for career-enhancing opportunities or promotions. To the extent that employees fear they will not get promoted if they raise concerns to management and management does not have sufficient information to understand the nature and causes of employee concerns about the merit promotion process, taking meaningful and effective steps to address the concerns will be difficult. OMB and OPM intend for agency managers to use the findings in the FEVS to develop policies and action plans for improving agency performance, including the enhancement of employee engagement and satisfaction. According to OPM, action plans should be developed at multiple levels: agency-wide, by subcomponent, and several levels down in the agency. Also, many agencies have found it beneficial to conduct focus groups after reviewing survey results to better understand the underlying causes of employee engagement scores and get employee suggestions for how to improve. OPM's action planning guidance also suggests that agencies specify time frames for accomplishing the actions, who will be responsible for implementing the actions, who will be affected by the actions, the resources required, and a plan to communicate these actions to managers and employees. Although HRD disseminated a memorandum requesting that district and division managers develop action plans, it has not developed an agency-wide action plan, nor has it taken steps to ensure that all districts and divisions develop action plans. By delegating responsibility for developing action plans to individual districts and divisions, HRD does not have consistent or adequate information to understand the nature and causes of employee concerns across districts and divisions. Without this information, USMS is unable to address employee concerns about its merit promotion process and remains vulnerable to adverse effects, such as decreased employee satisfaction and engagement, and decreased agency performance. USMS management stated that they take employee concerns and feedback into consideration as appropriate, but are primarily concerned with ensuring the process is implemented in accordance with legal requirements. They further stated that they generally believe the USMS merit promotion process to be fair, and attributed some employee concerns about the merit promotion process to a lack of available positions relative to the number of employees who are ready for promotion. Nevertheless, we believe an agency-wide action plan would help USMS more fully understand and address areas where employees express negative perceptions of the merit promotion process. 
Conclusions Selecting candidates based on their qualifications instead of patronage has been the foundation of the federal hiring system for more than 130 years. Federal guidelines give agencies significant discretion to design and implement their merit promotion processes to best meet their needs. Since 2016, USMS has been implementing changes to its merit promotion process in response to multiple internal and external investigations, which substantiated allegations made by whistleblowers. While the new competency assessment process has the potential to reduce the risk of rater conflicts of interest and bias, USMS could still do more to further improve its process. Developing specific guidance to help raters more consistently score candidate applications would minimize scoring subjectivity. Continuing to take steps to improve this process would better position USMS to improve employee engagement. In light of the significant distrust of merit promotion practices that employees expressed to us, USMS management can also take further action to better understand and appropriately address employee concerns, such as providing employees specific feedback on the results of the promotion process, including their readiness for promotion, and developing an agency-wide action plan to more fully understand and address areas where employees express negative perceptions of the merit promotion process. More actively engaging employees could also bolster ongoing USMS efforts to improve the promotion process and enhance agency performance. Recommendations for Executive Action We recommend that the Director of the USMS take the following actions: Develop specific documented guidance—for both the current and new processes—to enhance raters' ability to consistently interpret and apply experience-based benchmarks for GS-14 and GS-15 positions and competency-based benchmarks for GS-13 positions when evaluating candidate qualifications. (Recommendation 1) Develop and implement a mechanism to provide specific feedback to employees on the results of the promotion process, including their readiness for promotion. (Recommendation 2) Develop and implement an agency-wide action plan to more fully understand and address areas where employees express negative perceptions of the merit promotion process. Consistent with OPM guidance in this area, the plan should specify time frames for accomplishing the actions, who will be responsible for implementing the actions, who will be affected by the actions, the resources required, and a plan to communicate these actions to managers and employees. (Recommendation 3) Agency Comments We provided a draft of this report to DOJ and USMS for review and comment. Liaisons from DOJ and USMS responded in an email that DOJ had no formal comments on the report. In addition, the USMS liaison concurred with the recommendations and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to DOJ, the Director of the USMS, appropriate congressional committees and members, and other interested parties. In addition, this report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions, please contact Diana Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix I. 
Appendix I: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Brett Fallavollita (Assistant Director), Carissa Bryant (Analyst-in-Charge), Jessica Du, and Kelsey Hawley made key contributions to this report, along with David Alexander, Willie Commons III, Dominick Dale, and Eric Hauswirth.
Why GAO Did This Study USMS mission areas include fugitive apprehension, witness protection, and federal prisoner transportation, among others. USMS whistleblowers recently alleged that USMS officials engaged in improper promotion practices—such as routinely preselecting favored candidates. Investigations have substantiated multiple whistleblower allegations, raising questions about the integrity of USMS's merit promotion process. USMS announces about 260 law enforcement promotion opportunities annually. GAO was asked to review USMS's promotion processes and policies, and the effects that USMS promotion practices have on employee morale. This report examines (1) the extent to which the USMS's merit promotion policies are aligned with federal guidelines; (2) the extent to which USMS monitors its merit promotion processes; and (3) the steps, if any, USMS has taken to understand and address employee concerns about its merit promotion policies and processes. GAO analyzed data and documents on USMS promotions from October 2015 through April 2017, and found these data to be sufficiently reliable for the purposes of GAO's study. GAO also analyzed USMS documentation, and interviewed USMS officials and non-generalizable groups of employees (85 in total) in four district locations. What GAO Found The U.S. Marshals Service's (USMS) merit promotion policy aligns with relevant provisions in title 5 of the United States Code and Code of Federal Regulations, which are the government-wide laws and related provisions agencies must follow to make federal appointments. Agencies must design and administer merit promotion programs to ensure a systematic means of selection for promotion based on merit, and these programs must conform to five key requirements outlined in title 5. GAO found that the USMS merit promotion plan, as revised in November 2016, aligned with each of these five requirements. For example, the first requirement states that agencies must establish merit-based procedures for promoting employees that are available in writing to candidates. The USMS merit promotion plan, which is available to employees, outlines such procedures. USMS is taking steps to improve how it monitors the implementation of the scoring component of its process to rate promotion applications, but lacks documented guidance to ensure consistent compliance with its merit promotion policy. GAO found that USMS does not adequately monitor the rating process, which allowed for conflicts of interest in which raters may compete with candidates whose applications they score. USMS also does not monitor the rating process to ensure that raters comply with a key requirement—that raters decline to score applications of candidates with whom there is a conflict of interest, such as a supervisor-employee relationship. USMS is implementing a process change that, if implemented effectively, could address these two deficiencies. The new process entails having a third-party contractor, rather than USMS employees, determine candidates' scores. Finally, GAO found that USMS lacks documented guidance on rater scoring. USMS provides only verbal guidance to instruct raters on how to score the experience category of merit promotion packages, resulting in inconsistent application of the guidelines. Employees GAO met with expressed the view that such discrepancies create the perception that the rating process is unfairly subjective. 
Developing clear and specific documented guidance on how raters should apply the benchmark guidelines could minimize scoring inconsistency and potential rater subjectivity for both the current rating process and the new competency-based assessment. USMS has taken limited steps to understand and address employee concerns about the promotion process. An estimated 41 percent of USMS respondents to the 2016 Office of Personnel Management Federal Employee Viewpoint Survey strongly disagreed or disagreed that USMS promotions are merit-based, while 34 percent strongly agreed or agreed, and 25 percent neither agreed nor disagreed. During discussion groups GAO held at four USMS district locations across the U.S., employees frequently expressed negative views and many indicated low or no trust that the process is fair and merit-based. Although USMS has acknowledged employees' negative perceptions of the promotion process, it has not developed an agency-wide action plan in accordance with federal guidance to better understand the nature and causes of employee concerns across districts and divisions. Providing specific and consistent information to employees about key steps in the merit promotion process and internal management decisions could improve transparency and help mitigate employee perceptions of favoritism that have negatively impacted employee morale. What GAO Recommends GAO recommends that USMS develop specific rater guidance and develop and implement an agency-wide action plan to better understand and address employee concerns, among other steps. USMS concurred with the recommendations.
State Has Worked with UNHCR on Various Measures Designed to Ensure the Integrity of the Resettlement Referral Process In July 2017, we found that State and UNHCR have worked together on several measures designed to ensure integrity in the process through which UNHCR refers refugees to USRAP for potential resettlement in the United States (the resettlement referral process). Since 2000, State and UNHCR have outlined their formal partnership using a Framework for Cooperation. State and UNHCR signed the most recent framework document in 2016, covering the period of March 14, 2016, to December 31, 2017. The organizations developed the framework to guide their partnership, emphasizing measures such as oversight activities and risk management. Among other things, the framework emphasizes improved accountability at UNHCR through effective oversight measures, close cooperation with State, and organization-wide risk management. In addition, the framework notes that State will work to ensure that UNHCR allocates sufficient resources to fully implement measures to provide oversight and accountability. For instance, UNHCR has several offices that are responsible for overseeing antifraud activities; in addition, entities such as the United Nations Office of Internal Oversight Services and the Board of Auditors provide audit services, investigate instances of fraud, and conduct broad reviews of country-level operations. The framework also describes regular coordination and communication between State and UNHCR as an important principle in the relationship between the two organizations. Specifically, at the headquarters level, the U.S. Mission in Geneva, Switzerland, has a humanitarian affairs office that, according to State officials, coordinates with UNHCR on a regular basis. Additionally, UNHCR has developed standard operating procedures (SOPs) and identity management systems to combat the risk of fraud and worked with State to implement these activities in the resettlement process. Despite the complexity and regional variations in its refugee registration, refugee status determination, and resettlement referral processes, UNHCR officials said that standardizing procedures ensures that the organization has established basic antifraud practices worldwide. These officials added that they believe that the SOPs are among the most important tools with which they ensure the integrity of the resettlement referral process. UNHCR officials also collect biometric information on refugees, such as iris scans and fingerprints. State and UNHCR developed a Memorandum of Understanding (MOU) regarding the sharing of some biometric information. According to a Letter of Understanding that accompanies the MOU, the MOU provides a framework whereby data from UNHCR are shared with State, allowing for increased efficiency and accuracy in processing resettlement referrals to the United States. See figure 2 for photographs of technology that UNHCR uses to register and verify refugee identities. State and RSCs Have Policies and Procedures for Processing Refugees, but State Could Improve Efforts to Monitor RSC Performance State and RSCs have policies and procedures for processing refugee applications, but, as we found in July 2017, State has not established outcome-based performance measures to assess whether RSCs are meeting their objectives under USRAP. 
State’s USRAP Overseas Processing Manual includes requirements for the information RSCs should collect when prescreening applicants and initiating national security checks, among other things. RSCs communicate directly with USRAP applicants and prepare their case files. For example, RSCs are to conduct prescreening interviews to record key information, such as applicants’ persecution stories and information about their extended family, and submit certain security checks based on the information collected during the interview to U.S. agencies. In addition, State developed SOPs for processing and prescreening refugee applications at RSCs, which State officials indicated provide baseline standards for RSC operations. Further, all four of the RSCs we visited provided us with their own local SOPs that incorporated the topics covered in State’s SOPs. Directors at the remaining five RSCs also told us that they had developed local SOPs that covered the overarching USRAP requirements. We observed how RSC staff implemented State’s case processing and prescreening policies and procedures during our site visits to four RSCs from June 2016 to September 2016. Specifically, we observed 27 prescreening interviews conducted by RSC caseworkers at the four RSCs we visited and found that these caseworkers generally adhered to State requirements during these interviews. In addition, we observed how RSC staff in all four locations implemented additional required procedures during our site visits, such as initiating required security checks and compiling case file information for USCIS interviewing officers, and found that these RSC staff were generally complying with SOPs. State has control activities in place to monitor how RSCs implement policies and procedures for USRAP, but it does not have outcome-based performance indicators to assess whether RSCs are meeting their objectives under USRAP. Consistent with State’s January 2016 Federal Assistance Policy Directive, and according to State officials, State is required to monitor the RSCs it funds, whether through cooperative agreements or voluntary contributions. On the basis of our interviews with State officials and as reflected in documentation from all nine RSCs, including quarterly reports to State, all RSCs have generally undergone the same monitoring regime regardless of funding mechanism. Further, according to State officials, the department has dedicated Program Officers located in Washington, D.C., and Refugee Coordinators based in U.S. embassies worldwide, who are responsible for providing support to RSCs and monitoring their activities—including conducting annual monitoring visits. Further, State has established objectives for RSCs, which include interviewing applicants to obtain relevant information for the adjudication and ensuring the accuracy of information in State’s database and the case files. State also establishes annual targets for the number of refugees who depart for the United States from each RSC. Although State has established objectives and monitors several quantitative goals for RSCs, it has not established outcome-based performance indicators for key RSC activities, such as prescreening applicants or accurately preparing case files, or monitored RSC performance consistently across such indicators. 
Specifically, neither the quarterly reports nor other monitoring reports we examined contained consistent outcome-based performance indicators with which State could evaluate whether RSCs were consistently and effectively prescreening applicants and preparing case files—key RSC activities that have important implications for timely and effective USCIS interviews and security checks. Developing outcome-based performance indicators, as required by State policy and performance management guidance, and monitoring RSC performance against such indicators on a regular basis, would better position State to determine whether all RSCs are processing refugee applications in accordance with their responsibilities under USRAP. USCIS Has Policies and Procedures for Adjudicating Refugee Applications, but Could Improve Training and Quality Assurance USCIS Has Policies and Procedures to Adjudicate Refugee Applications, but Could Improve Training for Temporary Officers USCIS has developed policies and procedures for adjudicating refugee applications. In July 2017, we found that these policies and procedures apply to all USCIS officers who adjudicate refugee applications—those from USCIS’s Refugee Affairs Division (RAD), International Operations Division (IO), and temporary officers from offices throughout USCIS—and include those for how officers are to review the case file before the interview and conduct the interview, as well as how supervisors are to review applications to ensure they are legally sufficient. We observed 29 USCIS refugee interviews at the four RSCs that we visited from June 2016 to September 2016 and found that the interviewing officers completed all parts of the assessment tool and followed other required policies. We also observed that the USCIS officers documented the questions they asked and the answers the applicants provided. We also observed USCIS supervisors while they reviewed officers’ initial decisions, interview transcripts, and case file documentation, consistent with USCIS policy, at two of the sites we visited. Further, all six of the officers that we met with stated that supervisors conducted the required supervisory case file review during their circuit rides, and the four supervisory officers we met with were aware of the requirements and stated that they conducted the supervisory reviews. USCIS also provides specialized training to all officers who adjudicate applications abroad, but we found that USCIS could provide additional training for officers who work on a temporary basis. According to USCIS policy, all USCIS officers who adjudicate refugee applications must complete specialized training, and the training varies based on the USCIS division of the officer. However, temporary officers receive a condensed version of the training received by full-time refugee officers and do not receive in-field training. Although temporary officers receive training prior to conducting in-person interviews with refugee applicants, we found that they sometimes face challenges adjudicating refugee applications. Specifically, we reviewed the 44 summary trip reports, completed by USCIS supervisors following officers’ overseas trips to interview USRAP applicants from the fourth quarter of 2014 through the third quarter of 2016, that included adjudications by temporary officers. In 15 of the 44 reports, the supervisors noted that temporary officers faced challenges adjudicating refugee applications. 
Standards for Internal Control in the Federal Government states that management should demonstrate a commitment to recruit, develop, and retain competent individuals. The standards also note that competence is the qualification to carry out assigned responsibilities, and requires relevant knowledge, skills, and abilities, which are gained largely from professional experience, training, and certifications. To the extent that USCIS uses temporary officers on future circuit rides, providing them with additional training, such as in-field training, would help better prepare them to interview refugees and adjudicate their applications, increase the quality and efficiency of their work, and potentially reduce the supervisory burden on those who oversee temporary officers. USCIS Has Resources to Help Officers Identify Applicants with National Security Concerns, but Has Not Documented Plans for Deploying Officers with National Security Expertise Overseas In addition to training, USCIS has developed guidance documents and tools to help officers identify USRAP applicants with potential national security concerns. However, we found that USCIS could strengthen its efforts by developing and implementing a plan for deploying officers with national security expertise on selected circuit rides. USCIS provides a number of resources to officers to help them identify and address potential national security-related concerns in USRAP applications. In addition, USCIS’s national security policies and operating procedures require that cases with national security concerns be placed on hold by interviewing officers. These cases are then reviewed by USCIS headquarters staff who have additional specialized training and expertise in vetting national security issues. While USCIS has training and guidance to adjudicate cases with national security-related concerns, USCIS trip reports we analyzed and officers we interviewed indicated that it can be challenging to adjudicate such applications. USCIS officials identified several reasons why it is challenging to provide training and guidance on how to adjudicate cases with potential national security concerns. For example, according to RAD and IO headquarters officials, indicators of national security concerns and the country conditions that give rise to them evolve and change; as a result, USCIS guidance on how to address those concerns also changes over time. To further help interviewing officers adjudicate cases with national security concerns, in 2016, USCIS completed a pilot program that included sending officers with national security expertise overseas to support interviewing officers in some locations. USCIS determined the pilot was successful and has taken steps to formalize it; however, USCIS has not developed and implemented a plan for deploying these additional officers, whose expertise could help improve the efficiency and effectiveness of the adjudication process. In light of the evolving and significant nature of national security concerns, developing and implementing a plan to deploy additional USCIS officers with national security expertise on circuit rides—including time frames for deployment and how USCIS will select circuit rides for deployment—would better ensure that USCIS provides interviewing officers with the resources needed to efficiently and effectively adjudicate cases with national security concerns. 
USCIS Does Not Conduct Regular Quality Assurance Assessments of Refugee Adjudications We also found that USCIS has not regularly assessed the quality of refugee adjudications; such assessments help ensure that case files are completed accurately and that decisions by USCIS officers are well documented and legally sufficient. USCIS conducted a quality assurance review of refugee adjudications in fiscal year 2015, which included a sample of applications adjudicated by RAD and IO during one quarter of the fiscal year. The 2015 quality assurance review found that most cases in the sample were legally sufficient. However, the review indicated that there were differences between RAD and IO adjudications. Specifically, the review rated 69 of 80 RAD case files (86 percent) as good or excellent, and rated 36 of 73 IO case files (49 percent) as good or excellent. Two of 80 RAD case files (less than 3 percent) in the review and 17 of 73 IO case files (23 percent) were rated as not legally sufficient. USCIS developed action items to address identified deficiencies and has taken steps to implement them. Among cases rated not legally sufficient, the most common deficiency identified was that interviewing officers did not fully develop the interview record with respect to possible inadmissibilities. Other deficiencies reported included interview records not being fully developed with respect to well-founded fear of persecution, improper documentation and analysis of terrorism-related inadmissibility concerns, incorrect hold determinations, and incomplete required sections of the assessment leading to the adjudication decision. Although there have been major changes in the refugee caseload in the past 2 years (such as an increase in Syrian refugees), an increased use of temporary staff to conduct refugee adjudications in fiscal year 2016, and a difference in quality between RAD and IO adjudications noted in the 2015 quality assurance review, USCIS did not conduct quality reviews in 2016 and had no plans to conduct them in 2017. Standards for Internal Control in the Federal Government states that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. In addition, standard practices for program management state that program quality should be monitored on a regular basis to provide confidence that the program will comply with the relevant quality policies and standards. USCIS officials stated that supervisors continue to review each refugee case file for legal sufficiency and completeness at the time of the interview. While supervisory review is an important quality control step, it does not position USCIS to identify systematic quality concerns, such as those identified in the fiscal year 2015 quality assessment results. Conducting regular quality assurance reviews would help ensure that case files are completed accurately and that decisions by USCIS officers are well documented and legally sufficient. 
State, USCIS, and Their Partners Have Implemented Antifraud Measures but Could Further Assess Staff and Applicant Fraud Risks To Address Fraud Risks, State and RSCs Have Taken Steps to Follow Many Leading Antifraud Practices but Could Improve Implementation of Controls and Assessment of Risk According to State officials we interviewed for our July 2017 report, staff fraud at RSCs occurs infrequently, but instances of staff fraud have taken place in recent years, such as RSC staff soliciting bribes from applicants in exchange for promises of expediting applicants through RSC processing. State and RSCs reported instituting a number of activities to combat the risk of fraud committed by RSC staff. Many of these activities correspond with leading practices identified in GAO’s Fraud Risk Framework, which identifies leading practices to aid program managers in managing fraud risks that affect their programs. For instance, State and RSCs reported that they have taken steps to commit to an organizational culture and structure to help manage staff fraud risks and established collaborative relationships with both internal and external partners to share information. Officials from all nine RSCs stated that they assign staff fraud risk management responsibilities to designated individuals. In addition, State and RSCs reported that RSCs have designed control activities to address staff fraud risk. State officials identified two key guidance documents containing control activities: RSC SOPs and the Program Integrity Guidelines. The Program Integrity Guidelines are a list of 87 measures designed to prevent and mitigate staff fraud at RSCs. The measures were developed by State and provided to RSCs in response to a staff fraud incident in 2013 that resulted in the termination of two RSC staff. These measures include control activities addressing issues such as background checks, interpreter assignment, antifraud training, office layout, case file reviews, electronic data management, and reporting and responding to instances of suspected fraud. State required RSCs to comply with the original Program Integrity Guidelines by October 2014; however, our review of RSC documents found that RSCs reported complying with most, but not all, of the required measures applicable to their operations. Reported compliance with required, applicable measures at individual RSCs ranged from 86 percent to 100 percent. For 53 of the 72 measures, compliance was reported by all RSCs for which the measure was applicable. Some RSCs have reported that they face challenges in fully implementing certain controls. State officials told us that they work to ensure that each RSC complies with all required controls in the Program Integrity Guidelines. If an RSC reports that it does not yet fully comply with a measure listed in the Program Integrity Guidelines, State expects the RSC to report its progress toward compliance. While this reporting assists State in its implementation efforts, we found that gaps remain. Full compliance with these measures could help RSCs ensure the integrity of their operations and guard against staff fraud. 
In addition, State has taken some steps to assess the risks posed by staff fraud to RSC operations; however, we found that not all RSCs have conducted staff fraud risk assessments that follow leading practices identified in the Fraud Risk Framework, including (1) conducting assessments at regular intervals or when the program experiences changes, (2) tailoring assessments to the program and its operations, and (3) examining the suitability of existing fraud controls. State officials told us that not all RSCs had conducted staff fraud risk assessments because State’s Program Integrity Guidelines recommend but do not require these assessments. Because State does not require RSCs to conduct regular staff fraud risk assessments tailored to their specific operations, the assessments that individual RSCs have conducted have varied. Further, we found that State and most RSCs have not examined the suitability of existing fraud controls. For example, while one RSC has regularly assessed the suitability of its existing staff fraud controls by conducting regular staff fraud risk assessments that examine the likelihood and impact of potential fraudulent activity and related fraud controls, the remaining eight RSCs have not done so. State officials told us that because State does not require RSCs to conduct risk assessments, information needed to assess the suitability of existing controls is not available from all RSCs. Because the number of refugees accepted varies each year by RSC, internal control systems may need to change in response to potentially increased fraud risks. Without requiring RSCs to conduct regular staff fraud risk assessments that are tailored to their specific operating environments and reviewing these assessments to examine the suitability of existing fraud controls, State may lack necessary information about staff fraud risks and therefore not have reasonable assurance that existing controls effectively reduce these risks. Information from such risk assessments could help State and RSCs revise existing controls or develop new controls to mitigate the staff fraud risks faced by the program, if necessary. State and USCIS Have Mechanisms to Help Detect and Prevent Applicant Fraud, but Could Jointly Assess Applicant Fraud Risks Fraud can occur in the refugee process in a number of ways, and State, RSCs, and USCIS have implemented certain mechanisms to help detect and prevent fraud by USRAP applicants. USCIS officers can encounter indicators of fraud while adjudicating refugee applications, and State has suspended USRAP programs in the past because of fraud. To detect and prevent applicant fraud in USRAP, State, RSCs, and USCIS have put mechanisms in place such as DNA testing for certain applicants; training on applicant fraud trends for USCIS officers; and procedures at RSCs to require, where possible, that different interpreters be involved in different stages of the USRAP application process to decrease the likelihood that applicants collude with interpreters. However, State and USCIS have not jointly assessed applicant fraud risks program-wide. The Fraud Risk Framework calls for program managers to plan and conduct regular fraud risk assessments. In addition, Standards for Internal Control in the Federal Government states that management should consider the potential for fraud when identifying, analyzing, and responding to risks, and should analyze and respond to identified fraud risks through a risk analysis process so that they are effectively mitigated. 
Although State and USCIS perform a number of fraud risk management activities and have responded to individual instances of applicant fraud, we found that these efforts do not position the agencies to assess fraud risks program-wide for USRAP or know if their controls are appropriately targeted to the areas of highest risk in the program. State and USCIS officials told us that each agency has discrete areas of responsibility in the refugee admissions process, and each agency’s antifraud activities are largely directed at their portions of the process. Because the management of USRAP involves several agencies, without jointly and regularly assessing applicant fraud risks and determining the fraud risk tolerance of the entirety of USRAP, in accordance with leading practices, State and USCIS do not have comprehensive information on the inherent fraud risks that may affect the integrity of the refugee application process and therefore do not have reasonable assurance that State, USCIS, and other program partners have implemented controls to mitigate those risks. Moreover, regularly assessing applicant fraud risks program-wide could help State and USCIS ensure that fraud prevention and detection efforts across USRAP are targeted to those areas that are of highest risk, in accordance with the program’s fraud risk tolerance. Our Recommendations and Agencies’ Responses In our July 2017 reports, we made several recommendations to State and DHS. Specifically, we recommended that State take the following actions in GAO-17-706: develop outcome-based performance indicators for RSCs, as required by State policy, and monitor RSC performance against such indicators on a regular basis. We also recommended that State take the following actions in GAO-17-737: actively pursue efforts to ensure that RSCs comply with required, applicable measures in the Program Integrity Guidelines; update guidance, such as the Program Integrity Guidelines, to require each RSC to conduct regular staff fraud risk assessments that are tailored to each RSC’s specific operations; and regularly review RSC staff fraud risk assessments and use them to examine the suitability of existing staff fraud controls and revise controls as appropriate. We recommended that USCIS take the following actions in GAO-17-706: provide additional training for any temporary officers who adjudicate refugee applications; develop and implement a plan to deploy officers with national security expertise on circuit rides; and conduct regular quality assurance assessments of refugee application adjudications across RAD and IO. We also recommended that State and USCIS conduct regular joint risk assessments of applicant fraud risk across USRAP. State and USCIS concurred with all of our recommendations and have actions underway to address them. For example, State noted that it has developed new guidance to enhance the monitoring of RSCs, which outlines roles, responsibilities, and tools for program officers and refugee coordinators. In addition, USCIS provided documentation that USCIS officials conducted a quality assurance assessment of refugee adjudications in July 2017. Moreover, in July 2017, USCIS provided documentation indicating that it instituted additional headquarters and overseas training for temporary officers consistent with our recommendation. Therefore, we closed this recommendation as implemented. Chairman Labrador, Ranking Member Lofgren, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. 
GAO Contacts and Staff Acknowledgments For further information regarding this testimony, please contact Rebecca Gambler, Director, Homeland Security and Justice at (202) 512-8777 or gamblerr@gao.gov, or Thomas Melito, Director, International Affairs and Trade at (202) 512-9601 or melitot@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are David Alexander, Ashley Alley, Kathryn Bernet, Anthony Costulas, Martin De Alteriis, Brian Hackney, Paul Hobart, Thomas Lombardi, and Elizabeth Repko. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study According to UNHCR, as of June 2017, more than 21 million people were refugees worldwide. State manages the U.S. Refugee Admissions Program (USRAP) and coordinates with UNHCR, which refers the most applicants to USRAP, and USCIS, which adjudicates refugee applications. Deterring and detecting fraud is essential to ensuring the integrity of USRAP, and an increase in the number of applicants approved for resettlement in the United States from countries where terrorists operate has raised questions about the adequacy of applicant screening. This statement addresses (1) how State works with UNHCR to ensure program integrity in the UNHCR resettlement referral process; (2) the extent to which State and RSCs have policies and procedures on refugee case processing and State has overseen RSC activities; (3) the extent to which USCIS has policies and procedures for adjudicating refugee applications; and (4) the extent to which State, USCIS, and their partners follow leading practices to reduce the risk of staff and applicant fraud in USRAP. This statement is based on GAO's July 2017 reports regarding USRAP. To conduct that work, GAO analyzed State, USCIS, and UNHCR policies; interviewed relevant officials; and conducted fieldwork in 2016 at selected UNHCR offices, as well as at RSCs in Austria, Jordan, Kenya, and El Salvador, where GAO observed a nongeneralizable sample of refugee screening interviews (selected based on application data and other factors). What GAO Found The Department of State (State) and the United Nations High Commissioner for Refugees (UNHCR) have worked together on measures designed to ensure integrity in the refugee resettlement referral process and have established a framework to guide their partnership. Working with State, UNHCR has implemented standard operating procedures and other guidance that, according to UNHCR officials, provide baseline requirements throughout the referral process. UNHCR also uses databases to help verify the identities of, and manage information about, refugees. State and the nine worldwide Resettlement Support Centers (RSC) have policies and procedures for processing refugee applications. Overseen by State, the organizations that operate RSCs hire staff to process and prescreen applicants who have been referred for resettlement consideration. GAO observed 27 prescreening interviews conducted by RSC caseworkers in four countries and found that, for example, RSCs generally recorded key information and submitted any required security checks. However, State has not established outcome-based performance indicators to evaluate whether RSCs were consistently and effectively prescreening applicants and preparing case files—key RSC activities that have important implications for timely and effective adjudication and security checks. Developing outcome-based performance indicators would better position State to determine whether RSCs are meeting their responsibilities. The Department of Homeland Security's U.S. Citizenship and Immigration Services (USCIS) has policies and procedures for adjudicating refugee applications for resettlement in the United States, including how officers are to conduct interviews and adjudicate applications. GAO observed 29 USCIS interviews and found that officers completed all parts of the required assessment. USCIS also provides guidance to help officers identify national security concerns in applications, which can be challenging to identify as country conditions evolve. 
In 2016, USCIS determined that its pilot to send officers with national security expertise overseas to support interviewing officers was successful. USCIS has taken steps to fill these positions, but it has not yet developed a plan for deploying these additional officers, whose expertise could help improve the effectiveness of the adjudication process. State, USCIS, and their partners have implemented antifraud measures to reduce the risk of staff and applicant fraud—both of which have occurred—but could further assess fraud risks. Officials from all nine RSCs stated that they assign staff fraud risk management responsibilities to designated individuals. However, not all RSCs reported complying with all required program integrity measures—reported compliance at individual RSCs ranged from 86 to 100 percent. State has also not required RSCs to conduct regular staff fraud risk assessments tailored to each RSC or examined the suitability of related controls. Without taking additional steps to address these issues, State and RSCs may face challenges in identifying new staff fraud risks or gaps in the program's internal control system and implementing new control activities to mitigate them. Further, State and USCIS have not jointly assessed applicant fraud risk program-wide. Doing so could help them ensure that fraud detection and prevention efforts across USRAP are targeted to those areas that are of highest risk. What GAO Recommends GAO made recommendations to State and USCIS to strengthen the implementation of USRAP. State and USCIS agreed with GAO's recommendations and have begun taking actions to address them.
Background The DATA Act was enacted May 9, 2014, for purposes that include expanding on previous federal transparency legislation by requiring the disclosure of federal agency expenditures and linking agency spending information to federal program activities, so that both policymakers and the public can more effectively track federal spending. The act also calls for improving the quality of data submitted to USAspending.gov by holding federal agencies accountable for the completeness and accuracy of the data submitted. The Federal Funding Accountability and Transparency Act of 2006 (FFATA), as amended by the DATA Act, identifies OMB and Treasury as the two agencies responsible for leading government-wide implementation. For example, the DATA Act requires OMB and Treasury to establish government-wide financial data standards that shall, to the extent reasonable and practicable, provide consistent, reliable, and searchable spending data for any federal funds made available to or expended by federal agencies. These standards specify the data elements to be reported under the DATA Act and define and describe what is to be included in each data element, with the aim of ensuring that information will be consistent and comparable. The DATA Act also requires OMB and Treasury to ensure that the standards are applied to the data made available on USAspending.gov. Sources of Data on USAspending.gov USAspending.gov has many sources of data. For example, agencies submit data from their financial management systems, and other data are extracted from government-wide federal financial award reporting systems populated by federal agencies and external award recipients. A key component of the reporting framework is Treasury’s DATA Act broker (broker)—a system that collects and validates agency-submitted data to create linkages between the financial and award data prior to their publication on the USAspending.gov website. According to Treasury guidance documents, agencies are expected to submit three data files with specific details and data elements to the broker from their financial management systems. File A: Appropriations account. This includes summary information such as the fiscal year cumulative federal appropriations account balances and includes data elements such as the agency identifier, main account code, budget authority appropriated amount, gross outlay amount, and unobligated balance. File B: Object class and program activity. This includes summary data such as the names of specific activities or projects as listed in the program and financing schedules of the annual budget of the U.S. government. File C: Award financial. This includes award transaction data such as the obligation amounts for each federal financial award made or modified during the reporting quarter (e.g., January 1, 2017, through March 31, 2017). The broker also extracts spending information from government-wide award reporting systems that supply award data (e.g., federal grants, loans, and contracts) to USAspending.gov. These systems—including the Federal Procurement Data System-Next Generation (FPDS-NG), System for Award Management (SAM), Financial Assistance Broker Submission (FABS), and the FFATA Subaward Reporting System (FSRS)—compile information that agencies and external federal award recipients submit to report, among other things, procurement and financial assistance award information required under FFATA. 
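Before turning to the four files the broker extracts from those systems, the following sketch illustrates, in simplified form, the kind of cross-file consistency check the broker applies to the agency-submitted files. This is a minimal sketch, not Treasury's actual broker code: the record layouts, field names, sample amounts, and tolerance are hypothetical stand-ins, and real submissions carry many more data elements under Treasury's DATA Act schema.

```python
from collections import defaultdict

# Hypothetical, simplified File A and File B records. Real submissions
# include many more data elements (e.g., budget authority, unobligated
# balance) keyed by Treasury Account Symbol components.
file_a = [
    {"agency_id": "097", "main_account": "0100", "gross_outlay": 1500.0},
    {"agency_id": "097", "main_account": "0200", "gross_outlay": 800.0},
]
file_b = [
    {"agency_id": "097", "main_account": "0100", "program_activity": "0001", "gross_outlay": 900.0},
    {"agency_id": "097", "main_account": "0100", "program_activity": "0002", "gross_outlay": 600.0},
    {"agency_id": "097", "main_account": "0200", "program_activity": "0001", "gross_outlay": 800.0},
]

def validate_a_to_b(file_a, file_b):
    """Check that File B amounts, summed by account, equal File A balances."""
    sums = defaultdict(float)
    for row in file_b:
        sums[(row["agency_id"], row["main_account"])] += row["gross_outlay"]
    warnings = []
    for row in file_a:
        key = (row["agency_id"], row["main_account"])
        if abs(sums[key] - row["gross_outlay"]) > 0.005:  # tolerance for rounding
            warnings.append(f"Account {key}: File B total {sums[key]:.2f} "
                            f"differs from File A amount {row['gross_outlay']:.2f}")
    return warnings

print(validate_a_to_b(file_a, file_b) or "No cross-file warnings")
```

In this simplified check, the File B amounts for each appropriations account must sum to the corresponding File A balance; a mismatch would surface in the broker's warning and error reports described below.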
The four files produced with information extracted by the broker from the four systems are as follows: File D1: Procurement. This includes award and awardee attribute information (extracted from FPDS-NG) on procurement (contract) awards and contains elements such as the total dollars obligated, current total value of award, potential total value of award, period of performance start date, and other information to identify the procurement award. File D2: Financial assistance. This includes award and awardee attribute information (extracted from FABS) on financial assistance awards and contains elements such as the federal award identification number, the total funding amount, the amount of principal to be repaid for the direct loan or loan guarantee, the funding agency name, and other information to identify the financial assistance award. File E: Additional awardee attributes. This includes additional information (extracted from SAM) on the award recipients and contains elements such as the awardee or recipient unique identifier; the awardee or recipient legal entity name; and information on the award recipient’s five most highly compensated officers, managing partners, or other employees in management positions. File F: Subaward attributes. This includes information (extracted from FSRS) on awards made to subrecipients under a prime award and contains elements such as the subaward number, the subcontract award amount, total funding amount, the award description, and other information to facilitate the tracking of subawards. The key components of the broker and how the broker operated when the agencies submitted their data for the second quarter of fiscal year 2017 are shown in figure 1. After agencies submit the three files to the DATA Act broker, it runs a series of validations and produces warnings and error reports for agencies to review. After passing validations for these three files, the agencies are to generate Files D1 and D2, the files containing details on procurement and assistance awards. Before the data are displayed on USAspending.gov, agency senior accountable officials are required to certify the data submissions in accordance with OMB guidance. Certification is intended to assure alignment among Files A, B, C, D1, D2, E, and F, and to provide assurance that the data are valid and reliable. According to Treasury officials, once the certification is submitted, a sequence of computer program instructions, or scripts, is run to transfer and map the data from broker data tables to tables set up in a database used as a source for the information on the website. Certified data are then displayed on USAspending.gov along with certain historical information from other sources, including Monthly Treasury Statements. OIG Methodology and Reporting Guidance for Assessing Agencies’ DATA Act Submissions The DATA Act requires each OIG to issue three reports on its assessment of the quality of the agency’s data submission and compliance with the DATA Act. The first report was due November 8, 2016; however, agencies were not required to submit spending data in compliance with the DATA Act until May 2017. 
Therefore, the Council of the Inspectors General on Integrity and Efficiency (CIGIE) developed an approach to address what it described as a reporting date anomaly; encouraged interim OIG readiness reviews and related reports on agencies’ implementation efforts; and delayed issuance of the mandated reports to November 2017, with subsequent reports following a 2-year cycle and due November 2019 and 2021. CIGIE established the Federal Audit Executive Council (FAEC) to discuss and coordinate issues affecting the federal audit community, with special emphasis on audit policy and operations of common interest to FAEC members. FAEC formed the FAEC DATA Act Working Group to assist the OIG community in understanding and meeting its DATA Act oversight requirements by (1) serving as a working-level liaison with Treasury, (2) consulting with GAO, (3) developing a common approach and methodology for conducting the readiness reviews and mandated reviews, and (4) coordinating key communications with other stakeholders. To assist the OIG community, the FAEC DATA Act Working Group developed a common methodology and published the Inspectors General Guide to Compliance Under the DATA Act (IG Guide) for use in conducting mandated reviews. The IG Guide includes procedures to test data in agencies’ Files A and B by reconciling these data to the information that agencies report in their quarterly SF 133, Report on Budget Execution and Budgetary Resources. The IG Guide also instructs OIGs to select a statistically valid sample of spending data from the agencies’ available award-level transactions in File C and, among other procedures, to confirm whether these data are also included in the agencies’ Files D1 and D2. The OIGs are also to confirm whether the transactions in the sample were linked to the award and awardee attributes in Files E and F. The data in Files E and F are reported by award recipients in two external government-wide systems, and are outside the direct control of the federal agencies, except for the General Services Administration, which manages these external systems. Based on additional guidance from the FAEC DATA Act Working Group, OIGs are not required to assess the quality of the award recipient-entered data that the broker extracted from the two external government-wide systems used to create Files E and F. According to the IG Guide, the sampled spending data and testing results are to be evaluated using the following definitions for the requirements being assessed: Completeness is measured in two ways: (1) whether all transactions that should have been recorded were recorded in the proper reporting period and (2) the percentage of transactions containing all applicable data elements required by the DATA Act. Timeliness is measured as the percentage of transactions reported within 30 days of the end of the quarter. Accuracy is measured as the percentage of transactions that are complete and agree with the systems of record or other authoritative sources. Quality is defined in OMB guidance as a combination of utility, objectivity, and integrity. Utility refers to the usefulness of the information to the intended users. Objectivity refers to whether the disseminated information is being presented in an accurate, clear, complete, and unbiased manner. Integrity refers to the protection of information from unauthorized access or revision. 
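To illustrate how these percentage-based definitions might be operationalized in practice, the sketch below scores a small set of hypothetical sampled transactions against the completeness, timeliness, and accuracy measures. The transaction fields, the three-element list of required elements, and the quarter-end date are illustrative assumptions only; an actual OIG review would test all applicable data elements (the DATA Act standards specify 57) from the agency's certified File C submission.

```python
from datetime import date, timedelta

# Hypothetical sampled award transactions; in an actual review these would
# be drawn from the agency's certified File C (or D1/D2) submission.
REQUIRED_ELEMENTS = ["award_id", "obligation_amount", "action_date"]
QUARTER_END = date(2017, 3, 31)  # second quarter of fiscal year 2017

sample = [
    {"award_id": "A1", "obligation_amount": 100.0, "action_date": date(2017, 2, 1),
     "reported_on": date(2017, 4, 15), "matches_system_of_record": True},
    {"award_id": "A2", "obligation_amount": None, "action_date": date(2017, 3, 10),
     "reported_on": date(2017, 5, 20), "matches_system_of_record": False},
]

def pct(count, total):
    return 100.0 * count / total if total else 0.0

# Completeness (second measure): all applicable data elements are present.
complete = sum(all(t.get(e) is not None for e in REQUIRED_ELEMENTS) for t in sample)
# Timeliness: reported within 30 days of the end of the quarter.
timely = sum(t["reported_on"] <= QUARTER_END + timedelta(days=30) for t in sample)
# Accuracy: complete AND agrees with the system of record or other source.
accurate = sum(all(t.get(e) is not None for e in REQUIRED_ELEMENTS)
               and t["matches_system_of_record"] for t in sample)

n = len(sample)
print(f"Completeness: {pct(complete, n):.0f}%  Timeliness: {pct(timely, n):.0f}%  "
      f"Accuracy: {pct(accurate, n):.0f}%")
```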
The IG Guide also states that OIGs should assess agencies’ implementation and use of the data standards, including evaluating each agency’s process for reviewing the 57 required data elements and associated definitions that OMB and Treasury established and documenting any variances. Prior GAO Reports Related to the DATA Act and Data Quality In November 2017, we issued our first report on data quality as required by the DATA Act, which identified issues with the completeness and accuracy of the data that agencies submitted for the second quarter of fiscal year 2017, use of data elements, and presentation of the data on Beta.USAspending.gov. Among other things, we recommended that Treasury disclose known data quality issues and limitations on the new USAspending.gov website. Treasury agreed with that recommendation and stated that it would develop a plan to better disclose known data quality issues. Since the DATA Act’s enactment in 2014, we have issued a series of interim reports on our ongoing monitoring of the implementation of the DATA Act and made recommendations intended to help ensure effective government-wide implementation. However, many of those recommendations remain open. These reports identified a number of challenges related to OMB’s and Treasury’s efforts to facilitate agency reporting of federal spending, as well as internal control weaknesses and agency financial management system challenges, reported by us and agency auditors, that present risks to agencies’ ability to submit quality data as required under the act. For example, our prior work has identified issues with agency source systems that could affect the quality of spending data made available to the public. In April 2017, we reported a number of weaknesses and issues previously identified by agencies’ auditors and OIGs that affect agencies’ financial reporting and may affect the quality of the information reported under the DATA Act. We also reported on findings and recommendations from prior reports identifying issues with the four key award systems—FPDS-NG, SAM, the Award Submission Portal (ASP), and FSRS—that increase the risk that the data submitted to USAspending.gov may not be complete, accurate, and timely. OIG Reviews of Agencies’ DATA Act Submissions Varied in Scope and Type of Standards Used Based on our review of the 53 OIG reports, the scope of all of the OIG reviews covered their agencies’ submission of spending data for the second quarter of fiscal year 2017 (i.e., January through March 2017). However, the files that the OIGs included in their scope to select and review sample transactions and the type of audit standards used—such as attestation examination engagement or performance audit—varied among the OIGs. According to the IG Guide, the OIGs were to select and review a statistically valid sample of transactions, preferably from the agencies’ File C certified data submissions; if File C was unavailable or did not contain data, they were to select their sample test items from Files D1 and D2. Based on their survey responses, we found that most OIGs tested data from File C, File D1, File D2, or some combination of these agency file submissions. We also found that some OIGs tested a statistical sample of transactions in these files, while others tested all the transactions in the files because of the small population size. Further, we found that some OIGs used different files when testing for completeness, timeliness, or accuracy. 
For example, one OIG used File C when testing for completeness, File D1 when testing for timeliness, and File D2 when testing for accuracy. Overall, as shown in figure 2, the source files that 47 of the 53 OIGs used for testing accuracy were as follows:

Twenty-eight OIGs selected items for testing accuracy from File C.

Twelve OIGs selected items for testing accuracy from Files D1, D2, or both.

Seven OIGs selected items for testing accuracy from a combination of Files C, D1, and D2.

The IG Guide also states that OIGs should conduct either attestation examination engagements or performance audits in accordance with generally accepted government auditing standards (GAGAS). Performance audits are audits that provide findings or conclusions based on an evaluation of sufficient, appropriate evidence against criteria. Attestation examination engagements involve obtaining sufficient, appropriate evidence with which to express an opinion stating whether the subject matter is in conformity with the identified criteria. In contrast to these two types of engagements that provide conclusions or opinions, agreed-upon procedures attestation engagements do not result in opinions or conclusions, but instead involve auditors performing specific procedures on the subject matter and issuing a report of findings.

All 53 OIGs reported that they performed their engagements in accordance with GAGAS; 47 OIGs reported that they conducted a performance audit, 5 reported that they performed an attestation examination engagement, and 1 reported that it performed an agreed-upon procedures attestation engagement. Twenty-one CFO Act agency OIGs and 26 non-CFO Act agency OIGs conducted performance audits, 3 CFO Act agency OIGs and 2 non-CFO Act agency OIGs conducted attestation examination engagements, and 1 non-CFO Act agency OIG conducted an agreed-upon procedures attestation engagement.

OIG Reports Show Variations in Agencies' Use of Data Standards and Quality of Data, and Most OIGs Made Recommendations to Address Identified Deficiencies

According to the OIG reports, about half of the agencies met the OMB and Treasury requirements for implementation and use of data standards. However, almost three-fourths of OIGs determined that their respective agencies' submissions were not complete, timely, accurate, or of quality. Based on their reports and survey responses, certain OIGs also found data errors related to problems with how Treasury's DATA Act broker extracted information from external award reporting systems. The FAEC DATA Act Working Group considered these data errors to be a government-wide issue. Other errors that the OIGs identified may have been caused by agency-specific internal control deficiencies. Most of the OIGs made recommendations to agencies to help address the concerns they identified in their reports.

OIG Reports Show About Half of the Agencies Met Requirements for Implementation and Use of Data Standards

Based on our review of the 53 OIG reports, we found that 27 OIGs determined that their agencies met OMB and Treasury requirements for implementation and use of the data standards, whereas 23 OIGs determined that their agencies did not meet these requirements. In addition, 3 CFO Act agency OIGs did not include an assessment of their agencies' implementation and use of the data standards in their reports.
The OIG reports described reasons why the 23 agencies did not meet the implementation and use of data standards requirements, including data submissions that did not include required data elements or included data elements that did not conform with the established data standards. For example, one OIG reported that 74 percent of transactions it tested did not contain program activity names or codes aligned with the President's Budget, and as a result, 39 percent of total obligations and 57 percent of total expenditures from that agency's data submission could not be aligned with established programs. Another OIG reported that because of inconsistent application of data standards and definitions across award systems, the agency's spending data were not complete, timely, or accurate.

In their survey responses, certain OIGs identified additional concerns about their agencies' implementation and use of data standards and related data elements. Specifically, six OIGs identified differences between their agencies' definitions of the data standards and OMB guidance. For example, two OIGs noted differences between definitions in OMB guidance and their agencies' definitions of "primary place of performance address." One of these OIGs noted that its agency submitted the wrong data, providing the address of the legal entity receiving the award instead of the address of the primary place where performance of the award will be accomplished or take place. In our November 2017 report, we also noted that OMB guidance for this data element was unclear and recommended that OMB clarify and align existing guidance regarding the appropriate definitions agencies should use to collect and report on primary place of performance and establish monitoring mechanisms to foster consistent application and compliance.

In addition, based on their survey responses, 21 OIGs reported error rates over 50 percent for 25 data elements. This includes 10 data elements that were reported by multiple OIGs and 15 data elements reported by only one OIG, as shown in table 1. There were five other data elements with error rates over 50 percent that the FAEC DATA Act Working Group determined to be government-wide broker-related data reporting issues, as discussed later in this report. The OIGs' survey responses did not indicate whether the data elements with errors were the result of issues related to the agencies' implementation or use of required data standards.

OIG Reports and Survey Responses Show Most Agencies Did Not Submit Complete, Timely, Accurate, or Quality Data

Based on the OIG reports, we found that 15 of the 53 OIGs determined that their agencies' data were generally complete, timely, accurate, or of quality, comprising 6 CFO Act agency OIGs and 9 non-CFO Act agency OIGs (see fig. 3). Conversely, 38 of 53 OIGs determined that their agencies' data were not complete, timely, accurate, or of quality, comprising 18 CFO Act agency OIGs and 20 non-CFO Act agency OIGs. OIG reports did not always include separate assessments for completeness, timeliness, and accuracy, but instead gave an overall assessment of the quality of the data.

As part of our OIG survey, we requested the overall error rates, agency-specific error rates, and broker error rates for each requirement—completeness, timeliness, and accuracy—used to evaluate the quality of data tested to help provide more insights on the nature and extent of errors that the OIGs identified.
For the purposes of our survey, based on guidance from the FAEC DATA Act Working Group and in the IG Guide, these error rates were defined as follows:

Overall error rate is the percentage of transactions tested that were not in accordance with policy, and includes errors due to the agency, broker, and external award reporting systems.

Agency error rate is the percentage of transactions tested that were not in accordance with policy, and includes only errors that were within the agency's control.

Broker error rate is the percentage of transactions tested that were not in accordance with policy, and includes only errors due to the broker and external award reporting systems.

With regard to overall error rates and the tests conducted, 40 OIGs reported that they tested a statistical sample of transactions, 9 OIGs reported that they tested all transactions in the populations of data, and 4 OIGs reported that they did not test any transactions or were unable to complete their testing. As shown in figure 4, our survey results show that the 40 OIGs that tested a statistical sample of transactions generally reported higher (projected) overall error rates for the accuracy and completeness of data than for the timeliness of data. We found similar results based on our tests to assess the completeness, timeliness, and accuracy of government-wide spending data for the same time period, as described in our November 2017 report. More than half of the 40 OIGs reported projected overall error rates of 25 percent or greater for accuracy, including 8 OIGs reporting projected accuracy error rates of over 75 percent. In contrast, more than three-fourths of the OIGs projected overall error rates of less than 25 percent for the completeness and timeliness of their agencies' data. See appendix II for more details on the 53 OIGs' individual agency testing results, including the actual overall error rates for those OIGs that tested the full population of transactions included in their agencies' data submissions and the estimated range of projected overall error rates for OIGs that conducted a statistical sample.

The OIG survey responses that included agency-specific error rates showed rates similar to the overall error rates, with accuracy of data having higher error rates than completeness and timeliness. Fourteen OIGs provided agency-specific error rates for accuracy, 13 OIGs provided agency-specific error rates for completeness, and 12 OIGs provided agency-specific error rates for timeliness of the data sampled. In addition, nine OIGs reported error rates for broker-related errors that, similar to the overall and agency-specific error rates, were higher for accuracy of data than for completeness and timeliness. The FAEC DATA Act Working Group determined that the broker-related errors had a government-wide impact, as discussed further below. In October 2017—1 month before the mandated reports were to be issued—the working group provided guidance to the OIGs suggesting that they determine and report these additional broker error rates separately because they were not within the agencies' control. Some OIGs may not have reported separate agency-specific and broker error rates because their work was already substantially completed.
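Because a single transaction can carry both agency-caused and broker-caused errors, the agency and broker rates need not sum to the overall rate. The following Python sketch, a minimal illustration rather than the OIGs' actual methodology, shows how the three rates relate under these definitions; representing each tested transaction as a pair of booleans is an assumption made for the example.

```python
def split_error_rates(results):
    """results: list of (agency_error, broker_error) booleans, one pair per tested transaction.

    A transaction counts toward the overall rate if it has any error; the agency
    and broker rates count only errors attributed to that source.
    """
    n = len(results)
    overall = sum(1 for a, b in results if a or b)
    agency = sum(1 for a, b in results if a)   # errors within the agency's control
    broker = sum(1 for a, b in results if b)   # errors due to the broker / external systems
    return {"overall": 100.0 * overall / n,
            "agency": 100.0 * agency / n,
            "broker": 100.0 * broker / n}

# Example: 100 sampled transactions, 20 with agency-caused errors and
# 15 with broker-caused errors, 5 of which have errors from both sources.
sample = ([(True, False)] * 15 + [(True, True)] * 5 +
          [(False, True)] * 10 + [(False, False)] * 70)
print(split_error_rates(sample))  # overall 30%, agency 20%, broker 15%
```

In this example, the agency and broker rates (20 and 15 percent) together exceed the overall rate (30 percent) because five transactions had errors from both sources.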
Of the nine OIGs that reported they tested all transactions in the populations of their agencies' data, five OIGs reported actual overall error rates and found that overall error rates for accuracy were higher than the error rates for completeness or timeliness. Of the four OIGs that reported agency-specific error rates, only one OIG reported an error rate for accuracy, and it was greater than 75 percent. One OIG reported a broker error rate, and it was higher for accuracy than for completeness or timeliness.

In addition to using different testing methodologies (e.g., statistical sampling or testing the full population of transactions) and source files, as previously discussed, the OIGs also used different assumptions and sampling criteria to design and select sample items for testing. As a result, the overall error rates are not comparable and a government-wide error rate cannot be projected.

DATA Act Broker-Related Issues Caused Certain Government-wide Data Reporting Errors

Based on discussions with OIGs, the FAEC DATA Act Working Group identified certain data errors caused by broker-related issues that it determined to be government-wide data reporting issues. Also, because the broker is maintained by Treasury, these issues were beyond the control of the affected agencies. According to the working group, these issues involve inconsistencies in data the broker extracted from government-wide federal financial award reporting systems, as described in table 2. To help provide consistency in reporting these issues, the working group developed standard report language used by OIGs in their reports to describe the errors caused by the broker. The standard reporting language stated that because agencies do not have responsibility for how the broker extracts data, the working group did not expect agency OIGs to evaluate the reasonableness of Treasury's planned corrective actions.

In April 2018, a Treasury official told us that the issues causing these problems have been resolved. To address these issues, the Treasury official stated that, among other things, Treasury implemented the DATA Act Information Model Schema version 1.1, loaded previously missing historical procurement data to USAspending.gov, updated how information from FPDS-NG is mapped to File D1, and replaced ASP with FABS. However, we plan to follow up on these efforts as part of our ongoing monitoring.

OIGs Identified Agency-Specific Control Deficiencies That May Have Contributed to Data Errors

In their survey responses and OIG reports, 43 OIGs reported agency-specific control deficiencies that may have contributed to or increased the risk of data errors. Of these 43 OIGs, 37 identified deficiencies affecting accuracy, 32 identified deficiencies affecting completeness, and 14 identified deficiencies affecting timeliness. A few OIGs reported that they leveraged their financial statement audit results, which found deficiencies in certain financial reporting controls, in conducting their DATA Act reviews. We categorized the OIGs' reported control deficiencies and found that the categories with the most frequently reported deficiencies related to agencies' lack of effective procedures or controls, such as reviews and reconciliations of data submissions to source systems, and information technology system deficiencies, as shown in figure 5.
In their survey responses, OIGs provided additional information about whether their agencies' controls over agency source systems and controls over the DATA Act submission processes were properly designed, implemented, and operating effectively to achieve their objectives. For both CFO Act and non-CFO Act agencies, OIGs generally reported that agencies' internal controls over source systems and the DATA Act submission process were designed effectively but were not implemented or operating effectively as designed. Some examples of agency-specific control deficiencies reported by the OIGs are as follows.

Lack of effective procedures or controls. Deficiencies in which agency procedures for reviewing and reconciling data and files against different sources were not performed or were performed ineffectively, or in which standard operating procedures for data submissions had not been designed and implemented. For example, some of these deficiencies related to agencies' lack of review or reconciliation of data in Files A and B to data in Files D1 and D2 (a simple reconciliation check of this kind is sketched after this list). Further, two OIGs found that their agencies did not perform any sort of quality review of their data until after they were submitted to the broker. Another OIG found that its agency did not ensure that its components developed objectives for accomplishing its data submissions, assessed the risks to achieving those objectives, or established corresponding controls to address them. As a result, the agency's DATA Act submissions included errors.

Information technology system deficiencies. Deficiencies related to the lack of effective automated system controls necessary to ensure proper system user access or automated quality control procedures and the accuracy and completeness of data, as well as systems that are not compliant with federal financial management system requirements. For example, one OIG noted that its agency experienced issues related to segregation of duties and access controls that affected the agency's ability to ensure completeness and accuracy of data in its financial, procurement, and grant processing systems. Another OIG found that its agency did not complete necessary system updates to ensure that all data were certified prior to submission. Further, an OIG reported that its agency's information system was unable to combine transactions with the same unique identifiers, resulting in over 12,000 transactions being removed because of broker warnings.

Insufficient documentation. Deficiencies related to agencies' production and retention of documentary evidence supporting their DATA Act submissions. For example, three OIGs found that their agencies were unable to provide supporting documentation for various portions of their DATA Act submissions. Another OIG reported that one of its agency's components did not take effective steps to ensure that procurement and grant personnel understood the specific documentation that should be maintained to support data entered in grant and contract files. Further, another OIG found that its agency did not document the process for compiling the agency's DATA Act submission files.

Inappropriate application of data standards and data elements. Deficiencies related to the inappropriate use of data definition standards or the misapplication of data elements. For example, one OIG found that its agency did not identify the prior year funding activity names or codes for all transactions included in its spending data submission.
Another OIG found that its agency did not consistently apply standardized object class codes in compliance with OMB guidance, as well as standardized U.S. Standard General Ledger account codes as outlined in Treasury guidance. Similarly, an OIG reported instances where agency users of certain award systems were not knowledgeable about how required DATA Act elements were reported in their procurement system.

Data entry errors or incomplete data. Deficiencies related to controls over data entry and errors or incomplete data in agency or government-wide external systems. For example, an OIG found that its agency did not include purchase card transactions greater than $3,500, which represented about 1 percent of the agency's data submission. Another OIG reported that its agency's service provider did not enter miscellaneous obligations in the data submission file because it expected the agency to enter such transactions in the federal procurement data system.

Timing errors. Deficiencies related to delays in reporting information to external government-wide systems that result in errors in the data submitted. For example, one OIG reported that its agency did not take effective steps to ensure that contracting officers timely report required DATA Act award attribute information in FPDS-NG. Another OIG reported that a bureau in its agency consistently submitted certain payment files 2 months late, resulting in incomplete Files C and D2 in the agency's data submission.

Inaccurate broker uploads. Deficiencies related to agencies uploading data to the broker. For example, one OIG found a lack of effective internal controls over data reporting from its agency's source systems to the DATA Act broker for ensuring that the data reported are complete, timely, accurate, and of quality. Specifically, certain components were not able to consolidate data from multiple source systems and upload accurate data to the broker for File C. Another OIG reported that the broker could not identify and separate an individual component's award data from agency-wide award data. Specifically, the broker recognized only agency-wide award data and did not include award data from its agency's individual components. As a result, the OIG reported that the component did not comply with the DATA Act requirements because its submission did not include all of the agency's required award data.

Reliance on manual processes. Deficiencies that cause agencies to rely on manual processes and work-arounds. For example, one OIG found that in the absence of system patches to map data elements directly from feeder award systems to financial systems, its agency developed an interim solution that relied heavily on manual processes to collect data from multiple owners and systems and increased the risk for data quality to be compromised. Another OIG reported that its agency's financial management systems are outdated and unable to meet DATA Act requirements without extensive manual efforts, resulting in inefficiencies in preparing data submissions.

Other. Other deficiencies included, among other things, instances where an agency's senior accountable official did not submit a statement of assurance certifying the reliability and validity of the agency account-level and award-level data submitted to the DATA Act broker, an agency did not provide adequate training and cross-training of personnel on the various DATA Act roles, and certain components of one agency were not included in the agency's DATA Act executive governance structure.
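As noted in the first category above, several of these deficiencies involved reconciliations between summary and detail submission files that agencies did not perform. The following Python sketch shows one simple variant of such a check, rolling up obligations by Treasury Account Symbol (TAS) and comparing two files. It is illustrative only; the CSV layout and the column names tas and obligations_incurred are assumptions, not the actual DATA Act file formats.

```python
import csv
from collections import defaultdict

def sum_by_tas(path, amount_field="obligations_incurred"):
    """Sum an amount column by Treasury Account Symbol (TAS) in a submission file."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["tas"]] += float(row[amount_field] or 0)
    return totals

def reconcile(summary_file, detail_file, tolerance=0.01):
    """Return the TAS whose detail-level obligations do not roll up to the summary file."""
    summary = sum_by_tas(summary_file)
    detail = sum_by_tas(detail_file)
    return {tas: {"summary": summary.get(tas, 0.0), "detail": detail.get(tas, 0.0)}
            for tas in set(summary) | set(detail)
            if abs(summary.get(tas, 0.0) - detail.get(tas, 0.0)) > tolerance}

# Example usage: differences = reconcile("file_a.csv", "file_b.csv")
# An empty result indicates the files reconcile within the tolerance; any entries
# would be candidates for review before the submission is certified.
```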
Most OIGs Made Recommendations to Agencies to Improve Data Quality and Controls

To help address control deficiencies and other issues that resulted in data errors, 48 of the 53 OIGs (23 CFO Act agency OIGs and 25 non-CFO Act agency OIGs) included recommendations in their reports. As shown in figure 6, the most common recommendations OIGs made to their agencies related to the need for agencies to develop controls over their data submissions, develop procedures to address errors, and finalize or implement procedures or guidance. Some examples of OIG recommendations made to agencies to improve data quality and controls are as follows.

Develop controls over submission process. Recommendations related to controls or processes to resolve issues in submitting agency financial system data to the broker. For example, one OIG recommended that its agency develop and implement a formal process to appropriately address significant items on broker warning reports, which could indicate systemic issues.

Develop procedures to address errors. Recommendations related to procedures to address data errors in the agency's internal systems. For example, one OIG recommended that its agency correct queries to extract the correct information and ensure that all reportable procurements are included in its DATA Act submissions.

Finalize or implement procedures or guidance. Recommendations related to establishing and documenting an agency's DATA Act-related standard operating procedures or agency guidance, including the roles and responsibilities of agency stakeholders. For example, one OIG recommended that its agency update its guidance on what address to use for primary place of performance to be consistent with OMB and Treasury guidance.

Maintain documentation. Recommendations related to establishing or maintaining documentation of the agency's procedures, controls, and related roles and responsibilities for performing them. For example, one OIG recommended that its agency develop a central repository for grant award documentation and maintain documentation to support its DATA Act submissions.

Provide training. Recommendations related to developing, implementing, and documenting training for an agency's DATA Act stakeholders. For example, one OIG recommended that its agency provide mandatory training to all contracting officers and grant program staff to ensure their understanding of DATA Act requirements.

Work with Treasury, OMB, and other external stakeholders. Recommendations for the agency to work with Treasury, OMB, or other stakeholders external to the agency to resolve government-wide issues. For example, one OIG recommended that its agency work closely with its federal shared service provider to address timing and coding errors that the service provider caused for future DATA Act submissions.

Implement systems controls or modify systems. Recommendations related to developing and implementing automated systems and controls. For example, one OIG recommended that its agency complete the implementation of system interfaces and new procedures that are designed to improve collection of certain data that were not reported timely to FPDS-NG and improve linkages of certain financial transactions and procurement awards using a unique procurement instrument identifier.

Increase resources. Recommendations related to increasing the staff, resources, or both necessary to fully implement DATA Act requirements.
For example, one OIG recommended that its agency allocate the resources to ensure that reconciliations are performed when consolidating source system data into the DATA Act submission files.

Management for 36 agencies stated that they concurred or generally concurred with the recommendations of their OIGs (see fig. 7). Management at many of these agencies stated that they continued to improve their processes and controls for subsequent data submissions. In addition, management for seven agencies stated that they partially concurred with the recommendations that their OIGs made. Management for two agencies did not concur with their OIGs' recommendations. Management for one of these agencies stated that the agency should not be held responsible for data discrepancies that other agencies caused, and management for the other agency stated that they followed authoritative guidance that OMB and Treasury issued related to warnings and error messages.

OMB Staff and Treasury Officials Said They Use OIG Reports to Identify and Resolve Issues and Determine the Need for Additional Guidance

OMB staff told us that they reviewed the OIG reports—focusing on the 24 CFO Act agencies—to better understand issues that the OIGs identified and to determine whether additional guidance is needed to help agencies improve the completeness, timeliness, accuracy, and quality of their DATA Act submissions. OMB staff explained to us how they have addressed or are planning to address OIG-identified issues. OMB staff told us that in April 2017 the CFO Council's DATA Act Audit Collaboration working group was formed, which includes officials from OMB, Treasury, and the Chief Financial Officers (CFO) Council, to foster collaboration and understanding of the risks that were being identified as agencies prepared and submitted their data. The working group also consults with CIGIE, which is not a member of the working group, but its representatives attend meetings to help the group members better understand issues involving the OIG reviews and the IG Guide. According to OMB staff, the working group is the focal point to identify government-wide issues and identify guidance that can be clarified. They also told us that OMB continues to meet with this working group to determine what new guidance is needed to meet the DATA Act requirement to ensure that the standards are applied to the data available on the website. In June 2018, OMB issued new guidance requiring agencies to develop data quality plans intended to achieve the objectives of the DATA Act. According to OMB staff, OMB is committed to ensuring integrity and providing technical assistance to ensure data quality.

Treasury officials told us that they reviewed OIG reports that were publicly available on Oversight.gov and are collaborating with OMB and the CFO Council to identify and resolve government-wide issues, including issues related to the broker, so that agencies can focus on resolving their agency-specific issues. In February 2018, the working group documented certain topics identified for improving data quality and value. OMB staff and Treasury officials also told us that OMB and Treasury have taken steps to address issues we previously reported related to their oversight of agencies' implementation of the DATA Act. For example, we recommended in April 2017 that OMB and Treasury take appropriate actions to establish mechanisms to assess the results of independent audits and reviews of agencies' compliance with the DATA Act requirements.
The DATA Act Audit Collaboration working group is one of the mechanisms OMB and Treasury use to assess and discuss the results of independent audits and to address identified issues. In November 2017, we also recommended, among other things, that Treasury (1) reasonably assure that ongoing monitoring controls to help ensure the completeness and accuracy of agency submissions are designed, implemented, and operating as designed, and (2) disclose known data quality issues and limitations on the new USAspending.gov. Treasury has taken some steps and is continuing to take steps to address these recommendations. For example, under the data quality section of the About page on USAspending.gov, Treasury disclosed the requirement for each agency OIG to report on its agency's compliance with the DATA Act and noted the availability of the reports at Oversight.gov.

Agency Comments

We provided a draft of this report to OMB, Treasury, and CIGIE for comment. We received written comments from CIGIE that are reproduced in appendix III and summarized below. In addition, OMB, Treasury, and CIGIE provided technical comments, which we incorporated as appropriate. In its written comments, CIGIE noted that the report provides useful information on OIG efforts to meet oversight and reporting responsibilities under the DATA Act. CIGIE further stated that it believes that the report will contribute to a greater understanding of the oversight work that the OIG community performs and of agency efforts to report and track government-wide spending more effectively.

We are sending copies of this report to the Director of the Office of Management and Budget, the Secretary of the Treasury, the Chairperson and Vice Chairperson of the Council of the Inspectors General on Integrity and Efficiency, as well as interested congressional committees and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9816 or rasconap@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

The Digital Accountability and Transparency Act of 2014 (DATA Act) includes provisions requiring us to review the Offices of Inspector General (OIG) mandated reports and issue our own reports assessing and comparing the completeness, timeliness, accuracy, and quality of the data that federal agencies submit under the act and the federal agencies' implementation and use of data standards. We issued our first report on data quality in November 2017, as required. This report includes our review of the OIGs' mandated reports, which were also issued primarily in November 2017. Our reporting objectives were to describe

1. the reported scope of work covered and type of audit standards OIGs used in their reviews of agencies' DATA Act spending data;

2. any variations in the reported implementation and use of data standards and quality of agencies' data, and any common issues and recommendations reported by the OIGs; and

3. the actions, if any, that the Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) have reported taking or planning to take to use the results of OIG reviews to help monitor agencies' implementation of the act.
To address our first and second objectives, we obtained and reviewed 53 OIG reports that were issued on or before January 31, 2018, including reports related to 24 Chief Financial Officers Act of 1990 (CFO Act) agencies and 29 non-CFO Act agencies. Of 91 entities for which second quarter fiscal year 2017 spending data were submitted, we did not obtain and review OIG DATA Act reports for 38 entities with obligations totaling at least $1.2 billion (as displayed on USAspending.gov on May 23, 2018) because no reports for those entities were publicly available by our January 31, 2018, cutoff date. Table 3 lists the 53 agencies for which we obtained and reviewed the OIG reports on the quality of data that agencies submitted in accordance with DATA Act requirements.

We also developed and conducted a survey of OIGs to provide further details on the design and results of their efforts to conduct statistical samples to select and test agencies' data submissions and reviews of internal controls. In November 2017, we sent the survey to those OIGs whose agencies originally submitted DATA Act data to Treasury's DATA Act broker. We received and reviewed responses from the 53 OIGs from which we obtained reports, with 9 OIGs including the completed surveys in their published reports and the others providing us their completed survey responses separately. We analyzed the 53 OIG reports and survey responses, following up with OIGs for clarification when necessary.

We reviewed each of the 53 OIG reports we obtained and identified the reported scope of work covered (e.g., the quarter of data reviewed) and the type of audit standards OIGs used to conduct their reviews (e.g., performance audit or attestation examination engagement). We also developed and used a data collection instrument to compile and summarize the conclusions and opinions included in the OIG reports on the completeness, timeliness, accuracy, and quality of agencies' data submissions and their implementation and use of data standards. During this process, GAO analysts worked in teams of three to reach a consensus on how these OIG conclusions and opinions were categorized. For OIG reports that did not specifically state whether the agencies met the DATA Act requirements, we considered the reported results in conjunction with the more detailed information provided in the OIG responses to our survey and drew conclusions about the OIGs' assessments based on our professional judgment. We also reviewed the OIG reports and survey responses and used two data collection instruments to compile, analyze, and categorize common issues or agency-specific control deficiencies the OIGs identified in their reviews and the recommendations they made to address them. During this process, GAO analysts worked in teams of three to reach a consensus on how these issues and deficiencies were categorized.

To address our third objective, we interviewed OMB staff and Treasury officials about how they used or planned to use the results of the OIG DATA Act reviews to assist them in their monitoring of agencies' implementation of the act.

We conducted this performance audit from September 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Offices of Inspector General Digital Accountability and Transparency Act of 2014 Testing Results

In their survey responses, Offices of Inspector General (OIG) for 45 agencies reported actual overall error rates or estimated error rates and estimated ranges of errors associated with the spending data transactions they tested for accuracy, completeness, or timeliness (see table 4). These results include OIGs that tested a statistical sample of transactions, tested the full population, and conducted an assessment of internal controls without additional substantive testing. OIGs that tested a sample responded that they used different sampling criteria, and the sources of files they used to select their statistical samples varied based on the files that were available. Regardless of whether the OIG tested a sample or the full population, some of the OIGs selected items for testing from File C, File D1, File D2, or some combination thereof. As a result, the overall error rates the OIGs reported are not from the same data submission files and are not fully comparable, but are intended to provide additional information on the individual results of the completeness, timeliness, and accuracy of the data each agency OIG tested.

Appendix III: Comments from the Council of the Inspectors General on Integrity and Efficiency

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Michael LaForge (Assistant Director), Diane Morris (Auditor in Charge), Umesh Basnet, Thomas Hackney, and Laura Pacheco made major contributions to this report. Other key contributors include Dave Ballard, Carl Barden, Maria Belaval, Jenny Chanley, Patrick Frey, Ricky Harrison, Jason Kelly, Jason Kirwan, Quang Nguyen, Samuel Portnow, Carl Ramirez, Anne Rhodes-Kline, and Dacia Stewart.

Related GAO Products

DATA Act: OMB, Treasury, and Agencies Need to Improve Completeness and Accuracy of Spending Data and Disclose Limitations. GAO-18-138. Washington, D.C.: November 8, 2017.

DATA Act: As Reporting Deadline Nears, Challenges Remain That Will Affect Data Quality. GAO-17-496. Washington, D.C.: April 28, 2017.

DATA Act: Office of Inspector General Reports Help Identify Agencies' Implementation Challenges. GAO-17-460. Washington, D.C.: April 26, 2017.

DATA Act: Implementation Progresses but Challenges Remain. GAO-17-282T. Washington, D.C.: December 8, 2016.

DATA Act: OMB and Treasury Have Issued Additional Guidance and Have Improved Pilot Design but Implementation Challenges Remain. GAO-17-156. Washington, D.C.: December 8, 2016.

DATA Act: Initial Observations on Technical Implementation. GAO-16-824R. Washington, D.C.: August 3, 2016.

DATA Act: Improvements Needed in Reviewing Agency Implementation Plans and Monitoring Progress. GAO-16-698. Washington, D.C.: July 29, 2016.

DATA Act: Progress Made but Significant Challenges Must Be Addressed to Ensure Full and Effective Implementation. GAO-16-556T. Washington, D.C.: April 19, 2016.

DATA Act: Data Standards Established, but More Complete and Timely Guidance Is Needed to Ensure Effective Implementation. GAO-16-261. Washington, D.C.: January 29, 2016.

DATA Act: Progress Made in Initial Implementation but Challenges Must Be Addressed as Efforts Proceed. GAO-15-752T. Washington, D.C.: July 29, 2015.
Why GAO Did This Study

The DATA Act was enacted to increase accountability and transparency and, among other things, expanded on the required federal spending information that agencies are to submit to Treasury for posting to a publicly available website. The act also includes provisions requiring a series of oversight reports by agencies' OIGs and GAO. The objectives of this report are to describe (1) the reported scope of work covered and type of audit standards OIGs used in their reviews of agencies' DATA Act spending data; (2) any variations in the reported implementation and use of data standards and quality of agencies' data, and any common issues and recommendations reported by the OIGs; and (3) the actions, if any, OMB and Treasury have reported taking or planning to take to use the results of OIG reviews to help monitor agencies' implementation of the act. To address these objectives, GAO reviewed 53 OIG reports issued on or before January 31, 2018, that assessed agencies' first submissions of spending data for the second quarter of fiscal year 2017 and surveyed the OIGs to obtain additional information.

What GAO Found

The Digital Accountability and Transparency Act of 2014 (DATA Act) requires agencies' Offices of Inspector General (OIG) to issue reports on their assessments of the quality of the agencies' spending data submissions and compliance with the DATA Act. The scope of all OIG reviews covered their agencies' second quarter fiscal year 2017 submissions. The files the OIGs used to select and review sample transactions varied based on data availability, and OIGs performed different types of reviews under generally accepted government auditing standards. Some OIGs reported testing a statistical sample of transactions that their agencies submitted, and other OIGs reported testing the full population of submitted transactions. Because of these variations, the overall error rates reported by the OIGs are not fully comparable and a government-wide error rate cannot be projected.

According to the OIG reports, about half of the agencies met Office of Management and Budget (OMB) and Department of the Treasury (Treasury) requirements for the implementation and use of data standards. The OIGs also reported that most agencies' first data submissions were not complete, timely, accurate, or of quality. OIG survey responses show that OIGs generally reported higher (projected) overall error rates for the accuracy of data than for completeness and timeliness. OIGs reported certain errors that involve inconsistencies in how the Treasury broker (the system that collects and validates agency-submitted data) extracted data from certain federal award systems, which resulted in government-wide issues outside the agencies' control, while other errors may have been caused by agency-specific control deficiencies. For example, OIGs reported deficiencies related to agencies' lack of effective procedures or controls and systems issues. Most OIGs made recommendations to agencies to address identified concerns.

OMB staff and Treasury officials told GAO that they reviewed the OIG reports to better understand issues identified by the OIGs. OMB issued new guidance in June 2018 requiring agencies to develop data quality plans intended to achieve the objectives of the DATA Act. Treasury officials told GAO that they are collaborating with OMB and the Chief Financial Officers Council DATA Act Audit Collaboration working group to identify and resolve government-wide issues.
What GAO Recommends

GAO is not making recommendations in this report. The Council of the Inspectors General on Integrity and Efficiency (CIGIE) noted that GAO's report provides useful information on OIG efforts to meet oversight and reporting responsibilities under the DATA Act. OMB, Treasury, and CIGIE also provided technical comments that GAO incorporated as appropriate.
Background

GPRAMA significantly enhances GPRA, the centerpiece of a statutory framework that Congress put in place during the 1990s to help resolve longstanding performance and management problems in the federal government and provide greater accountability for results. Congress passed GPRAMA in 2010 to address a number of persistent federal performance challenges, including focusing attention on crosscutting issues and enhancing the use and usefulness of performance information.

Goals and Objectives

OMB and agencies are to establish various government-wide and agency-specific performance goals, in line with GPRAMA requirements or OMB guidance. These include the following:

Cross-agency priority (CAP) goals: CAP goals are crosscutting and include outcome-oriented goals covering a limited number of policy areas as well as goals for management improvements needed across the government. OMB is to coordinate with agencies to establish CAP goals at least every 4 years. OMB is also required to coordinate with agencies to develop annual federal government performance plans to, among other things, define the level of performance to be achieved toward the CAP goals.

Strategic objectives: A strategic objective is the outcome or impact the agency intends to achieve through its various programs and initiatives. Agencies establish strategic objectives in their strategic plans and may update the objectives during the annual update of performance plans.

Agency priority goals (APG): At the agency level, every 2 years, GPRAMA requires that the heads of certain agencies, in consultation with OMB, identify a subset of agency performance goals as APGs. These goals are to reflect the agencies' highest priorities. They should be informed by the CAP goals as well as consultations with relevant congressional committees and other interested parties.

In a schedule established by GPRAMA, OMB and agencies are to develop and publish new CAP goals, APGs, and strategic plans (with updated strategic objectives) in February 2018.

Performance Reviews

GPRAMA and related OMB guidance require agencies to regularly assess their progress in achieving goals and objectives through performance reviews.

Data-driven reviews: Agency leaders and managers are to use regular meetings, at least quarterly, to review data and drive progress toward key performance goals and other management-improvement priorities. For each APG, GPRAMA requires agency leaders to conduct reviews at least quarterly to assess progress toward the goal, determine the risk of the goal not being met, and develop strategies to improve performance. Similarly, the Director of OMB, with relevant parties, is to review progress toward each CAP goal.

Strategic reviews: OMB guidance directs agency leaders to annually assess progress toward achieving each strategic objective using a broad range of evidence.

Leadership Positions and Council

GPRAMA establishes certain senior leadership positions and a council, as described below.

Chief Operating Officer (COO): The deputy agency head, or equivalent, is designated COO, with overall responsibility for improving agency management and performance.

Performance Improvement Officer (PIO): Agency heads are to designate a senior executive within the agency as the PIO. The PIO reports directly to the COO and assists the agency head and COO with various performance management activities.

Goal leaders: Goal leaders are responsible for developing strategies to achieve goals, managing execution, and regularly reviewing performance.
GPRAMA requires goal leaders for CAP goals and agency performance goals, including APGs. OMB guidance directs agencies to designate goal leaders for strategic objectives.

Performance Improvement Council (PIC): The PIC is charged with assisting OMB to improve the performance of the federal government and achieve the CAP goals. The PIC is chaired by the Deputy Director for Management at OMB and includes agency PIOs from each of the 24 CFO Act agencies as well as other PIOs and individuals designated by the chair. Among its responsibilities, the PIC is to work to resolve government-wide or crosscutting performance issues and facilitate the exchange among agencies of practices that have led to performance improvements within specific programs, agencies, or across agencies.

Transparency and Public Reporting

GPRAMA includes several provisions related to providing the public and Congress with information, as described below.

Performance.gov: GPRAMA calls for a single, government-wide performance website to communicate government-wide and agency performance information. Among other things, the website—implemented by OMB as Performance.gov—is to include (1) quarterly progress updates on CAP goals and APGs; (2) an inventory of all federal programs; and (3) agency strategic plans, annual performance plans, and annual performance reports.

Reporting burden: GPRAMA establishes a process to reexamine the usefulness of certain existing congressional reporting requirements. Specifically, GPRAMA requires an annual review (including congressional consultation), based on OMB guidance, of agencies' reporting requirements to Congress. Additionally, OMB is to include in the budget a list of plans and reports determined to be outdated or duplicative and may submit legislation to eliminate or consolidate such plans or reports.

The Administration's Plans for Federal Performance Management

In early 2017, the administration announced several efforts that are intended to improve government performance. The 2018 Budget Blueprint states that the President's Management Agenda will seek to improve the federal government's effectiveness by using evidence-based approaches, balancing flexibility with accountability to better achieve results, improving mission support functions, and developing and monitoring critical performance measures. In addition, OMB issued several memoranda detailing the administration's plans to improve government performance by reorganizing the government, reducing the federal workforce, and reducing federal agency burden. A number of these efforts, which are to leverage GPRAMA and our past work, have the potential to further progress in addressing key governance challenges.

As part of reorganization efforts, OMB and agencies are developing government-wide and agency reform plans, respectively, that are to leverage various GPRAMA provisions. For example, an April 2017 memorandum states that OMB intends to monitor implementation of the reform plans using CAP goals, APGs, annual strategic reviews, and Performance.gov. The government-wide plan also is to include crosscutting reform proposals, such as merging agencies or programs that have similar missions. To that end, the memorandum states agencies should consider our reports, including our work on fragmentation, overlap, and duplication, as well as inspectors general reports.
Despite Progress in Selected Areas, the Executive Branch Needs to Take Additional Actions to Manage Crosscutting Issues

Agencies Have Made Progress in Some Areas, but Continued Attention Is Needed to Better Manage Crosscutting Issues

Many of the meaningful results that the federal government seeks to achieve, such as those related to ensuring public health, providing homeland security, and promoting economic development, require the coordinated efforts of more than one federal agency, level of government, or sector. For more than 2 decades, we have reported on agencies' missed opportunities for improved collaboration through the effective implementation of GPRA and, more recently, GPRAMA. Our reports also have demonstrated that collaboration across agencies is critical to address issues of fragmentation, overlap, and duplication as well as many of the areas on our High-Risk List.

Fragmentation, Overlap, and Duplication: Since 2011, our annual reports have identified 133 crosscutting areas that require the coordinated effort of more than one federal organization, level of government, or sector. For instance, for the area of federal grant awards, we found in January 2017 that the National Park Service (NPS), Fish and Wildlife Service, Food and Nutrition Service, and Centers for Disease Control and Prevention (CDC) had not established guidance and formal processes to ensure their grant-management staff review applications for potential duplication and overlap among grants in their agencies before awarding. We recommended that these agencies do so, and they agreed. As of August 2017, these agencies had taken several actions to address the recommendation. For example, the Department of the Interior (Interior) provided documentation showing that the Fish and Wildlife Service now requires discretionary grant applicants to provide a statement that addresses whether there is any overlap or duplication of proposed projects or activities to be funded by the grant. Fish and Wildlife also updated its guidance to grant awarding offices instructing them to perform a potential overlap and duplication review of all selected applicants prior to award. Our Action Tracker provides details on the status of actions from our annual reports.

Within the 133 crosscutting areas, since 2011 we have identified 315 targeted actions where opportunities exist to better manage fragmentation, overlap, and duplication, including 29 new actions in our most recent report issued in April 2017. We found that the executive branch and Congress addressed 145 (46 percent) of the 315 actions. For example, in November 2014, we recommended that the U.S. Coast Guard and Consumer Product Safety Commission establish a formal approach to coordination (such as a memorandum of understanding) to facilitate information sharing; better leverage their resources; and address challenges, including those related to fragmentation and overlap that we identified. In response to this recommendation, the two agencies signed a formal policy document to govern their coordination in May 2015. This policy document outlined procedures for determining jurisdictional authority for recreational boat-associated equipment and marine safety items. Specifically, the procedures clarified that upon receiving notice of a possible defect, the agency receiving such notice shall determine whether the item properly falls within its jurisdiction, and if not, initiate discussions to determine the appropriate jurisdiction.
These new procedures should help the agencies share information and leverage each other's resources so they can better ensure that recreational boat-associated equipment and marine safety items are fully regulated.

However, more work is needed on the remaining 170 actions (54 percent) that have not been fully addressed. For example, in July 2016, we reported that four federal agencies—the Departments of Defense, Education, Health and Human Services, and Justice—manage at least 10 efforts to collect data on sexual violence, which differ in target population, terminology, measurements, and methodology. We found that data collection efforts use 23 different terms to describe sexual violence. Data collection efforts also differed in how they categorized particular acts of sexual violence, the context in which data were collected, data sources, units of measurement, and time frames. We recommended that OMB convene an interagency forum to better manage fragmentation of efforts to collect sexual violence data. In commenting on that report, OMB stated it would consider implementing the action in the future but did not believe it was the most effective use of resources at that time, in part because the agencies were not far enough along in their research. In response, we stated that given the number of federal data collection efforts, the range of differences across them, and the potential for causing confusion, it would be beneficial for agencies to discuss these differences and determine whether they are, in fact, necessary. As of July 2017, OMB had not provided an update on the status of this recommendation.

High-Risk List: Since the early 1990s, our high-risk program has focused attention on government operations with greater vulnerabilities to fraud, waste, abuse, and mismanagement or that are in need of transformation to address economy, efficiency, or effectiveness challenges. As of February 2017, there were 34 high-risk areas covering a wide range of issues, including human capital management, modernizing the U.S. financial regulatory system, and ensuring the security of federal information systems and cyber critical infrastructure. Many of these high-risk areas require a coordinated response from more than one branch of government, agency, or sector.

In the time between our 2015 and 2017 High-Risk Updates, many of these high-risk areas demonstrated solid progress. During that period, 15 high-risk areas fully met at least one of the five criteria required for removal from the High-Risk List. In many cases, progress was possible through the joint efforts of Congress and leadership and staff in agencies. For example, Congress passed over a dozen laws following our 2015 High-Risk Update to help address high-risk issues. In addition, in 2017, we removed one high-risk area on managing terrorism-related information, because significant progress had been made to strengthen how intelligence on terrorism, homeland security, and law enforcement is shared among federal, state, local, tribal, international, and private sector partners. Despite this progress, continued oversight and attention are warranted given the issue's direct relevance to homeland security as well as the constant evolution of terrorist threats and changing technology.

Our February 2017 High-Risk Update also highlighted a number of long-standing high-risk areas that require additional attention.
We also added three new crosscutting areas to incorporate the management of federal programs that serve tribes and their members, the government's environmental liabilities, and the 2020 decennial census. Based on our body of work on federal programs that serve tribes and their members, we concluded that federal agencies had (1) ineffectively administered Indian education and health care programs and (2) inefficiently fulfilled their responsibilities for managing the development of Indian energy resources. For example, we identified numerous challenges facing Interior's Bureau of Indian Education (BIE) and Bureau of Indian Affairs, and the Department of Health and Human Services' (HHS) Indian Health Service (IHS), in administering education and health care services. We concluded that these challenges put the health and safety of American Indians served by these programs at risk. In May 2017, we issued two additional reports on accountability for school construction and safety at schools funded by BIE. Although these agencies have taken some actions to address recommendations we made related to Indian programs, about 50 recommendations have yet to be fully resolved. We are monitoring federal efforts to address the unresolved recommendations. We also are reviewing IHS's workforce, and tribal nations' management and use of their energy resources.

The Executive Branch Could Better Leverage GPRAMA Implementation to Work across Organizational Boundaries

Many of the crosscutting areas highlighted by our annual reports on fragmentation, overlap, and duplication and designated as high-risk would benefit from enhanced collaboration among the federal agencies involved in them. GPRAMA establishes a framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. Our survey results and past work demonstrate that agencies continue to face difficulties when working together on crosscutting issues, but also that implementing certain GPRAMA requirements can have a positive effect on collaboration.

An item related to coordination in our survey of federal managers is statistically significantly lower in 2017, relative to our previous survey in 2013 and our initial survey in 1997. In 2017, an estimated 43 percent of managers agreed that they use information obtained from performance measurement to a great or very great extent when coordinating program efforts with internal or external organizations (compared to an estimated 50 percent in 2013 and an estimated 57 percent in 1997). Moreover, our past work has found that agencies face a variety of challenges when working across organizational boundaries to deliver programs and improve performance. For example, our work has found that interagency groups have, at times, encountered difficulty clarifying roles and responsibilities or developing shared outcomes and performance measures.

In contrast, our past work demonstrates that implementing GPRAMA provisions can improve collaboration. For example, in May 2016, we found that OMB and the PIC updated the governance structure for CAP goals to include both agency-level and Executive Office of the President goal leaders and held regular, senior-level reviews on CAP goal progress. Moreover, CAP goal teams told us that the CAP goal designation increased leadership attention and improved interagency collaboration on their crosscutting issues.
Furthermore, our prior work has found that priority goals and related data-driven reviews have also been used to help manage crosscutting issues and enhance collaboration.

Priority Goals and Related Reviews Can Help Address Crosscutting Issues

Various GPRAMA requirements are aimed at improving agencies’ coordination of efforts to address crosscutting issues. As with our 2013 survey, our 2017 survey continues to show that CAP goals, APGs, and related data-driven reviews—also called quarterly performance reviews (QPRs)—are associated with reported higher levels of collaboration with internal and external stakeholders. For example, our 2017 survey data indicate that about half of federal managers (an estimated 54 percent) reported they were somewhat or very familiar with CAP goals. Among these individuals, those who viewed their programs as contributing to CAP goals to a great or very great extent (36 percent) were more likely to report collaborating outside their program to a great or very great extent to help achieve CAP goals (62 percent), as shown in figure 2. Our analysis shows a similar pattern exists for APGs and QPRs. Our past work also has highlighted ways in which OMB and agencies could better implement GPRAMA’s crosscutting provisions—many of which have been addressed. A continued focus on fully and effectively implementing these provisions will be important as OMB and agencies establish new CAP goals and APGs, and assess progress toward them through related QPRs.

Cross-agency priority (CAP) goals: In May 2012 and June 2013, we found that OMB had not always identified relevant agencies and program activities as contributors to the initial set of CAP goals. OMB took actions in response to our recommendations to include relevant contributors. Our most recent review, in May 2016, found that all relevant contributors had been identified for a subsequent set of CAP goals. In that report, we also found that OMB and the PIC had improved implementation of the CAP goals, in part, by helping agencies build their capacity to contribute to implementing the goals. Appendix II summarizes our past recommendations related to GPRAMA and the actions agencies have taken to address them.

Agency priority goals (APGs): In April 2013, we found that agencies did not fully explain the relationship between their APGs and crosscutting efforts.

Identify contributors: Similar to OMB’s responsibilities with the CAP goals, agencies are to identify the various organizations and programs that contribute to each of their performance goals, including APGs. We found that agencies identified internal contributors for their APGs, but did not list external contributors in some cases. We recommended that the Director of OMB ensure that agencies adhere to OMB’s guidance for website updates by providing complete information about the organizations, program activities, regulations, tax expenditures, policies, and other activities—both within and external to the agency—that contribute to each APG. In response, in April 2015, OMB asked agencies to identify organizations, program activities, regulations, policies, tax expenditures, and other activities contributing to their 2014-2015 APGs. Based on an analysis of the final quarterly updates for those APGs, published in December 2015, we found that agencies made progress in identifying external organizations and programs for their APGs.

Describe how agency goals contribute to CAP goals: Agencies generally did not identify how their APGs contributed to CAP goals.
We recommended that OMB direct agencies to describe in their performance plans how the agency’s performance goals—including APGs—contribute to any of the CAP goals as required by GPRAMA. In response, in July 2013, OMB updated its guidance directing agencies to include a list of the CAP goals to which the agency contributes and explain the agency’s contribution to them in their strategic plans, performance plans, and performance reports.

Data-driven reviews: For their data-driven reviews of agency priority goals, agencies are to include, as appropriate, relevant personnel within and outside the agency who contribute to the accomplishment of each goal. However, in February 2013, we found that most Performance Improvement Officers (PIO) we surveyed (16 of 24) indicated that there was little to no involvement in these reviews from external officials who contribute to achieving agency goals. We recommended that OMB and the PIC help agencies extend their QPRs to include, as relevant, representatives from outside organizations that contribute to achieving their APGs. OMB staff told us that they generally concurred with the recommendation, but believed it would not always be appropriate to regularly include external representatives in agencies’ data-driven reviews, which they considered to be internal management meetings. In a subsequent review, we found in July 2015 that PIOs at 21 of the 22 agencies we surveyed said that their data-driven reviews had a positive effect on collaboration among officials from different offices or programs within the agency. Despite the positive effects, most agency PIOs (17) indicated that there continued to be little to no involvement in the reviews from external officials who contribute to achieving agency goals. In May 2016, OMB and PIC staff reported that, in response to our earlier recommendation, they were working with agencies to identify examples where agencies included representatives from outside organizations in data-driven reviews, and to identify promising practices based on those experiences. PIC staff told us they would disseminate any promising practices identified through the PIC Internal Reviews Working Group and other venues. In August 2017, OMB staff told us they plan to hold a summit with agencies later in the year to discuss implementing various performance management requirements, which could include agencies highlighting experiences and promising practices related to involving external officials in their data-driven reviews. We continue to believe data-driven reviews should include any relevant contributors from outside organizations and will continue to monitor progress.

Despite the important role priority goals and related reviews can play in addressing crosscutting issues and enhancing collaboration, OMB recently removed the priority status of the current sets of priority goals. According to OMB staff, removing the priority designation from CAP goals and APGs returned them to regular performance goals, which are not subject to quarterly data-driven reviews or updates on the results of those reviews on Performance.gov. In a June 2017 memorandum, OMB stated that CAP goals and APGs are intended to focus efforts toward achieving the priorities of current political leadership, and therefore reporting on the priority goals of the previous administration on Performance.gov was discontinued for the remainder of the period covered by the goals (through September 30, 2017, the end of fiscal year 2017).
The memorandum further noted that agencies and teams working on those goals should continue working on the current goals where they align with the priorities of the current administration. Moreover, the memorandum states that agencies have flexibility in structuring their data-driven reviews, but they should continue such reviews focused on agency priorities. When asked about these actions, OMB staff told us that they believed they were working in line with the intentions of GPRAMA, which realigned the timing of goal setting with presidential terms, to better take into account changes in priorities. This is the first presidential transition since GPRAMA was enacted, and OMB staff told us they thought the act was unclear on how to handle priority goals during the changes in administrations and priorities. They stated that it was not practical to continue reporting on the priority goals of the prior administration as agencies worked to develop new strategic plans and priority goals for publication in February 2018. Hence, they told us OMB ended the current round of CAP goals and directed agencies to remove the priority designation from the APGs, returning them to regular performance goals. OMB staff further told us that although the guidance was published in a June 2017 memorandum, these decisions had been made and previously communicated to agencies during the transition in administrations. Therefore, reporting on the fiscal year 2014-2017 CAP goals, fiscal year 2016-2017 APGs, and related reviews stopped much earlier in the year, well before goal cycles were planned to be completed on September 30, 2017. OMB staff further stated that although the goals no longer had priority designations, work towards them largely continued in 2017. For example, one of the prior administration’s CAP goals was to modernize the federal permitting and review process for major infrastructure projects. OMB staff told us that they and agencies have continued many of the activities intended to achieve that goal, but they are no longer subject to quarterly data-driven reviews or updates on the results of these reviews on Performance.gov. Moreover, they expect most of this work will continue towards a new and refocused CAP goal on infrastructure permitting modernization. OMB staff reaffirmed to us their intentions to resume implementation of CAP goals, APGs, and related data-driven reviews when the new planning and reporting cycle begins in February 2018. This is in line with stated plans to leverage various GPRAMA provisions to track progress of proposed government-wide and agency-specific reforms, as outlined in OMB’s April 2017 memorandum on the reform plans. In addition, OMB’s July 2017 update to its guidance for implementing GPRAMA similarly focuses on continued implementation of the act.

Strategic Reviews and Program Inventory Also Could Help with Crosscutting Issues

Additional aspects of GPRAMA implementation could similarly help improve the management of crosscutting issues.

Strategic reviews: OMB’s 2012 guidance implementing GPRAMA established a process in which agencies, beginning in 2014, were to conduct leadership-driven, annual reviews of their progress toward achieving each strategic objective established in their strategic plans. As we found in July 2015, effectively implementing strategic reviews could help identify opportunities to reduce, eliminate, or better manage instances of fragmentation, overlap, and duplication.
Under OMB’s guidance, agencies are to identify the various organizations, program activities, regulations, tax expenditures, policies, and other activities that contribute to each objective, both within and outside the agency. Where progress in achieving an objective is lagging, the reviews are intended to identify strategies for improvement, such as strengthening collaboration to better address crosscutting challenges, or using evidence to identify and implement more effective program designs. If successfully implemented in a way that is open, inclusive, and transparent—to Congress, delivery partners, and a full range of stakeholders—this approach could help decision makers assess the relative contributions of various programs to a given objective. Successful strategic reviews could also help decision makers identify and assess the interplay of public policy tools that are being used to ensure that those tools are effective and mutually reinforcing, and that results are being efficiently achieved. In July 2017, OMB released guidance that updated the status of the 2017 strategic reviews. Because agencies are currently developing new strategic goals and objectives, OMB stated that agencies may forgo the reporting and categorization requirements for any current strategic objectives that an agency determines will be substantively different or no longer aligned with the current administration’s policy, legislative, regulatory, or budgetary priorities. In addition, OMB stated that while there will be no formal meetings between OMB and the agencies to discuss findings and related progress from the 2017 strategic reviews, it expects that agencies will continue to conduct strategic reviews or assess progress made toward strategic goals and objectives aligned with administration policy. Furthermore, OMB stated that during this transition year, updates of progress on agency strategic objectives will only be published in the agency’s annual performance report and will not be reported on Performance.gov. Full reporting through Performance.gov is to resume after new agency strategic plans are published in February 2018. Agencies are to include a progress update for strategic objectives as part of their fiscal year 2017 annual performance reports. Agencies also must address next steps for performance improvement as part of their fiscal year 2019 annual performance plans.

Program inventories: GPRAMA requires OMB to publish a list of all federal programs, along with related budget and performance information, on a central government-wide website. Such a list could help decision makers and the public fully understand what the federal government does, how it does it, and how well it is doing. An inventory of federal programs could also be a critical tool to help decision makers better identify and manage fragmentation, overlap, and duplication across the federal government. Agencies developed initial program inventories in May 2013, but since then have not updated or more fully implemented these inventories. In October 2014, we found that several issues limited the completeness, comparability, and usefulness of the May 2013 program inventories. OMB and agencies did not take a systematic approach to developing comprehensive inventories. For example, OMB’s guidance in Circular No. A-11 presented five possible approaches agencies could take to define their programs and noted that agencies could use one or more of those approaches in doing so.
We found that because the agencies used inconsistent approaches to define their programs, the comparability of programs was limited within agencies as well as government-wide. In addition, we found that the inventories had limited usefulness for decision making, as they did not consistently provide the program and related budget and performance information required by GPRAMA. Moreover, we found that agencies did not solicit feedback on their inventories from external stakeholders—which can include Congress, state and local governments, third-party service providers, and the public. Doing so would have provided OMB and agencies an opportunity to ensure they were presenting useful information for stakeholder decision making. We concluded that the ability to tag and sort information about programs through a more dynamic, web-based presentation could make the inventory more useful. In October 2014, we made several recommendations to OMB to update relevant guidance to help develop a more coherent picture of all federal programs and to better ensure relevant information is useful for decision makers. For example, we recommended that OMB revise its guidance to direct agencies to consult with relevant congressional committees and stakeholders on their approach to defining and identifying programs when developing or updating their inventories. OMB staff generally agreed with these recommendations, but have not yet taken any actions to implement them. OMB’s guidance for the program inventory has largely remained unchanged since 2014, when OMB postponed further development of the program inventory and eliminated portions of the guidance. For example, the guidance no longer describes, or provides directions for agencies to meet, GPRAMA’s requirements for presenting related budget or performance information for each program. OMB decided to postpone implementing a planned May 2014 update to the program inventory in order to coordinate with the public reporting of federal spending required by the Digital Accountability and Transparency Act of 2014 (DATA Act). OMB subsequently stated that it would not begin implementing the program inventory until after the DATA Act was implemented in May 2017, despite requirements for regular updates to the program inventory to reflect current budget and performance information. The DATA Act is now being implemented, but OMB has postponed resuming the development of the program inventory. In July 2017, OMB staff told us that they are now considering how to align GPRAMA’s program inventory provisions with future implementation of the Program Management Improvement Accountability Act (PMIAA). This was reflected in OMB’s July 2017 update to its guidance, which states that OMB is working with agencies to determine the right strategy to merge the implementation of the DATA Act and PMIAA with GPRAMA’s program inventory requirements to the extent possible to avoid duplicating efforts. For example, PMIAA requires OMB to coordinate with agency Program Management Improvement Officers to conduct portfolio reviews of agency programs to assess the quality and effectiveness of program management. GPRAMA requires OMB to issue guidance for implementing the program inventory requirements, among other things. Moreover, federal internal control standards state that organizations should clearly define what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement.
As described above, OMB’s current guidance for the program inventory lacks some of those details—such as describing and providing direction to meet GPRAMA’s requirements for budget and performance information—in part because OMB is working with agencies to determine a strategy for implementation. Ensuring all GPRAMA requirements are covered and taking action on our past recommendations would help OMB improve its guidance to more fully implement the program inventory and improve its usefulness. To that end, in a report issued earlier this month, we identified a series of iterative steps that OMB could use in directing agencies to develop a useful inventory, as described in figure 3. A useful inventory would consist of all programs identified, information about each program, and the organizational structure of the programs. Our work showed that the principles and practices of information architecture—a discipline focused on organizing and structuring information—offer an approach for developing such an inventory to support a variety of uses, including increased transparency for federal programs. Such a systematic approach to planning, organizing, and developing the inventory that centers on maximizing the use and usefulness of information could help OMB ensure the inventory is implemented in line with GPRAMA requirements and meets the needs of decision makers and the public, among others. OMB’s guidance also lacks specific time frames, with associated milestones, for resuming implementation of the program inventory requirements. As part of PMIAA’s requirements, OMB is to issue standards, policies, and guidelines for program and project management for agencies by December 2017. OMB staff told us that, within a year after that, they expect to issue further guidance on moving forward with resuming the program inventory. However, that general time frame was not reflected in the July 2017 update to OMB’s guidance. Providing specific time frames and associated milestones would bring the program inventory guidance in line with other portions of OMB’s guidance for implementing GPRAMA requirements, which contain a timeline of various performance planning and reporting requirements, including specific dates for meeting those requirements and related descriptions of required actions. For example, OMB’s July 2017 guidance identifies over 30 actions agencies should take between June 2017 and December 2018 to implement various GPRAMA provisions. More specific time frames and milestones related to the program inventory requirements would help agencies prepare for resumed implementation by allowing them to know what actions they would be expected to take and by when. Moreover, publicly disclosing planned implementation time frames and associated milestones also would help ensure that external stakeholders are prepared to engage with agencies as they develop and update their program inventories.

The Executive Branch Does Not Systematically Assess the Results Achieved by Tax Expenditures, Which Represent Over $1 Trillion in Annual Forgone Revenue

Effectively implementing various GPRAMA tools could help inform assessments of the performance of tax expenditures, which are reductions in tax liabilities that result from preferential provisions (figure 4). In fiscal year 2016, tax expenditures represented an estimated $1.4 trillion in forgone revenue, an amount greater than total discretionary spending that year.
Despite the magnitude of these investments, our work has also shown that little has been done to determine how well specific tax expenditures work to achieve their stated purposes and how their benefits and costs compare to those of spending programs with similar goals. GPRAMA requires OMB to identify tax expenditures that contribute to the CAP goals. In addition, OMB guidance directs agencies to identify tax expenditures that contribute to their strategic objectives and APGs. However, our past work reviewing GPRAMA implementation found that OMB and agencies rarely identified tax expenditures as contributors to these goals. Fully implementing our recommendation to identify how tax expenditures contribute to various goals could help the federal government establish a process for evaluating the performance of tax expenditures. To that end, in May 2017, we provided the Director of OMB with three priority recommendations that require attention:

Develop framework for reviewing performance: In June 1994, and again in September 2005, we recommended that OMB develop a framework for reviewing tax expenditure performance. We explained that the framework should (1) outline leadership responsibilities and coordination among agencies with related responsibilities, (2) set a review schedule, (3) identify review methods and ways to address the lack of credible tax expenditure performance information, and (4) identify resources needed for tax expenditure reviews. Since their initial efforts in 1997 and 1999 to outline a framework for evaluating tax expenditures and preliminary performance measures, OMB and the Department of the Treasury (Treasury) have ceased to make progress and retreated from setting a schedule for evaluating tax expenditures.

Inventory tax expenditures: In October 2014, we found that OMB had not included tax expenditures in the federal program inventory, and therefore was missing an opportunity to increase the transparency of tax expenditures and the outcomes to which they contribute. We recommended that OMB designate tax expenditures as a program type in relevant guidance, and develop, in coordination with the Secretary of the Treasury, a tax expenditure inventory that identifies each tax expenditure and provides a description of how the tax expenditure is defined, its purpose, and related budget and performance information. OMB staff said they neither agreed nor disagreed with these recommended actions. As noted earlier, OMB has not resumed updates to the program inventory. Therefore, OMB had not taken any actions in response to this recommendation, according to OMB staff as of July 2017.

Identify contributions to agency goals: In July 2016, we found that agencies had made limited progress identifying tax expenditures’ contributions to agency goals, as directed by OMB guidance. As of January 2016, 7 of the 24 CFO Act agencies identified tax expenditures as contributing to their missions or goals. The 11 tax expenditures they identified—out of the 169 tax expenditures included in the President’s Budget for Fiscal Year 2017—represented approximately $31.9 billion of the $1.2 trillion in estimated forgone revenues for fiscal year 2015. (See figure 5.) To help address this issue, we recommended that OMB, in collaboration with the Department of the Treasury, work with agencies to identify which tax expenditures contribute to their agency goals, as appropriate.
In particular, we recommended that they identify which specific tax expenditures contribute to specific strategic objectives and APGs. In July 2017, OMB staff said they had taken no actions to address the recommendation. Our July 2016 report also identified options for policymakers to further incorporate tax expenditures into federal budgeting processes, several of which align with the recommendations discussed above. These options could help achieve various benefits, but we also reported that policymakers would need to consider challenges and tradeoffs in deciding whether or how to implement them. For example, one option was to require that all tax expenditures, or some subset of them, expire after a finite period. This option could result in greater oversight, requiring policymakers to explicitly decide whether to extend some or all tax expenditures. One consideration with this option is that it could lead to frequent changes in the tax code, such as from extended or expired tax expenditures, which can create uncertainty and make tax planning more difficult.

Long-standing Weaknesses Persist in Ensuring Performance Information Is Useful and Used; Expanded Use of Data-Driven Reviews Could Help Agencies Better Achieve Results

Federal Managers Generally Did Not Report Improvements in Their Use of Performance Information in Decision Making

Our previous work has shown that using performance information in decision making is essential to improving results. Performance information can be used across a range of management activities, such as setting priorities, allocating resources, or identifying problems to be addressed. However, our work continues to show that agencies can better use performance information in decision making, as shown in the example in the text box below.

Department of Justice (DOJ) Could Better Analyze Performance Information to Reduce Backlog in Immigration Courts

In June 2017, we found that the case backlog—cases pending from previous years that remain open at the start of a new fiscal year—at DOJ’s Executive Office for Immigration Review (EOIR) courts more than doubled from fiscal years 2006 through 2015. Stakeholders identified various factors that potentially contributed to the backlog, including continuances—temporary case adjournments until a different day or time. Our analysis of continuance records showed that the use of continuances increased by 23 percent from fiscal years 2006 through 2015. We found that EOIR collects continuance data but does not systematically assess them. Systematically analyzing the use of continuances could provide EOIR officials with valuable information about challenges the immigration courts may be experiencing, such as with operational issues like courtroom technology malfunctions, or areas that may merit additional guidance for immigration judges. Further, using this information to potentially address operational challenges could help that office meet its goals for completing cases in a timely manner. We recommended that the Director of EOIR systematically analyze immigration court continuance data to identify and address any operational challenges faced by courts or areas for additional guidance or training. EOIR agreed with this recommendation.
EOIR stated that it supports conducting additional analysis of immigration court continuance data and recognizes that additional guidance or training regarding continuances may be beneficial to ensure that immigration judges use continuances appropriately in support of EOIR’s mission to adjudicate immigration cases in a careful and timely manner. We will monitor EOIR’s progress in taking these actions.

Our 2017 survey of federal managers shows little change in their reported use of performance information. Using a set of survey questions, we previously developed an index that reflects the extent to which managers reported that their agencies used performance information for various management activities and decision making. The index suggests that government-wide use of performance information did not change significantly between 2013 and 2017, and it is statistically significantly lower relative to our 2007 survey, when we created the index. Figure 6 shows the questions included in the index and the government-wide results. On individual survey items, federal managers in 2017 reported either no change or decreases in their use of performance information, both compared to our last survey and compared to when those survey items were first introduced. These results are generally consistent with our last few surveys. For example, in 2008 we found that there had been little change in federal managers’ reported use of performance information government-wide from 1997 to our 2007 survey. Citing those results, the Senate Committee on Homeland Security and Governmental Affairs report accompanying the bill that would become GPRAMA stated that agencies were not consistently using performance information to improve their management and results. The report further stated that provisions in GPRAMA are intended to address those findings and increase the use of performance information to improve performance and results. However, five items that were highlighted in our 2008 statement on the 2007 survey results generally show no improvement when compared to the 2017 results, as shown in figure 7. The one exception is managers’ reported use of performance information to refine program performance measures. While this item was statistically significantly higher in 2013 relative to 2007—an estimated 46 percent to 53 percent—the 2017 result (43 percent) is a statistically significant decrease relative to 2013 and is not statistically different from the 2007 results. Another item, the use of performance information to adopt new program approaches or change work processes, also was statistically significantly lower in 2017 (47 percent) when compared to 2007 and 2013 (53 and 54 percent, respectively). This is of particular concern as agencies are developing their reform plans. Moreover, when compared to our 1997 survey, the 2017 results show four of the five items are statistically significantly lower, and the remaining item—allocating resources—has not changed. Similarly, we found there was no improvement in 2017 for more recent survey items on other uses of performance information compared to the years in which they were introduced, as shown in figure 8. Although one item, on the use of performance information to develop program strategy, was statistically significantly higher in 2013 relative to 2007 (an estimated 58 and 51 percent, respectively), the 2017 result (53 percent) does not represent a statistically significant change from either of those years.
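For readers unfamiliar with how such year-to-year survey comparisons are typically judged, the sketch below illustrates one conventional approach—a two-proportion z-test—for checking whether two estimated percentages differ by more than sampling error alone. It is illustrative only: the respondent counts are invented for this example, and the actual analyses underlying the figures above account for the survey's complex sample design and weighting, which this simple test does not.

import math

def two_proportion_z(p1, n1, p2, n2):
    # z statistic for the null hypothesis that the two underlying rates are equal
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    standard_error = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / standard_error

# Example: the reported decline from an estimated 50 percent (2013) to
# 43 percent (2017) on the coordination item. The sample sizes (2,500
# respondents per survey) are hypothetical, not the surveys' actual sizes.
z = two_proportion_z(0.43, 2500, 0.50, 2500)
print(f"z = {z:.2f}")  # magnitudes above 1.96 indicate significance at the 5 percent level

In practice, a design-adjusted standard error (for example, one derived from replicate weights) would replace the simple pooled formula above when working with a stratified or otherwise complex survey sample.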
Another item, on the use of performance information to streamline programs to reduce duplicative activities, is statistically significantly lower relative to 2013, when it was introduced (from 44 to 33 percent in 2017). This is especially concerning because streamlining and reducing duplication are to be key parts of agencies’ reform plans. There is one area in the survey where we saw improvement: an estimated 46 percent of managers agreed to a great or very great extent that employees who report to them pay attention to their agency’s use of performance information in management decision making. That is statistically significantly higher relative to 2013 (40 percent), as well as when compared to when the item was introduced in 2007 (37 percent). For a new and related item in the 2017 survey that asked managers how much attention their employees pay to the use of performance information in decision making compared to 3 years ago, we found that an estimated 48 percent reported that employees pay about the same amount of attention, while 33 percent reported that employees pay somewhat or a great deal more attention.

Federal Managers Generally Did Not Report Changes in Applying Management Practices That Promote the Use of Performance Information

In September 2005, we identified five practices that agencies can apply to enhance the use of performance information in their decision making and improve results: demonstrating management commitment; communicating performance information frequently and efficiently; improving the usefulness of performance information, such as by ensuring the accessibility of the information; developing the capacity to use performance information; and aligning agency-wide goals, objectives, and measures. Many of the requirements put in place by GPRAMA reinforce the importance of these practices. Our 2017 survey of federal managers includes a number of items related to these practices. However, the 2017 results suggest that managers have not effectively adopted them. In the following sections, we examine several of the practices to enhance the use of performance information and their related survey items further. In doing so, we also highlight a subset of six survey items related to these practices that, while separate from those in our use of performance information index, we found in September 2014 to have a statistically significant and positive relationship with it.

Demonstrating Management Commitment

The commitment of agency leaders to results-oriented management is critical to increased use of performance information for policy and program decisions. GPRAMA requires top leadership involvement in performance management, including leading data-driven performance reviews. However, we have previously reported that improvements are needed to strengthen leadership’s commitment to use performance information, as discussed in the text box below.

Department of Defense Should Strengthen Leadership Responsibilities for Using Performance Information

In January 2005, we designated the Department of Defense’s (DOD) approach to business transformation as high-risk because DOD had not taken the necessary steps to achieve and sustain business reform on a broad, strategic, department-wide, and integrated basis.
In the February 2017 update to our High-Risk List, we found that DOD had taken some positive steps to improve its business transformation efforts. Remaining actions include continuing to hold business function leaders accountable for diagnosing performance problems and identifying strategies for improvement, and leading regular DOD performance reviews regarding transformation goals and associated metrics and ensuring that business function leaders attend these reviews to facilitate problem solving. In July 2017, DOD officials told us that the department’s performance reviews have been put on hold until after the new Agency Strategic Plan is issued. We will review DOD’s updated Agency Strategic Plan when it is issued (expected in February 2018, as required by GPRAMA) to see if it addresses continuing to hold business function leaders accountable for diagnosing performance problems and identifying strategies for improvement. We will continue to monitor the status of these actions. GAO, High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others, GAO-17-317 (Washington, D.C.: Feb. 15, 2017).

Results from our 2017 survey show no statistically significant difference relative to 2013 in managers’ perceptions of leaders’ and supervisors’ attention and commitment to the use of performance information. (See figure 9.) Three items are statistically significantly different from the years when they were introduced. Two items increased between 1997 and 2017: “changes by management to my program(s) are based on results-oriented information” (from an estimated 16 to 25 percent), and “the individual I report to periodically reviews with me the outcomes of my program(s)” (from 42 to 54 percent). For the third item, “top leadership demonstrates a strong commitment to using performance information to guide decision making,” results decreased from 49 percent in 2007 to 42 percent in 2017. New items in the 2017 survey show some improvement in management commitment to the use of performance information in decision making. An estimated 36 percent of federal managers reported that, when compared to 3 years ago, the individual they report to pays somewhat or a great deal more attention to the use of performance information in decision making, while 46 percent said they pay about the same amount of attention. Additionally, an estimated 21 percent of federal managers said that, when compared to 3 years ago, the head of their agency pays somewhat or a great deal more attention to the use of performance information in decision making, while 33 percent said they pay about the same amount of attention.

Communicating Performance Information

Communicating performance information frequently and effectively throughout an agency can help to achieve the agency’s goals. GPRAMA includes requirements for communicating performance information, such as reporting progress updates for APGs at least quarterly. However, our prior work has found that some agencies could continue to improve in the communication of performance information, as illustrated by the example in the text box below.

Department of Education (Education) Could Better Share Effective Practices across States in Grant Program

Education awards 21st Century Community Learning Centers grants to states, which in turn competitively award funds to local organizations that use them to offer academic enrichment and other activities to improve students’ academic and behavioral outcomes.
In April 2017, we found that states are experiencing substantial difficulty in sustaining their programs after 21st Century funding ends. We further found that Education was missing opportunities in its monitoring efforts to collect information on states’ strategies and practices for program sustainability—information that could be useful for sharing promising practices across states. We recommended that Education use the information it collects from its monitoring visits and ongoing interactions with states to share effective practices across states for sustaining their 21st Century programs once program funding ends. Education neither agreed nor disagreed with the recommendation but outlined steps it is taking to address it. We will continue to monitor progress on the implementation of this recommendation.

There is no difference for two survey items on federal managers communicating performance information relative to 2013 or since those items were introduced in 2007. In 2017, we estimate that 44 percent of federal managers agreed to a great or very great extent that agency managers at their level effectively communicate performance information on a routine basis. In addition, 34 percent agreed to a great or very great extent that managers at their level use performance information to share effective program approaches with others. Our 2017 survey data also indicate that agencies may not be effectively communicating to their employees about contributions to CAP goals or progress toward achieving APGs. Of the estimated 54 percent of federal managers who indicated they were familiar with CAP goals, 23 percent reported that their agency has communicated to its employees about those goals to a great or very great extent. Of the 74 percent of federal managers who indicated familiarity with APGs, 44 percent reported that their agency has communicated on progress toward achieving those goals to a great or very great extent.

Improving the Usefulness of Performance Information

Our prior work has shown that agencies should consider users’ differing needs—for accessibility, accuracy, completeness, consistency, ease of use, timeliness, and validity, among other things—to ensure that performance information will be both useful and used. GPRAMA introduced several requirements that could help to address aspects of usefulness, such as requiring agencies to disclose more information about the accuracy and validity of their performance data and actions to address limitations to the data. However, agencies face challenges in ensuring their performance information is useful, with one instance from our past work described in the text box below.

The Environmental Protection Agency (EPA) Could Improve Usefulness of Information in Planned Grantee Portal

EPA monitors performance reports and program-specific data from grantees to ensure that grants achieve environmental and other program results. However, in July 2016, we found that EPA’s 2014 internal analysis of its grants management business processes identified improvements that, if implemented into EPA’s planned web-based portal, could improve the accessibility and usefulness of information in grantee performance reports for EPA, grantees, and other users. We recommended, among other actions, that EPA incorporate expanded search capability features, such as keyword searches, into its proposed web-based portal for collecting and accessing performance reports to improve their accessibility.
EPA agreed with our recommendation but stated that it is a long-term initiative, subject to the agency’s budget process and replacement of its existing grants management system. As of May 2017, EPA officials said that they have not begun work on the web-based portal project, which is subject to the availability of funds.

Federal managers generally responded similarly in 2017 on a variety of survey items related to usefulness, relative to earlier surveys. On a broadly worded item, less than half of managers agreed to a great or very great extent that agency managers at their level take steps to ensure that performance information is useful and appropriate. At an estimated 43 percent in 2017, this represents no statistically significant change compared to our last surveys in 2013 or 2007, when the item was introduced. Responses to four survey items indicate no changes in hindrances related to the usefulness of performance information. There is no statistically significant change in managers reporting hindrances compared to 1997 or 2013, as shown in figure 10. In addition, there was a statistically significant increase when compared to 2013 on only one of six items about managers’ views on the usefulness of performance information, as shown in figure 11. As the figure shows, approximately one-third to half of managers agreed to a great or very great extent on each item related to the usefulness of performance information. Although less than half of managers reported having sufficient information on the validity of performance data used to make decisions, this represents a statistically significant increase to an estimated 42 percent in 2017 compared to 36 percent in 2013, and from 28 percent in 2000, when this item was introduced. This is a notable improvement because our September 2014 report found that the strongest driver of the use of performance information was whether federal managers had confidence in its validity. Our analysis suggests that easy access to performance information is related to the effective communication of performance information. Of the estimated 49 percent of federal managers in 2017 who agreed to a great or very great extent that performance information is easily accessible to managers at their level, 63 percent also agreed that agency managers at their level effectively communicate performance information on a routine basis to a great or very great extent. Conversely, of the 20 percent who agreed to a small or no extent that performance information is easily accessible to managers at their level, 12 percent also agreed that agency managers at their level effectively communicate performance information on a routine basis to a great or very great extent.

Developing the Capacity to Use Performance Information

Our prior work has shown that building capacity—including analytical tools and staff expertise—is critical to using performance information in a meaningful manner. GPRAMA lays out specific requirements that reinforce the importance of staff capacity to use performance information. The act directed the Office of Personnel Management (OPM) to take certain actions to support agency hiring and training of performance management staff. Specifically, by January 2012, OPM was to identify skills and competencies needed by government personnel for setting goals, evaluating programs, and analyzing and using performance information for improving government efficiency and effectiveness.
By January 2013, OPM was to incorporate these skills and competencies into relevant position classifications and to work with each agency to incorporate the identified skills into employee training. In April 2013, we found that OPM had completed its work on the first two responsibilities and taken steps to work with agencies to incorporate performance management staff competencies into training. However, OPM did not assess competency gaps among agency performance management staff to inform its work. Without this information, OPM, working with the PIC, was not well-positioned to focus on the most-needed resources and help other agencies use them. We recommended that the Director of OPM, in coordination with the PIC and the Chief Learning Officer Council, work with agencies to take the following three actions:
1. Identify competency areas needing improvement within agencies.
2. Identify agency training that focuses on needed performance management competencies.
3. Share information about available agency training on competency areas needing improvement.
In July 2017, PIC staff stated they have not focused on identifying competency areas because the competencies do not resonate strongly with the performance community. Instead, staff said they identified a need for introductory training on performance management, which they have developed and piloted. They said that they are not sure when they will implement the training, since the PIC is reviewing priorities with its new executive director. We continue to believe that identifying the competency areas would be useful, and will monitor the PIC’s efforts to identify and share training. The need for performance management training is further highlighted by our survey results. Our 2017 survey shows no statistically significant change in managers’ responses about the availability of training on various performance management activities relative to 2013, including the use of performance information to make decisions. However, the response to each of the six questions related to specific training is statistically significantly higher relative to the year in which it was introduced, as shown in figure 12. Similarly, in 2017 there was no statistically significant change on four survey items related to agencies’ analysis and evaluation tools and staff’s skills and competencies when compared to 2013 or when these items were introduced. We estimate that in 2017: 29 percent of managers agreed to a great or very great extent that their agencies were investing in resources to improve the agencies’ capacity to use performance information; 28 percent of managers agreed to a great or very great extent that their agencies were investing the resources needed to ensure that performance data are of sufficient quality; 33 percent of managers agreed to a great or very great extent that their agencies have sufficient analytical tools for managers at their levels to collect, analyze, and use performance information; and 33 percent of managers agreed to a great or very great extent that the programs they are involved with have sufficient staff with the knowledge and skills needed to analyze performance information.

Conducting Additional Data-Driven Reviews Could Increase the Use of Performance Information in Decision Making

Performance reviews can serve as a strategy to bring leadership and other responsible parties together to review performance information and identify important opportunities to drive performance improvements.
Our prior work has examined how different types of performance reviews—strategic reviews, data-driven reviews, and retrospective regulatory reviews—can contribute to agencies assessing progress toward desired results.

Strategic reviews: As previously mentioned, in implementing GPRAMA, OMB established a review process in which agencies are to annually assess their progress in achieving each strategic objective in their strategic plans, known as strategic reviews. Given the long-term and complex nature of many outcomes, the strategic review should be informed by a variety of evidence regarding the implementation of strategies and their effectiveness in achieving outcomes. OMB’s guidance states that the strategic review process should consider multiple perspectives and sources of evidence to understand the progress made on each strategic objective. It further states that the results of these reviews should inform many of the decision-making processes at the agency, as well as decision making by the agency’s stakeholders, in areas such as long-term strategy, budget formulation, and risk management. In 2017, agencies are completing their fourth round of these reviews. Our prior work has identified ways in which agencies can effectively conduct these reviews and leverage the results that come from them. In July 2015, we identified seven practices federal agencies can employ to facilitate effective strategic reviews. (See sidebar.) In addition, earlier this month we reported on selected agencies’ experiences in implementing these reviews. Specifically, we found that (1) strategic reviews helped direct leadership attention to progress on strategic objectives, (2) agencies used existing management and performance processes to conduct the reviews, and (3) agencies refined their reviews by capturing lessons learned.

Data-driven reviews: GPRAMA requires agencies to review progress toward APGs at least once a quarter. The Senate Committee on Homeland Security and Governmental Affairs report accompanying the bill that would become GPRAMA stated that this approach is aimed at increasing the use of performance information to improve performance and results. In February 2013, we identified nine leading practices to promote successful data-driven performance reviews in the federal government. (See sidebar.) In July 2015, we found that most of the 24 CFO Act agencies were conducting their reviews in line with GPRAMA requirements and our leading practices. Moreover, agencies reported that their data-driven performance reviews had positive effects on progress toward agency goals, collaboration between agency officials, the ability to hold officials accountable for progress, and efforts to improve the efficiency of operations. Our 2017 survey shows that federal managers remain largely unfamiliar with their agency’s data-driven performance reviews, also known as quarterly performance reviews (QPRs). An estimated 35 percent of managers reported familiarity with their agency’s QPRs. Survey results show that a greater percentage of Senior Executive Service (SES) managers than non-SES managers reported that they were familiar with QPRs. Approximately 50 percent of SES managers reported being somewhat or very familiar with QPRs; 34 percent of non-SES managers reported the same.
However, for the estimated 35 percent of managers who reported familiarity with QPRs, the more they viewed their programs as being subject to a QPR, the more likely they were to report that their agency’s QPRs were driving results and conducted in line with our leading practices. Figure 13 shows several illustrative examples of these survey items. For example, of the estimated 48 percent of federal managers who reported their programs being subject to QPRs to a great or very great extent, 83 percent also reported their agencies use QPRs to identify problems or opportunities associated with agency performance goals. Conversely, for the 24 percent of managers who reported their programs were subject to QPRs to a small or no extent, 22 percent also reported the reviews were used for these purposes to a great or very great extent. Being subject to a QPR is also positively related to viewing QPRs as having led to similar meetings at lower levels. An estimated 62 percent of federal managers who reported being subject to QPRs to a great or very great extent also reported their agencies have similar meetings at lower levels to a great or very great extent. An estimated 16 percent of federal managers subject to QPRs to a small or no extent reported the same.

Despite the reported benefits of and results achieved through QPRs, as found by our past work and survey data, these reviews are not necessarily widespread. GPRAMA requires agencies to conduct QPRs for APGs, which represent a small subset of goals—generally 2 to 8 priority goals at each designated agency, with approximately 100 total government-wide. Moreover, these required reviews are at the department (or major independent agency) level. These reasons may explain why most managers reported they were not familiar with the reviews. As described previously, our 2017 survey data show that the reported use of performance information in decision making generally has not improved and in some cases is lower than it was 20 years ago. Survey data also show that managers generally have not reported increases in their employment of practices that further promote the use of performance information in decision making. This suggests that agencies could increase the use of performance information in decision making and the likelihood of achieving desired results by going beyond the specific GPRAMA requirements and expanding their use of data-driven performance reviews—in line with leading practices—to more broadly cover other agency-wide performance goals, as well as goals at lower levels within the agency. For example, such reviews at the program level could help inform the previously mentioned portfolio reviews required by the Program Management Improvement Accountability Act (PMIAA). We have already suggested expanding reviews to other performance goals. Our management agenda for the presidential and congressional transition includes a key action to expand the use of data-driven performance reviews to assess progress toward meeting agency performance goals. Our prior work has stated that although GPRAMA’s requirements apply at the agency-wide level, they can also serve as leading practices at other organizational levels, such as component agencies, offices, programs, and projects. In addition, federal internal control standards call for the design of appropriate control activities, such as top-level reviews of actual performance and reviews by management at the functional or activity level.
The standards also recommend that management design control activities at the appropriate levels in the organizational structure. The July 2017 update to OMB’s guidance states that agency leaders, including various chief officer positions, are to conduct frequent data-driven reviews to drive improvements on various management functions. For example, the agency Chief Human Capital Officer is to conduct quarterly data-driven reviews (known as HRStat) to monitor the progress of human capital goals and measures contained in the human capital operating plan. Beyond these management areas, OMB’s guidance also states that agencies may expand quarterly progress reviews beyond APGs to include other goals and priorities. However, OMB’s guidance does not identify practices for agencies to expand the use of these reviews to other goals, such as other agency-wide performance goals or those at lower levels within the agency. As mentioned previously, one of the responsibilities of the Performance Improvement Council (PIC) is to facilitate the exchange among agencies of practices that have led to performance improvements within specific programs, agencies, or across agencies. By working with the PIC to identify and share among agencies practices to expand the use of data-driven reviews, OMB could help agencies increase the use of performance information in decision making and achieve results.

Retrospective regulatory reviews: In retrospective reviews, agencies evaluate how existing regulations are working in practice and whether they are achieving expected outcomes. GPRAMA requires agencies to identify and assess how their various program activities and other activities, including regulations, contribute to APGs. However, in April 2014, we found that agencies reported mixed experiences linking retrospective analyses to APGs. We recommended that OMB strengthen these reviews by issuing guidance for agencies to take actions to ensure that contributions made by regulations toward achieving APGs are properly considered, and improve how retrospective regulatory reviews can be used to help inform assessments of progress toward these APGs. OMB staff agreed with this recommendation and stated that the agency was working on strategies to help facilitate agencies’ ability to use retrospective reviews to inform APGs. To that end, in April 2017, OMB issued guidance to agencies that, among other things, emphasized the importance of performance measures related to evaluating and improving the net benefits of their respective regulatory programs. OMB included explicit references to section 6 of Executive Order 13563, which directed agencies’ efforts to conduct retrospective regulatory reviews. Specifically, the updated guidance encourages agencies to establish and report “meaningful performance indicators and goals for the purpose of evaluating and improving the net benefits of their respective regulatory programs.” The guidance further states that agencies’ efforts to improve such net benefits may be conducted as part of developing agency strategic and performance plans and priority goals. In July 2017, OMB confirmed that the updated guidance was issued, in part, to address our April 2014 recommendation.
Evidence-Based Tools Can Help Federal Agencies Use Performance Information for Decision Making For several years, OMB has encouraged agencies to expand their use of evidence—performance measures, program evaluation results, and other relevant data analytics and research studies—in budget, management, and policy decisions with the goal of improving government effectiveness. In particular, OMB has encouraged agencies to strengthen their program evaluations—systematic studies that use research methods to address specific questions about program performance. Evaluation is closely related to performance measurement and reporting. Evaluations can be designed to better isolate the causal impact of programs from other external economic or environmental conditions in order to assess a program's effectiveness. Thus, an evaluation study can provide a valuable supplement to ongoing performance reporting by measuring results that are too difficult or expensive to assess annually, explaining the reasons why performance goals were not met, or assessing whether one approach is more effective than another. Despite the valuable insights and information that program evaluations can provide, we continue to find that most federal managers lack access to or awareness of such studies. Our 2017 survey shows that an estimated 40 percent of managers reported that an evaluation of any program, operation, or project in which they were involved had been completed within the past 5 years—comparable to the results in our 2013 survey, when questions about program evaluations were added. In recent years, OMB has encouraged agencies to explore evidence-based tools to strengthen agency and grantee evaluation capacity, consider the effectiveness of their programs, and foster innovation rooted in research and rigorous evaluation. During the past 2 years, we examined several of those tools, as described below. Pay for success: Also known as social impact bonds, pay for success is a contracting mechanism under which investors provide the capital the government uses to provide a social service. The government specifies performance outcomes in pay for success contracts and generally includes a requirement that a program's impact be independently evaluated. The evaluators also are to regularly review performance data, while those managing and investing in a project focus on performance and accountability, as shown in figure 14. In September 2015, we found that the federal government's involvement in pay for success had been limited. In addition, a formal mechanism for federal agencies to collaborate on pay for success did not exist. We concluded that, given the evolving nature of pay for success, a mechanism for federal agencies to collaborate would increase access to leading practices. We therefore recommended that OMB establish a formal means for federal agencies to collaborate on pay for success. OMB concurred and, in February 2016, announced that it had developed the Pay for Success Interagency Learning Network with representatives from 10 federal agencies to share lessons, hone policy, and strengthen implementation. Tiered evidence grants: Tiered evidence grants seek to incorporate evidence of effectiveness into grant making. Federal agencies establish tiers of grant funding based on the level of evidence grantees provide on their approaches to deliver social, educational, health, or other services. (See figure 15.)
Smaller awards are used to test new and innovative approaches, while larger awards are used to scale up approaches that have strong evidence of effectiveness. This creates incentives for grantees to use approaches supported by evidence and helps them build the capacity to conduct evaluations. In September 2016, we found that interagency collaboration had helped federal agencies that administer tiered evidence grants address challenges and share lessons learned. At that time, such collaborative efforts relied on informal networks. We recommended that OMB establish a formal means for agencies to collaborate on tiered evidence grants. OMB had no comment on the recommendation. In July 2017, OMB staff told us that they had established an interagency working group and other mechanisms to facilitate collaboration and disseminate information on tiered evidence grants. Performance partnerships: Performance partnerships allow federal agencies to provide grant recipients flexibility in how they use funding across two or more programs along with additional flexibilities. In exchange, the recipient commits to improve and assess progress toward agreed-upon outcomes. Figure 16 provides an overview of the performance partnership model. In April 2017, we examined two performance partnership initiatives authorized by Congress: the Environmental Protection Agency's Performance Partnership Grants and the Performance Partnership Pilots for Disconnected Youth, which allows funding from multiple programs across multiple agencies to be combined into pilot programs serving disconnected youth. For the Performance Partnership Pilots for Disconnected Youth, we found that the agencies involved in the initiative had not fully identified the key financial and staff resources each agency would need to contribute over the lifetime of the initiative in line with leading practices for interagency collaboration. This was because agencies primarily had been focused on meeting near-term needs to support design and implementation. We also found that agencies had not developed criteria to help determine whether, how, and when to implement the flexibilities tested by the pilots in a broader context. (This is known as scalability.) Officials involved in the pilots told us it was too early in pilot implementation to determine such criteria. However, by not identifying these criteria while designing the pilots, the agencies risked not collecting needed data during pilot implementation. We recommended that OMB coordinate with federal agencies to identify (1) agency resource contributions needed for the lifetime of the pilots and (2) criteria and related data for assessing scalability. OMB neither agreed nor disagreed with these recommendations. We continue to monitor progress on these recommendations. Agencies Have Made Some Progress in Aligning Daily Operations with Results, but Could Take Additional Actions Agencies Could Take Additional Actions to Further Develop Results-Oriented Cultures In 2003, we identified nine key practices for effective performance management that collectively create a "line of sight" between individual performance and organizational success. (See sidebar.) Our recent work and the results of our 2017 survey of federal managers highlight areas where agencies have made progress but could take additional action to better reflect several of these practices, thereby better instilling results-oriented cultures.
Align individual performance expectations with organizational goals: Our 2003 report found that high-performing organizations use their performance management systems to help individuals see the connection between their daily activities and organizational goals. The executive branch has taken several steps to link individual and organizational results. For example, in October 2000, OPM issued guidance to link SES performance expectations with GPRA-required goals. In January 2012, OPM and OMB released a government-wide SES performance appraisal system that provided agencies with a standard framework to manage the performance of SES members. However, our work continues to identify areas for improvement. Goal leaders and deputy goal leaders are responsible for achieving APGs, but our July 2014 review found that the performance plans for a sample of goal and deputy goal leaders generally did not link their individual performance to the broader goal. We recommended that OMB ensure that those plans demonstrate a clear connection with APGs. OMB staff generally agreed with our recommendation. In July 2017, OMB staff stated that components of both OMB and OPM guidance support accountability for agency priority goals. Despite this, we continue to believe that ensuring an explicit connection in performance plans to APGs will improve accountability, and that additional action is needed to do so. In May 2016, we found that the Federal Emergency Management Agency (FEMA) had not aligned Federal Disaster Recovery Coordinators' performance expectations with its organizational goals for implementing the National Disaster Recovery Framework. We concluded that without this linkage, FEMA could not evaluate how effectively the coordinators performed in implementing the framework. We recommended that FEMA align performance expectations consistent with leading practices. The Department of Homeland Security concurred with our recommendation. In July 2017, FEMA stated that it is preparing the Field Leader Manual, which will define the core competencies and duties of coordinators. We will continue to monitor FEMA's actions to implement this recommendation. Our 2017 survey also shows that this linkage could be improved for other federal employees. An estimated 58 percent of federal managers reported using performance information to a great or very great extent in setting expectations for employees they manage or supervise. The 2017 responses do not represent a statistically significant change when compared to our last survey in 2013 (62 percent) or to 1997 (61 percent), the year this survey item was introduced. Address organizational priorities: Our prior work showed that, by requiring and tracking follow-up actions on performance gaps, high-performing organizations underscore the importance of holding individuals accountable for making progress on their priorities. Our past and 2017 surveys have identified differences in responses between SES and non-SES managers reporting being held accountable for results. For example, in 2017, our survey results indicate that there was a statistically significant difference between SES and non-SES managers reporting to a great or very great extent that they were held accountable for results of the programs for which they are responsible. However, our 2017 survey shows no change compared to our last survey in either SES or non-SES managers reporting they were held accountable for results.
There are statistically significant increases when compared to 1997, when these survey items were introduced. For example, an estimated 79 percent of SES managers and 64 percent of non-SES managers reported being held accountable to a great or very great extent for results of the programs for which they are responsible in 2017. This does not represent a statistically significant change from our 2013 survey (80 percent and 67 percent, respectively), but it is statistically significantly higher than the 62 percent of SES managers and 54 percent of non-SES managers in 1997. (See figure 17.) Similarly, as shown in figure 18, an estimated 71 percent of SES managers reported being held accountable to a great or very great extent for accomplishing agency strategic goals in 2017. This represents no statistically significant change since 2013 (73 percent), but it is a statistically significant increase compared to when this item was introduced in 2003 (61 percent). Additionally, as figure 18 shows, a gap between being held accountable for strategic goals and having the decision-making authority needed to help accomplish those goals has nearly closed, due to an increase in the latter survey item. The estimated 69 percent of SES managers who reported having such authority to a great or very great extent in 2017 is a statistically significant increase relative to both 2013 (61 percent) and 1997 (51 percent). As noted earlier, GPRAMA requires goal leaders for CAP goals and APGs. Our past work has generally found that they are in place. GPRAMA also requires agencies to identify an agency official responsible for resolving major management challenges, which can help ensure accountability. (See sidebar.) However, in June 2016 we found that 17 of the 24 CFO Act agencies had not identified an agency official responsible for resolving each of their challenges, partly because OMB guidance was not clear that major management challenges should be identified in agency performance plans. We recommended that the 17 agencies identify such officials in their performance plans, and that OMB clarify its guidance. OMB revised its guidance accordingly in July 2016, and, as of July 2017, 7 of the 17 agencies had identified officials responsible for resolving major management challenges. Link pay to individual and organizational performance: High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. Our work has found that agencies have made progress in this area. For example, in July 2013, we found that the Securities and Exchange Commission (SEC) lacked mechanisms to monitor how supervisors used its performance management system to recognize and reward performance. To help enhance the credibility of SEC's performance management system, we recommended that it create mechanisms to monitor how supervisors use the performance management system. In a subsequent (December 2016) report, we found that, in response to our recommendation, SEC began monitoring how supervisors provide feedback, recognize and reward staff, and address poor performance. However, federal managers generally reported no change on three items related to recognizing and rewarding employee performance since our last survey in 2013 (figure 19).
One of those items—managers agreeing to a great or very great extent that employees in their agency receive positive recognition for helping the agency to accomplish its strategic goals—had a statistically significant increase between 1997 and 2017 (from an estimated 26 percent to 46 percent). Make meaningful distinctions in performance: Effective performance management requires the organization's leadership to meaningfully distinguish between acceptable and outstanding performance of individuals and to appropriately reward those who perform at the highest level. For example, in January 2015, we found disparities in performance ratings for SES among agencies. Across the 24 CFO Act agencies, the percent of SES rated at the highest level ranged from about 22 percent to 95 percent in fiscal year 2013. To help address these disparities, we recommended that the Director of OPM consider the need to refine the performance certification guidelines addressing distinctions in performance. To address this recommendation, OPM informed us, in June 2015, that it had convened a cross-agency working group that developed a standard template for agencies to complete and post on a website to more transparently justify their SES ratings distributions. In May 2016, we found that about 74 percent of non-SES employees under a five-level appraisal system—the most commonly used system—were rated in the top two of five performance categories in 2013. We explored this issue further in our December 2016 review of human capital challenges at the Veterans Health Administration (VHA), which illustrates the importance of making meaningful distinctions in performance for non-SES employees. We found that in fiscal year 2014, about 73 percent of VHA employees were rated in the top two of five performance categories. This may have been due, in part, to a policy that did not require standards to be defined for each level of performance. We recommended that VHA ensure that meaningful distinctions are being made in employee performance ratings by reviewing and revising performance management policies consistent with leading practices, among other actions. The Department of Veterans Affairs partially concurred with our recommendation. In May 2017, the department stated that it had begun piloting a new performance management process and would analyze results at the end of fiscal year 2017. Additional OMB Actions Could Help Address Long-Standing Performance Measurement Issues One key aspect of connecting daily operations to results is aligning program performance measures to agency-wide goals and objectives. However, in 2017, an estimated 50 percent of federal managers agreed to a great or very great extent that managers at their level took steps to create such an alignment. There has been no statistically significant change since this item was introduced in 2007. In addition, GPRAMA calls for agencies to develop a balanced set of performance measures, which reinforces the need for agencies to have a variety of measures across program areas. Our 2017 survey shows that managers have not reported any difference in the availability of performance measures for their programs when compared to the 2013 results. However, the 2017 result (an estimated 87 percent) represents a statistically significant increase when compared to 1997 (76 percent).
When asked about the availability of certain types of performance measures, three of the five types (outcome, output, and efficiency) were statistically significantly higher in 2017 when compared to our initial 1997 survey. However, when comparing 2017 results to those in 2013, two of the five types (output and quality) showed a statistically significant decrease, and the other types did not change. These are illustrated in figure 20. Beyond the survey results, our work has found that some agencies had not developed or used outcome measures, but have since taken steps to do so. Agencies have been responsible for measuring program outcomes since GPRA was enacted in 1993. The text box below describes two illustrative examples from our past work. Examples of Agencies That Did Not Develop or Use Outcome Measures Patient access to electronic health information: In March 2017, we found that the Department of Health and Human Services (HHS) had invested over $35 billion since 2009 to enhance patient access to electronic health information, among other things. HHS had not developed outcome measures to gauge the effectiveness of these efforts, which meant the department did not have information to determine whether the efforts were contributing to its overall goals. We recommended that HHS develop relevant outcome measures and HHS concurred. Safety interventions: According to the Federal Motor Carrier Safety Administration (FMCSA), between 2011 and 2015, over 4,000 people died each year in crashes involving motor carriers (see GAO, Motor Carriers: Better Information Needed to Assess Effectiveness and Efficiency of Safety Interventions, GAO-17-49 (Washington, D.C.: Oct. 27, 2016)). Further OMB actions could also help agencies make progress in measuring the performance of different program types. In our June 2013 report on initial GPRAMA implementation, we found that agencies experienced common issues in measuring the performance of various types of programs, such as contracts and grants. We recommended that OMB work with the PIC to develop a detailed approach to examine those difficulties. Although they took some actions, OMB and the PIC have not yet developed a comprehensive and detailed approach to address these issues. We concluded that, without such an approach, it would be difficult for the PIC and agencies to fully understand these measurement issues and develop a crosscutting approach to help address them. In August 2017, OMB staff stated that efforts related to the future implementation of PMIAA could help address this recommendation. As highlighted in table 1, our work continues to show why it is important for OMB and the PIC to take actions to more fully address our recommendation. Increased Transparency and Public Engagement Could Improve Government Oversight and Foster Innovation Further OMB Action Could Improve the Transparency of Government-wide Performance and Financial Data Congress has passed legislation to increase the transparency and accessibility of federal performance and financial data. For example, GPRAMA modernized agency reporting requirements to ensure that they make timely, relevant data available to inform decision making by Congress and agency officials as well as improve transparency for the public. Results of our 2017 survey, however, show the need for improvements in the public availability of agency performance information.
An estimated 17 percent of managers reported that their agency's performance information is easily accessible to the public to a great or very great extent, the same percentage as in 2013. Moreover, of the 87 percent of managers who reported there are performance measures for the programs they are involved in, 25 percent reported that they use information obtained from performance measurement when informing the public about how programs are performing to a great or very great extent. This is not statistically different from the 30 percent estimated in 2013. The DATA Act, enacted in 2014, built on previous transparency legislation by expanding what federal agencies are required to report regarding their spending. The act significantly increases the types of data that must be reported and requires government-wide data standards and regular reviews of data quality, to help improve the transparency and accountability of federal spending data. OMB provides websites and guidance to make agency performance and financial information available to the public; however, our prior work has identified a number of areas related to Performance.gov and the DATA Act where OMB action is needed to improve the transparency and accessibility of this information. Performance.gov: Since 2013, our work has identified a number of issues with Performance.gov, the website intended to serve as a central source of information on the federal government's goals and performance. Over time, we have recommended that OMB take a number of specific actions to improve the website. For example, in June 2013, we found that the website offered an inconsistent user experience and presented accessibility and navigation challenges. To clarify the purpose of the website and enhance its usability, we recommended that OMB take steps to systematically collect customer input. In August 2016, we reported that OMB was not meeting all of the reporting requirements for Performance.gov, and did not have a plan to develop and improve the website. We recommended that OMB ensure that information presented on Performance.gov consistently complies with reporting requirements and develop a plan for the website that includes, among other things, a customer outreach plan. OMB agreed with these recommendations and, in July 2017, OMB staff informed us that they will be partnering with a vendor to redesign Performance.gov to improve the accessibility of information on the website. To inform this redesign, OMB staff said that they will consider our previous recommendations and plan to engage a wide group of stakeholders, including Congress, agency staff, and interested members of the public and outside organizations. OMB staff anticipated releasing updated agency reporting guidance in the fall of 2017 and the redesigned website in February 2018. Under GPRAMA, OMB is required to make available, through Performance.gov, quarterly updates on progress toward CAP goals and APGs. As described earlier, in June 2017 OMB announced that reporting to Performance.gov had been discontinued through the end of fiscal year 2017 as agencies develop new priority goals. However, Performance.gov does not state that it will not be updated, nor does it provide the location of the final progress updates for these goals. OMB's guidance states that agencies should report the results of progress on their previous APGs in their annual performance reports for fiscal year 2017.
Moreover, OMB staff told us that the existing updates on Performance.gov for CAP goals, last updated in December 2016, represent the final updates on those goals, although they are not labeled as such on the website. As a result, those interested in progress updates and reported results for the previous priority goals may not know where they will be able to find this information, limiting the transparency and accessibility of those results for decision makers and the public. DATA Act: The DATA Act requires federal agencies to disclose their spending and link this to program activities so that policymakers and the public can more effectively track federal spending. The act has the potential to improve the accuracy and transparency of federal spending information and increase its usefulness for government decision making and oversight. Since the DATA Act became law, OMB and Treasury have taken significant steps to make more complete and accurate federal spending data available. These have included standardizing data element definitions to make it easier to compare different federal agencies’ financial information, and issuing guidance to help agencies submit required data. In May 2017, federal agencies started to report data under the standardized definitions developed under the act. We have made a number of recommendations to address challenges that could affect the consistency and quality of the data. Addressing these recommendations could help ensure that financial data are provided to the public in a transparent and useful manner. For example, in January 2016, we found some standardized data element definitions were imprecise or ambiguous, which could result in inconsistent or potentially misleading reporting. We recommended that OMB provide agencies with additional guidance to address potential issues with the clarity, consistency, and quality of reported data. OMB released guidance in May and November 2016, but in April 2017 we found that additional guidance was needed to help agencies implement certain data definitions to produce data that would be consistent and comparable across agencies. We are in the process of examining the quality of the data that was submitted by agencies in May 2017 and was made available to the public on an early version of the USAspending.gov website. We expect to issue the results of this work in fall 2017. More Complete Public Reporting of Performance Information Could Enhance Oversight and Accountability Our past work also identified a number of actions agencies need to take to make performance information more transparent. Increasing the accessibility of this information could enhance oversight and accountability of agency performance and results. CAP goals: In May 2016, we found that while selected CAP goal teams were working to develop performance measures to track progress, they were not consistently reporting on their efforts to develop these measures. We recommended that OMB report on Performance.gov the actions that CAP goal teams are taking to develop performance measures and quarterly targets to help ensure that measures are aligned with major activities, and ensure that it is possible to track teams’ progress toward establishing measures. While OMB agreed with this recommendation, it did not address it before reporting on the CAP goals was discontinued, as discussed earlier. 
Customer service standards: As we described earlier, in 2017, an estimated 48 percent of federal managers who indicated they have performance measures for the programs they are involved in also agreed to a great or very great extent that they have customer service performance measures. There has been no statistically significant change relative to our last survey in 2013, or the initial survey in 1997. Relatedly, in October 2014, we reviewed customer service standards at five federal agencies. Customer service standards inform customers about what they have a right to expect when they request services, and the standards should include goals for the quality and timeliness of a service an agency provides to its customers. They should also be easily available to the public so that customers know what to expect, when to expect it, and from whom. In our review of standards at five agencies, however, we found that only Customs and Border Protection had standards that were easily available to the public. We recommended that the other four agencies—the United States Forest Service, Federal Student Aid, the National Park Service (NPS), and the Veterans Benefits Administration (VBA)—make their standards more easily accessible to the public. As of July 2017, only VBA had done so. Major management challenges: In June 2016, we found that 14 of the 24 CFO Act agencies did not describe their major management challenges in their performance plans, as required by GPRAMA. Furthermore, 22 of the 24 agencies reviewed did not report complete performance information for each of their major management challenges, including performance goals, milestones, indicators, and planned actions that they have developed to address such challenges. As a result, it was not always transparent what these agencies considered to be their major management challenges or how they planned to resolve these challenges. We recommended that the 22 agencies describe their major management challenges in their agency performance plans and include goals, measures, milestones, and information on planned actions and responsible officials. As of August 2017, 8 agencies—the U.S. Agency for International Development, Small Business Administration, Nuclear Regulatory Commission, OPM, National Aeronautics and Space Administration (NASA), and the Departments of Education, State, and Veterans Affairs—had fully implemented our recommendations; the other 14 agencies had not. Quality of performance information: In September 2015, we found that six selected agencies reported limited information on the actions they were taking to ensure the quality of their performance information for selected APGs, as required by GPRAMA. We recommended that all six of the agencies work with OMB to fully report this information. In response, the Department of Homeland Security and NASA described how they ensure reliable performance information is reported to external audiences. As of June 2017, the Departments of Agriculture, Defense, the Interior, and Labor had not yet taken actions to address this recommendation by providing more specific explanations of how they ensure reliable performance information is reported for their APGs. Unnecessary reports: GPRAMA requires that OMB guide an annual review of agencies' plans and reports for Congress and include in the President's budget a list of those plans and reports determined to be outdated or duplicative. However, in July 2017, we found that OMB did not implement the report review process on an annual basis, as required.
We also found that OMB published the list of agency plans and reports on Performance.gov, rather than in the President's annual budget, where they may be more visible and useful to congressional decision makers and others. Therefore, we recommended that OMB instruct agencies to identify outdated or duplicative reports on an annual basis and submit or reference the list of identified plans and reports with the President's annual budget. OMB agreed with these recommendations. In July 2017, OMB stated it would include a list of report modification proposals in the President's fiscal year 2019 budget as required by GPRAMA. For all of the unimplemented recommendations described above, we will continue to monitor agencies' actions. Open Innovation Can Help Agencies Engage the Public to Achieve Results, but Guidance for Implementing Initiatives Should Be Improved In addition to providing access to performance and financial information, federal agencies can directly engage and collaborate with citizens, nonprofits, academic institutions, and other levels of government using open innovation strategies. Open innovation involves using various tools and approaches to harness the ideas, expertise, and resources of those outside an organization to address an issue or achieve specific goals. In October 2016, we found that in recent years agencies had frequently used five open innovation strategies—singularly or in combination—to collaborate with citizens and encourage their participation in agency initiatives. (See figure 21.) Our October 2016 report found that agencies can use these strategies for a variety of purposes. To develop new ideas, solutions to specific problems, or new products: For example, from April 2015 to November 2016, the Department of Energy held a prize competition to create more efficient devices that would double the energy captured from ocean waves. According to the competition's website, the winning team achieved a five-fold improvement. To enhance collaboration and agency capacity by leveraging external resources, knowledge, and expertise: For example, every 2 years since 2009, the Federal Highway Administration has engaged stakeholders to identify and implement innovative ideas that have measurably improved the execution of highway construction projects. To collect the perspectives and preferences of a broad group of citizens and external stakeholders: For example, the Food and Drug Administration used in-person and online dialogue to engage outside stakeholders in the development of an online platform designed to make key datasets easily accessible to the public. Subsequently, in June 2017, we found that OMB, the Office of Science and Technology Policy (OSTP), and the General Services Administration (GSA) developed resources to support the use of open innovation strategies by federal agencies. These resources included guidance, staff to assist agencies in implementing initiatives, and websites to improve access to relevant information. For example, GSA developed a step-by-step implementation guide, program management team, and website to help agency staff carry out prize competitions and challenges. Agencies have also developed their own resources, including guidance, staff positions, and websites, to reach specific audiences and to provide tailored support for open innovation strategies they use frequently.
For example, NASA’s Solve website provides a central location for the public to find the agency’s challenges and citizen science projects, as well as links to relevant resources. We also evaluated key government-wide guidance for the five strategies listed above to determine the extent to which the guidance reflects leading practices for effectively implementing open innovation initiatives. We identified these practices in our October 2016 report. We found that the guidance for each strategy reflected these practices to differing extents, as shown in figure 22. We made 22 recommendations to GSA, OMB, and OSTP to enhance the guidance. GSA and OMB generally agreed with these recommendations and OSTP neither agreed nor disagreed. We will monitor their progress toward implementing these recommendations. Conclusions GPRAMA provides important tools that can help decision makers better achieve results and address the federal government’s significant and long-standing governance challenges. Although OMB and agencies have made progress in improving implementation of the act over the years, our work has highlighted numerous opportunities for further improvements. In 2017, OMB removed the priority designation of CAP goals and APGs. For those goals, this action stopped related data-driven reviews and quarterly updates of progress on Performance.gov until new priority goals are published next year. What OMB considers to be the final results of CAP goals for fiscal years 2014 to 2017 already are on Performance.gov (although not labeled as such). In addition, agencies may report on their former APGs in their annual fiscal year 2017 performance reports. However, Performance.gov does not state that it will not be updated or provide the location of the final progress updates for these goals, limiting transparency and its value to the public. OMB has stated its plans to restart implementation of those provisions in February 2018, with the start of a new goal cycle. We believe it is critical for OMB to do so, given the important role those tools play in addressing key governance challenges and the results we have seen in better managing crosscutting areas and driving performance improvements across the government. In addition, OMB has postponed implementation of the federal program inventory. To date, the inventory has only been developed once, in 2013, despite requirements for regular updates to reflect current budget and performance information. OMB has given a variety of reasons for the delays over the past 4 years—most recently, to determine the right strategy to merge implementation of the DATA Act and PMIAA with GPRAMA’s program inventory requirements. Although OMB staff told us that they expect to issue guidance by the end of 2018 to resume implementation of the program inventory requirements, they have not provided more specific time frames and milestones related to the program inventory requirements. Doing so would help agencies prepare for resumed implementation. Moreover, publicly disclosing planned implementation time frames and associated milestones would help ensure that interested stakeholders, such as federal decision makers and the public, are prepared to engage with agencies as they develop and update their program inventories, which in turn could help ensure the inventories meet stakeholders’ needs. 
A well-developed inventory would provide key program, budget, and performance information in one place to help federal decision makers better understand the federal investment and results in given policy areas, and better identify and manage fragmentation, overlap, and duplication. Information architecture offers one approach to developing an inventory. As OMB determines a strategy for implementing the program inventory and develops its guidance, considering such a systematic approach to planning, organizing, and developing the inventory that centers on maximizing the use and usefulness of information could help it ensure the inventory meets GPRAMA requirements as well as the needs of decision makers and the public. Moreover, such an approach could also help OMB implement our past recommendations related to the program inventory, which are intended to ensure the inventory provides more complete information and is useful to various stakeholders. Our survey of federal managers continues to generally show no improvement in their reported use of performance information in decision making, nor in the employment of practices that can enhance such use. One area where our survey data and past work show promise is through the use of regular, leadership-driven reviews of performance data at agencies, especially when conducted in line with related leading practices. However, GPRAMA only requires these data-driven reviews for APGs, which represent a small subset of goals, both within individual agencies as well as across the government. This likely explains why most federal managers were not familiar with the reviews. Identifying and sharing practices for expanding the use of those reviews—such as for additional agency-wide performance goals and at lower levels within agencies—could significantly enhance the use of performance information and drive better results. Recommendations for Executive Action We are making the following four recommendations to OMB: The Director of OMB should update Performance.gov to explain that quarterly reporting on the fiscal year 2014 through 2017 CAP goals and fiscal year 2016 and 2017 APGs was suspended, and provide the location of final progress updates for these goals. (Recommendation 1) The Director of OMB should revise and publicly issue OMB guidance—through an update to its Circular No. A-11, a memorandum, or other means—to provide time frames and associated milestones for implementing the federal program inventory. (Recommendation 2) The Director of OMB should consider—as OMB determines its strategy for resumed implementation of the federal program inventory—using a systematic approach, such as the information architecture framework, to help ensure that GPRAMA requirements and our past recommendations for the inventory are addressed. (Recommendation 3) The Director of OMB should work with the Performance Improvement Council to identify and share among agencies practices for expanding the use of data-driven performance reviews beyond APGs, such as for other performance goals and at lower levels within agencies, that have led to performance improvements. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to the Director of the Office of Management and Budget for review and comment. In comments provided orally and via email, OMB staff agreed with the recommendations in this report.
OMB staff also asked us to (1) consider revising the draft title of the report, to better reflect progress in GPRAMA implementation, and (2) clarify our recommendations on issuing guidance for implementing the federal program inventory and expanding the use of data-driven performance reviews, by describing possible actions that could be taken to implement them. We agreed and made revisions accordingly. We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The GPRA Modernization Act (GPRAMA) includes a statutory provision for us to periodically evaluate implementation of the act. Since 2012, we have issued over 30 products in response to this provision; this is the third summary report. This report assesses how implementation of GPRAMA has affected the federal government’s progress in resolving key governance challenges in (1) addressing crosscutting issues, (2) ensuring performance information is useful and used in decision making, (3) aligning daily operations with results, and (4) building a more transparent and open government. We reviewed relevant statutory requirements, related Office of Management and Budget (OMB) guidance, and our recent work related to GPRAMA implementation and the four key governance challenges included in our reporting objectives. Specifically, since our last summary report in September 2015, we examined various aspects of GPRAMA implementation in 12 products that covered 35 agencies, including the 24 agencies covered under the Chief Financial Officers (CFO) Act of 1990, as amended (identified in table 2). We interviewed OMB and Performance Improvement Council staff to obtain (1) their perspectives on GPRAMA implementation and progress on the four governance challenges, and (2) updates on the status of our past recommendations. We also received updates from other agencies on the status of our past recommendations to them related to GPRAMA implementation. To supplement this review, we administered our periodic survey of federal managers on organizational performance and management issues from November 2016 through March 2017. This survey is comparable to five previous surveys we conducted in 1997, 2000, 2003, 2007, and 2013. We selected a stratified random sample of 4,395 people from a population of approximately 153,779 mid-level and upper-level civilian managers and supervisors working in the 24 executive branch agencies covered by the CFO Act, as shown in table 2. We obtained the sample from the Office of Personnel Management’s (OPM) Enterprise Human Resources Integration (EHRI) database as of September 30, 2015, which was the most recent fiscal year data available at the time. We used file designators indicating performance of managerial and supervisory functions. In reporting survey data, we use the term “government-wide” and the phrases “across the government” or “overall” to refer to the 24 CFO Act executive branch agencies. We use the terms “federal managers” and “managers” to collectively refer to both managers and supervisors. 
We designed the questionnaire to obtain the observations and perceptions of respondents on various aspects of results-oriented management topics. These topics include the presence and use of performance measures, any hindrances to measuring performance and using performance information, agency climate, and program evaluation use. To assess implementation of GPRAMA, the questionnaire included questions to collect respondents' views on various provisions of GPRAMA, such as cross-agency priority goals, agency priority goals, and related quarterly performance reviews. Similar to the five previous surveys, the sample was stratified by agency and by whether the manager or supervisor was a member of the Senior Executive Service (SES). The management levels covered general schedule (GS) or equivalent schedules at levels comparable to GS-13 through GS-15 and career SES or equivalent. Stratifying the sample in this way ensured that the population from which we sampled covered at least 90 percent of all mid- to upper-level managers and supervisors at the departments and agencies we surveyed. Most of the items on the questionnaire were closed-ended, meaning that depending on the particular item, respondents could choose one or more response categories or rate the strength of their perception on a 5-point extent scale ranging from "no extent" to "very great extent." On most items, respondents also had an option of choosing the response category "no basis to judge/not applicable." A few items had other options, such as "yes," "no," or "do not know," or a 3-point familiarity scale ("not familiar," "somewhat familiar," and "very familiar"). We asked many of the items on the questionnaire in our earlier surveys, though we introduced a number of new items in 2013, including the sections about GPRAMA and program evaluations. For 2017, we added a new question on use of performance information (question 12) and a new question on program evaluation (question 24). Before we administered the survey, our staff—including subject matter experts, a survey specialist, and a research methodologist—reviewed the questions. We also conducted pretests of the new questions with federal managers in several of the 24 CFO Act agencies. We changed the wording of subquestions or added clarifying examples based on pretester feedback. To administer the survey, we e-mailed managers in the sample to notify them of the survey's availability on our website and we included instructions on how to access and complete the survey. To follow up with managers in the sample who did not respond to the initial notice, we e-mailed or called multiple times to encourage survey participation or provide technical assistance, as appropriate. Similar to our last survey, we worked with OPM to obtain the names of the managers and supervisors in our sample, except for those within selected subcomponents whose names were withheld from the EHRI database. Since Foreign Service officials from the Department of State (State) are not in the EHRI database, we drew a sample for that group with assistance from State. We worked with officials at the Department of Homeland Security (DHS) and the Department of the Treasury (Treasury) to gain access to these individuals to maintain continuity of the population of managers surveyed from previous years.
The Department of Justice (DOJ) was concerned about providing identifying information (e.g., names, e-mail addresses, and phone numbers) of federal agents to us, so we administered the current survey to DOJ managers in our sample through DOJ officials. To identify the sample of managers whose names were withheld from the EHRI database, we provided DOJ with the last four digits of Social Security numbers, the subcomponent, duty location, and pay grade information. To ensure that DOJ managers received the same survey administration process as the rest of the managers in our sample to the extent possible, we provided DOJ with text for the survey activation and reminder e-mails similar to the ones we e-mailed to managers at other agencies. DOJ administered the survey to these managers and e-mailed them one reminder to complete the survey. To help determine the reliability and accuracy of the EHRI data elements used to draw our sample of federal managers, we checked the data for reasonableness and the presence of any obvious or potential errors in accuracy and completeness, and reviewed past analyses of the reliability of this database. For example, we identified cases where the managers' names were withheld and contacted OPM to discuss this issue. We also checked the names of the managers in our selected sample provided by OPM with the applicable agency contacts to verify these managers were still employed with the agency. We noted discrepancies when they occurred and excluded them from our population of interest, as applicable. On the basis of these procedures, we believe the data we used from the EHRI database are sufficiently reliable for the purpose of the survey. Of the 4,395 managers selected for the 2017 survey, we found that 388 had retired, separated, or otherwise left the agency, or had some other reason that excluded them from the population of interest. These exclusions included managers whom the agency could not locate and whom we therefore could not ask to participate in the survey. We received usable questionnaires from 2,726 sample respondents, for a weighted response rate of about 67 percent of the remaining eligible sample. The weighted response rate across 23 of the 24 agencies ranged from 57 percent to 82 percent, while DOJ had a weighted response rate of 36 percent. See the supplemental material for each agency's response rate. We conducted a nonresponse bias analysis using information from the survey and sampling frame as available. The analysis confirmed that the tendency to respond to the survey differed by agency and SES status. The analysis also revealed some differences in response propensity by age and GS level; however, the direction and magnitude of the differences on these factors were not consistent across agencies or strata. Our data may be subject to bias from unmeasured sources for which we cannot control. Results, and in particular estimates from agencies with low response rates such as DOJ, should be interpreted with caution because these estimates are associated with a higher level of uncertainty. The overall survey results are generalizable to the government-wide population of managers as described above. The responses of each eligible sample member who provided a usable questionnaire were weighted in the analyses to statistically account for all members of the population.
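To illustrate the weighting approach described above, the following is a minimal sketch in Python. The strata and every count in it are hypothetical, and the actual weighting procedure (including any nonresponse adjustments) is not reproduced here; the sketch only shows the standard base-weight idea, in which each respondent in a stratum stands in for the stratum's population divided by its usable respondents:

# Minimal sketch of stratified base weights and a weighted response rate.
# All strata and counts below are hypothetical, not the survey's actual figures.
strata = {
    # stratum: (population N_h, eligible sample n_h, usable respondents r_h)
    ("Agency A", "SES"):     (400,   90, 60),
    ("Agency A", "non-SES"): (6000, 140, 95),
    ("Agency B", "SES"):     (250,   70, 45),
    ("Agency B", "non-SES"): (4500, 120, 80),
}

# Base weight: each respondent stands in for N_h / r_h population members.
weights = {s: n_pop / r for s, (n_pop, _, r) in strata.items()}

def weighted_share(responses):
    # responses: iterable of (stratum, answered_great_or_very_great_extent) pairs
    total = sum(weights[s] for s, _ in responses)
    hits = sum(weights[s] for s, flag in responses if flag)
    return hits / total

# Weighted response rate: stratum response rates weighted by population share.
pop_total = sum(n_pop for n_pop, _, _ in strata.values())
weighted_rr = sum((r / n_elig) * n_pop
                  for n_pop, n_elig, r in strata.values()) / pop_total

Under this construction, the weighted response rate reflects each stratum's share of the population, which is why it can differ from the simple unweighted ratio of completed questionnaires to eligible sample members.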
All results are subject to some uncertainty or sampling error as well as nonsampling error, including the potential for nonresponse bias as noted above. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. The magnitude of sampling error will vary across the particular surveys, groups, or items being compared because we (1) used complex survey designs that differed in the underlying sample sizes, usable sample respondents, and associated variances of estimates, and (2) conducted different types of statistical analyses. For example, the 2000 and 2007 surveys were designed to produce agency-level estimates and had effective sample sizes of 2,510 and 2,943, respectively. However, the 1997 and 2003 surveys were designed to obtain government-wide estimates only, and their sample sizes were 905 and 503, respectively. Consequently, in some instances, a difference of a certain magnitude may be statistically significant. In other instances, depending on the nature of the comparison being made, a difference of equal or even greater magnitude may not achieve statistical significance. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. The percentage estimates presented in this report based on our sample for the 2017 survey have 95 percent confidence intervals within plus or minus 5.5 percentage points of the estimate itself, unless otherwise noted. We also note in this report when we are 95 percent confident that changes from 1997 or 2013 relative to 2017 are statistically significant. Online supplemental material shows the questions asked on the survey along with the percentage estimates and associated 95 percent confidence intervals for each item for each agency and government-wide. In a few instances, we report estimates with larger margins of error because we deemed them reliable representations of the findings, given the statistical significance of the larger differences between the comparison groups. In all cases, we report the applicable margins of error. In addition to sampling errors, the practical difficulties of conducting any survey may also introduce other types of errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data were entered into a database or analyzed can introduce unwanted variability into the survey results. With this survey, we took a number of steps to minimize these nonsampling errors. For example, our staff with subject matter expertise designed the questionnaire in collaboration with our survey specialists. As noted earlier, the new questions added to the survey were pretested to ensure they were relevant and clearly stated. When the data were analyzed, a second independent analyst on our staff verified the analysis programs to ensure the accuracy of the code and the appropriateness of the methods used for the computer-generated analysis. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key the data into a database and thereby avoiding a source of data entry error.
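As a worked illustration of the 95 percent confidence-interval convention described above (a sketch using the standard large-sample formula; the intervals reported here come from design-based variance estimates that account for the stratification and weighting), a 95 percent confidence interval for an estimated proportion $\hat{p}$ takes the form

\[ \hat{p} \pm z_{0.975}\,\widehat{SE}(\hat{p}), \qquad z_{0.975} \approx 1.96, \]

where $\widehat{SE}(\hat{p})$ is the estimated standard error. For example, an estimate of 50 percent with an estimated standard error of 2.8 percentage points yields an interval of $50 \pm 1.96 \times 2.8 \approx 50 \pm 5.5$ percentage points, consistent with the maximum margin of error noted above.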
To supplement descriptive analysis of the survey questions, we generated an index to gauge government-wide use of performance information. The index, which was identical to one we reported in 2014, averaged managers' responses to 11 questions deemed to relate to the concept of performance information use. The index runs from 1 (corresponding to an average value of "to no extent") to 5 (corresponding to an average value of "to a very great extent"). We used Cronbach's alpha to assess the internal consistency of the scale. (A computational sketch appears at the end of this appendix.) Our government-wide index score weights each agency's contribution equally, and provides a relative measure of the use of performance information over time rather than an absolute indicator of the government-wide level of use of performance information. We conducted this performance audit from January 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
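To make the index computation described above concrete, the following is a minimal sketch in Python on hypothetical data; the actual analysis also applied the survey weights and, as noted, weighted each agency's contribution to the government-wide score equally:

import numpy as np

# Hypothetical responses: rows are respondents, columns are the 11 items,
# each coded 1 ("to no extent") through 5 ("to a very great extent").
# Real scale items, unlike this random draw, would be positively correlated.
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(200, 11)).astype(float)

# Index score per respondent: the mean of the 11 items, so it also runs 1 to 5.
index_scores = items.mean(axis=1)

# Cronbach's alpha, a standard gauge of internal consistency:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the item sum).
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))

On genuinely related items, alpha approaches 1; on the independent random draw above it would be near 0, which is what makes alpha a useful check of whether the 11 items cohere as a single scale.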
Stephens, and Brian Wanlass also made key contributions. Ann Czapiewski and Donna Miller developed the graphics for this report. John Ahern, Divya Bali, Jeff DeMarco, Alexandra Edwards, Ellen Grady, Jyoti Gupta, Erinn L. Sauer, and Katherine Wulff verified the information presented in this report.

Related GAO Products

Prior Summary Reports on the Government Performance and Results Act (GPRA) Modernization Act (GPRAMA) Implementation

Managing for Results: Implementation of GPRA Modernization Act Has Yielded Mixed Progress in Addressing Pressing Governance Challenges. GAO-15-819. Washington, D.C.: September 30, 2015.
Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges. GAO-13-518. Washington, D.C.: June 26, 2013.

Results of the Periodic Surveys on Organizational Performance and Management Issues

Supplemental Material for GAO-17-775: 2017 Survey of Federal Managers on Organizational Performance and Management Issues. GAO-17-776SP. Washington, D.C.: September 29, 2017.
Program Evaluation: Annual Agency-wide Plans Could Enhance Leadership Support for Program Evaluations. GAO-17-743. Washington, D.C.: September 29, 2017.
Managing for Results: Agencies' Trends in the Use of Performance Information to Make Decisions. GAO-14-747. Washington, D.C.: September 26, 2014.
Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges. GAO-13-518. Washington, D.C.: June 26, 2013.
Managing for Results: 2013 Federal Managers Survey on Organizational Performance and Management Issues, an E-supplement to GAO-13-518. GAO-13-519SP. Washington, D.C.: June 26, 2013.
Program Evaluation: Strategies to Facilitate Agencies' Use of Evaluation in Program Management and Policy Making. GAO-13-570. Washington, D.C.: June 26, 2013.
Government Performance: Lessons Learned for the Next Administration on Using Performance Information to Improve Results. GAO-08-1026T. Washington, D.C.: July 24, 2008.
Government Performance: 2007 Federal Managers Survey on Performance and Management Issues, an E-supplement to GAO-08-1026T. GAO-08-1036SP. Washington, D.C.: July 24, 2008.
Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38. Washington, D.C.: March 10, 2004.
Managing for Results: Federal Managers' Views on Key Management Issues Vary Widely Across Agencies. GAO-01-592. Washington, D.C.: May 25, 2001.
Managing for Results: Federal Managers' Views Show Need for Ensuring Top Leadership Skills. GAO-01-127. Washington, D.C.: October 20, 2000.
The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven. GAO/GGD-97-109. Washington, D.C.: June 2, 1997.

Reports Related to GPRAMA Implementation

Federal Programs: Information Architecture Offers a Potential Approach for Inventory Development. GAO-17-739. Washington, D.C.: September 28, 2017.
Managing for Results: Selected Agencies' Experiences in Implementing Strategic Reviews. GAO-17-740R. Washington, D.C.: September 7, 2017.
Federal Reports: OMB and Agencies Should More Fully Implement the Process to Streamline Reporting Requirements. GAO-17-616. Washington, D.C.: July 14, 2017.
Open Innovation: Executive Branch Developed Resources to Support Implementation, but Guidance Could Better Reflect Leading Practices. GAO-17-507. Washington, D.C.: June 8, 2017.
Performance Partnerships: Agencies Need to Better Identify Resource Contributions to Sustain Disconnected Youth Pilot Programs and Data to Assess Pilot Results. GAO-17-208. Washington, D.C.: April 18, 2017.
Open Innovation: Practices to Engage Citizens and Effectively Implement Federal Initiatives. GAO-17-14. Washington, D.C.: October 13, 2016.
Tiered Evidence Grants: Opportunities Exist to Share Lessons from Early Implementation and Inform Future Federal Efforts. GAO-16-818. Washington, D.C.: September 21, 2016.
Performance.gov: Long-Term Strategy Needed to Improve Website Usability. GAO-16-693. Washington, D.C.: August 30, 2016.
Tax Expenditures: Opportunities Exist to Use Budgeting and Agency Performance Processes to Increase Oversight. GAO-16-622. Washington, D.C.: July 7, 2016.
Managing for Results: Agencies Need to Fully Identify and Report Major Management Challenges and Actions to Resolve them in their Agency Performance Plans. GAO-16-510. Washington, D.C.: June 15, 2016.
Managing for Results: OMB Improved Implementation of Cross-Agency Priority Goals, But Could Be More Transparent About Measuring Progress. GAO-16-509. Washington, D.C.: May 20, 2016.
Managing for Results: Greater Transparency Needed in Public Reporting on the Quality of Performance Information for Selected Agencies' Priority Goals. GAO-15-788. Washington, D.C.: September 10, 2015.
Pay for Success: Collaboration among Federal Agencies Would Be Helpful as Governments Explore New Financing Mechanisms. GAO-15-646. Washington, D.C.: September 9, 2015.
Managing for Results: Practices for Effective Agency Strategic Reviews. GAO-15-602. Washington, D.C.: July 29, 2015.
Managing for Results: Agencies Report Positive Effects of Data-Driven Reviews on Performance but Some Should Strengthen Practices. GAO-15-579. Washington, D.C.: July 7, 2015.
Program Evaluation: Some Agencies Reported that Networking, Hiring, and Involving Program Staff Help Build Capacity. GAO-15-25. Washington, D.C.: November 13, 2014.
Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories. GAO-15-83. Washington, D.C.: October 31, 2014.
Managing for Results: Selected Agencies Need to Take Additional Efforts to Improve Customer Service. GAO-15-84. Washington, D.C.: October 24, 2014.
Managing for Results: Enhanced Goal Leader Accountability and Collaboration Could Further Improve Agency Performance. GAO-14-639. Washington, D.C.: July 22, 2014.
Managing for Results: OMB Should Strengthen Reviews of Cross-Agency Goals. GAO-14-526. Washington, D.C.: June 10, 2014.
Managing for Results: Implementation Approaches Used to Enhance Collaboration in Interagency Groups. GAO-14-220. Washington, D.C.: February 14, 2014.
Managing for Results: Leading Practices Should Guide the Continued Development of Performance.gov. GAO-13-517. Washington, D.C.: June 6, 2013.
Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013.
Managing for Results: Agencies Have Elevated Performance Management Roles, but Additional Training Is Needed. GAO-13-356. Washington, D.C.: April 16, 2013.
Managing for Results: Data-Driven Performance Reviews Show Promise But Agencies Should Explore How to Involve Other Relevant Agencies. GAO-13-228. Washington, D.C.: February 27, 2013.
Managing for Results: A Guide for Using the GPRA Modernization Act to Help Inform Congressional Decision Making. GAO-12-621SP. Washington, D.C.: June 15, 2012.
Managing for Results: GAO's Work Related to the Interim Crosscutting Priority Goals under the GPRA Modernization Act. GAO-12-620R. Washington, D.C.: May 31, 2012.
Managing for Results: Opportunities for Congress to Address Government Performance Issues. GAO-12-215R. Washington, D.C.: December 9, 2011.
Why GAO Did This Study

Full implementation of GPRAMA could facilitate efforts to reform the federal government and make it more effective. GPRAMA includes a provision for GAO to review the act's implementation. This report assesses how GPRAMA implementation has affected the federal government's progress in resolving key governance challenges in (1) addressing cross-cutting issues, (2) ensuring performance information is useful and used, (3) aligning daily operations with results, and (4) building a more transparent and open government.

To address these objectives, GAO reviewed statutory requirements, OMB guidance, and GAO's recent work related to GPRAMA implementation and the key governance challenges. GAO also interviewed OMB staff and surveyed a stratified random sample of 4,395 federal managers from 24 agencies on various performance and management topics. With a 67 percent response rate, the survey results are generalizable to the government-wide population of managers.

What GAO Found

The Office of Management and Budget (OMB) and agencies have made some progress in more fully implementing the GPRA Modernization Act (GPRAMA), but GAO's work and 2017 survey of federal managers highlight numerous areas where improvements are needed.

Cross-cutting issues: Various GPRAMA provisions are aimed at addressing cross-cutting issues, such as cross-agency and agency priority goals and related data-driven reviews of progress towards those goals. To ensure alignment with the current administration's priorities, OMB's 2017 guidance removed the priority status of those goals, which stopped quarterly data-driven reviews and related public progress reports until new goals are published. OMB plans to resume implementation of these provisions in February 2018. GPRAMA also requires OMB and agencies to implement an inventory of federal programs, which could help decision makers better identify and manage fragmentation, overlap, and duplication. OMB and agencies implemented the inventory once, in May 2013. In October 2014, GAO found several issues limited the usefulness of that inventory. Since then, OMB has postponed updating the inventory, citing among other reasons the passage of subsequent laws. OMB has yet to develop a systematic approach for resuming implementation of the inventory and specific time frames for doing so. A systematic approach to developing the inventory could help ensure it provides useful information for decision makers and the public.

Performance information: Survey results show federal managers generally reported no improvements in their use of performance information in decision making for various management activities, or practices that can enhance such use, since GAO's 2013 survey. For example, the use of performance information to streamline programs to reduce duplicative activities (an estimated 33 percent in 2017) is statistically significantly lower relative to 2013 (44 percent). In contrast, managers who were familiar with and whose programs were subject to quarterly data-driven reviews reported that those reviews were used to make progress toward agency priority goals. Identifying and sharing practices to expand the use of such reviews—for other performance goals and at lower levels within agencies—could lead to increased use of performance information.

Daily operations: Agencies have made progress in developing results-oriented cultures but need to take additional actions.
GAO's past work found that high-performing organizations use performance management systems to help individuals connect their daily activities to organizational goals. In 2017, about half of federal managers reported using performance information when setting expectations with employees (no change from GAO's last survey in 2013).

Transparent and open government: GAO's past work identified a number of needed improvements to Performance.gov, the central government-wide website required by GPRAMA. The site is to provide quarterly updates on priority goals in effect through September 2017, but those updates stopped in December 2016. According to OMB, the existing information for cross-agency priority goals is the final update, and agencies should publish final updates on their priority goals in annual performance reports. Performance.gov does not provide users with this information, thereby limiting the transparency and accessibility of those results.

What GAO Recommends

In addition to following through on plans to resume implementation of key GPRAMA provisions, GAO recommends that OMB (1) consider a systematic approach to developing the program inventory, (2) revise guidance to provide specific time frames for inventory implementation, (3) identify and share practices for expanding the use of data-driven reviews, and (4) update Performance.gov to explain that reporting on priority goals was suspended and provide the location of final progress updates. OMB staff agreed with these recommendations.
Background

The mission of IRS, a bureau within the Department of the Treasury, is to (1) provide America's taxpayers top quality service by helping them understand and meet their tax responsibilities and (2) enforce the law with integrity and fairness to all. In carrying out its mission, IRS annually collects over $3 trillion in taxes from millions of taxpayers, and manages the distribution of over $400 billion in refunds. To guide its future direction, the agency has two strategic goals: (1) deliver high quality and timely service to reduce taxpayer burden and encourage voluntary compliance; and (2) effectively enforce the law to ensure compliance with tax responsibilities and combat fraud.

IRS Relies on Major IT Investments for Tax Processing

Effective management of IT is critical for agencies to achieve successful outcomes. This is particularly true for IRS, given the role of IT in enabling the agency to carry out its mission and responsibilities. For example, IRS relies on information systems to process tax returns; account for tax revenues collected; send bills for taxes owed; issue refunds; assist in the selection of tax returns for audit; and provide telecommunications services for all business activities, including the public's toll-free access to tax information.

For fiscal year 2016, IRS was pursuing 23 major and 114 non-major IT investments to carry out its mission. According to the agency, it expended approximately $2.7 billion on these investments during fiscal year 2016, including $1.9 billion, or 70 percent, for operations and maintenance activities, and approximately $800 million, or 30 percent, for development, modernization, and enhancement.

We have previously reported on a number of the agency's major investments, to include the following investments in development, modernization, and enhancement:

The Affordable Care Act investment encompasses the planning, development, and implementation of IT systems needed to support tax administration responsibilities associated with key provisions of the Patient Protection and Affordable Care Act. IRS expended $253 million on this investment in fiscal year 2016.

Customer Account Data Engine 2 is being developed to replace the Individual Master File investment, IRS's authoritative data source for individual tax account data. A major component of the program is a modernized database for all individual taxpayers that is intended to provide the foundation for more efficient and effective tax administration and help address financial material weaknesses for individual taxpayer accounts. Customer Account Data Engine 2 data is also expected to be made available for access by downstream systems, such as the Integrated Data Retrieval System for online transaction processing by IRS customer service representatives. IRS expended $182.6 million on this investment in fiscal year 2016.

The Return Review Program is IRS's system of record for fraud detection. As such, it is intended to enhance the agency's capabilities to detect, resolve, and prevent criminal and civil tax noncompliance. In addition, it is intended to allow analysis and support of complex case processing requirements for compliance and criminal investigation programs during prosecution, revenue protection, accounts management, and taxpayer communications processes. According to IRS, as of May 2017, the system has helped protect over $4.5 billion in revenue. IRS expended $100.2 million on this investment in fiscal year 2016.
We have also reported on the following investments in operations and maintenance:

Mainframes and Servers Services and Support provides for the design, development, and deployment of server, middleware, and large systems and enterprise storage infrastructures, including supporting systems software products, databases, and operating systems. This investment has been operational since 1970. IRS expended $499.4 million on this investment in fiscal year 2016.

Telecommunications Systems and Support provides for IRS's network infrastructure services such as network equipment, video conference service, enterprise fax service, and voice service for over 85,000 employees at about 1,000 locations. According to IRS, the investment supports the delivery of services and products to employees, which translates into service to taxpayers. IRS expended $336.4 million on this investment in fiscal year 2016.

Individual Master File is the authoritative data source for individual taxpayer accounts. Using this system, accounts are updated, taxes are assessed, and refunds are generated as required during each tax filing period. Virtually all IRS information system applications and processes depend on output, directly or indirectly, from this data source. IRS expended $14.3 million on this investment in fiscal year 2016.

GAO, Congress, and the Administration Have Highlighted the Need for Government-wide Improvements for IT Acquisitions and Operations

In fiscal year 2017, the federal government planned to spend more than $89 billion for IT that is critical to the health, economy, and security of the nation. However, we have reported that prior IT expenditures have often resulted in significant cost overruns, schedule delays, and questionable mission-related achievements. In light of these ongoing challenges, in February 2015, we added improving the management of IT acquisitions and operations to our list of high-risk areas for the federal government. This area highlights several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies' operational investments; (5) data center consolidation; and (6) efforts to streamline agencies' portfolios of IT investments. We noted that implementation of these initiatives has been inconsistent and more work remains to demonstrate progress in achieving acquisitions and operations outcomes.

Between fiscal years 2010 and 2015, we made about 800 recommendations related to this high-risk area to the Office of Management and Budget and agencies. As of September 2017, about 54 percent of these recommendations had been implemented.

The Federal Information Technology Acquisition Reform provisions (commonly referred to as FITARA), enacted as a part of the Carl Levin and Howard P. 'Buck' McKeon National Defense Authorization Act for Fiscal Year 2015, aimed to improve federal IT acquisitions and operations and recognized the importance of the initiatives mentioned above by incorporating certain requirements into the law. For example, among other things, the act requires the Office of Management and Budget to publicly display investment performance information and review federal agencies' IT investment portfolios.

The current administration has also initiated additional efforts aimed at improving federal IT.
Specifically, in March 2017, the administration established the Office of American Innovation, which has a mission to, among other things, make recommendations to the President on policies and plans aimed at improving federal government operations and services and modernizing federal IT. Further, in May 2017, the administration established the American Technology Council, which has a goal of helping to transform and modernize federal agency IT and how the federal government uses and delivers digital services. Recently this council worked with several agencies to develop a draft report on modernizing IT in the federal government. The council released the draft report for public comment in August 2017.

GAO Reviews Have Identified Weaknesses with IRS's Management of Its Modernization Activities and Legacy Systems

In reviews that we have undertaken over the past several years, we have identified various opportunities for the IRS to improve the management of its IT investments. These reviews have identified a number of weaknesses with the agency's reporting on the performance of its modernization investments to Congress and other stakeholders. In this regard, we have pointed out that information on investments' performance in meeting cost, schedule, and scope goals is critical to determining the agency's progress in completing key IT investments. We have also stressed the importance of the agency addressing weaknesses in its process for prioritizing modernization activities. Accordingly, we have made a number of related recommendations, which IRS is in various stages of implementing.

In our June 2012 report on IRS's performance in meeting cost, schedule, and scope goals for selected investments, we noted that, while IRS reported on the cost and schedule of its major IT investments, the agency did not have a quantitative measure of scope—a measure that shows whether these investments delivered planned functionality. We stressed that having such a measure is a good practice as it provides information about whether an investment has delivered the functionality that was paid for. Accordingly, we recommended that the agency develop a quantitative measure of scope for its major IT investments, to have more complete information on the performance of these investments. In response, IRS started developing a quantitative measure of scope for selected investments in December 2015 and has been working to gradually expand the measure to other investments.

In April 2013, based on another review of IRS's performance in meeting cost, schedule, and scope goals, we reported that there were weaknesses, to varying degrees, in the reliability of IRS's investment performance information. Specifically, we found that IRS had not updated investment cost and schedule variance information with actual amounts on a timely basis (i.e., within the 60-day time frame required by the Department of Treasury) in about 25 percent of the activities associated with the investments selected in our review. In addition, the agency had not specified how project managers should estimate the cost and schedule performance of ongoing projects. As a result of these findings, we recommended that IRS ensure that its projects consistently follow guidance for updating performance information 60 days after completion of an activity and develop and implement guidance that specifies best practices to consider when estimating ongoing projects' progress in meeting cost and schedule goals.
IRS agreed with, and subsequently addressed, the recommendation related to updating performance information on a timely basis. However, the agency partially disagreed with the recommendation to develop guidance on estimating progress in meeting cost and schedule goals for ongoing projects. In this regard, we had suggested the use of earned value management data as a best practice to determine projected cost and schedule amounts. IRS did not agree with the use of the technique, stating that it was not part of the agency's current program management processes and that the cost and burden to use earned value management would outweigh the value added. We disagreed with the agency's view of earned value management because best practices have found that its value generally outweighs the cost and burden of its implementation (although we suggested it as one of several examples of practices that could be used to determine projected amounts). We also stressed that implementing our recommendation would help improve the reliability of reported cost and schedule variance information, and that IRS had flexibility in determining which best practices to use to calculate projected amounts. For those reasons, we maintained that our recommendation was warranted. However, IRS has yet to address the recommendation.

We reported in April 2014 that the cost and schedule performance information that IRS reported for its major investments was for the fiscal year only. We noted that this reporting would be more meaningful if supplemented with cumulative cost and schedule performance information in order to better indicate progress toward meeting goals. In addition, we noted that the reported variances for selected investments were not always reliable because the estimated and actual cost and schedule amounts on which they depended had not been consistently updated in accordance with Department of Treasury reporting requirements, as we had previously recommended. We recommended that IRS report more comprehensive and reliable cost and schedule information for its major investments. The agency agreed with our recommendation and said it believed it had addressed the recommendation in its quarterly reports to Congress. We disagreed with IRS's assertion, however, noting that, while the report includes cumulative costs, they are cumulative for the fiscal year, not for the investment or investment segment as we recommended, and they therefore do not account for cost variances from prior fiscal years. We therefore maintained our recommendation.

In February 2015, after assessing the status and plans of the Return Review Program and Customer Account Data Engine 2, we reported that these investments had experienced significant variances from initial cost, schedule, and scope plans; yet, IRS did not include these variances in its reports to Congress because the agency had not addressed our prior recommendations. Specifically, IRS had not addressed our recommendation to report on how delivered scope compared to what was planned, and it also did not address guidance for determining projected cost and schedule amounts, or the reporting of cumulative cost and schedule performance information. We stressed that implementing these recommendations would improve the transparency of congressional reporting so that Congress has the appropriate information needed to make informed decisions.
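To illustrate the earned value technique referenced above, the sketch below computes the standard cost and schedule indices and a projected estimate at completion from hypothetical figures. It shows the general method only; it is not IRS's or GAO's actual calculation, and all dollar amounts are invented.

```python
def earned_value_summary(planned: float, earned: float,
                         actual: float, budget: float) -> dict:
    """Standard earned value metrics.
    planned: budgeted cost of work scheduled (planned value)
    earned:  budgeted cost of work performed (earned value)
    actual:  actual cost of work performed
    budget:  budget at completion for the investment segment
    """
    cpi = earned / actual    # cost performance index (<1 means over cost)
    spi = earned / planned   # schedule performance index (<1 means behind)
    return {
        "cost_variance": earned - actual,
        "schedule_variance": earned - planned,
        "cpi": cpi,
        "spi": spi,
        "estimate_at_completion": budget / cpi,  # projected total cost
    }

# Hypothetical segment: $50M of work planned, $45M earned, $60M spent,
# against a $200M budget at completion.
print(earned_value_summary(planned=50e6, earned=45e6, actual=60e6, budget=200e6))
```

On these invented numbers, the cost performance index of 0.75 projects a roughly $267 million estimate at completion, which is the kind of forward-looking variance information the recommendation sought.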
We made additional recommendations for the agency to improve the reliability and reporting of investment performance information and management of selected major investments. IRS agreed with the recommendations and has since addressed them.

In our most recent report in June 2016, we assessed IRS's process for determining its funding priorities for both modernization and operations. We found that the agency had developed a structured process for allocating funding to its operations activities consistent with best practices, which specify that an organization should document policies and procedures for selecting new and reselecting ongoing IT investments, and include criteria for making selection and prioritization decisions. However, IRS did not have a similarly structured process for prioritizing its modernization activities, to which the agency allocated hundreds of millions of dollars for fiscal year 2016. Agency officials stated that discussions were held to determine the modernization efforts that were of highest priority to meet IRS's future state vision and technology roadmap. The officials reported that staffing resources and lifecycle stage were considered, but there were no formal criteria for making final determinations. Senior IRS officials said they did not have a structured process for the selection and prioritization of business systems modernization activities because the projects were established and there were fewer competing activities than for operations support. Nevertheless, we stressed that, while there may have been fewer competing activities, a structured, albeit simpler, process that is documented and consistent with best practices would provide transparency into the agency's needs and priorities for appropriated funds. We concluded that such a process would better assist Congress and other decision makers in carrying out their oversight responsibilities. Accordingly, we recommended that IRS develop and document its processes for prioritizing IT funding. The agency agreed with the recommendations and has taken steps to address them.

Further, we found that IRS had reported complete performance information for two of the six selected investments in our review, to include a measure of progress in delivering scope, which we have been recommending since 2012. However, the agency did not always use best practices for determining the amount of work completed by its own staff, resulting in inaccurate reports of work performed. Consequently, we recommended that IRS modify its processes for determining the work performed by its staff. The agency disagreed with the recommendation, stating that the costs involved would outweigh the value provided. Specifically, IRS stated that modifying the use of the level of effort measure would equate to a certified earned value management system, which would add immense burden on IRS's programs on various fronts and would outweigh the value it provides. However, we did not specify the use of an earned value management system in our report and believed other methods could be used to more reliably measure work performed. In addition, we believed that it is a reasonable expectation for IRS to reliably determine the actual work completed, as opposed to assuming that work is always completed as planned since, as noted in our report, 22 to 100 percent of the work for selected projects was performed by IRS staff. Accordingly, we maintained that the recommendation was still warranted.
IRS Faces Challenges with Managing Its Aging Legacy Systems

Our work has also emphasized the importance of IRS more effectively managing its aging legacy systems. For example, in November 2013, we reported on the extent to which 10 of the agency's large investments had undergone operational analyses—a key performance evaluation and oversight mechanism required by the Office of Management and Budget to ensure investments in operations and maintenance continue to meet agency needs. We noted that IRS's Mainframe and Servers Services and Support had not had an operational analysis for fiscal year 2012. As a result, we recommended that the Secretary of Treasury direct appropriate officials to perform an operational analysis for the investment, including ensuring that the analysis addressed the 17 key factors identified in the Office of Management and Budget's guidance for performing operational analyses. The department did not comment on our recommendation but subsequently implemented it.

In addition, we previously reported on legacy IT systems across the federal government, noting that these systems were becoming increasingly obsolete and that many of them used outdated software languages and hardware parts that were unsupported. As part of that work, we noted that the Department of the Treasury used assembly language code—a computer language initially used in the 1950s and typically tied to the hardware for which it was developed—and Common Business Oriented Language (COBOL)—a programming language developed in the late 1950s and early 1960s—to program its legacy systems. It is widely known that agencies need to move to more modern, maintainable languages, as appropriate and feasible. For example, the Gartner Group, a leading IT research and advisory company, has reported that organizations using COBOL should consider replacing the language and, in 2010, noted that there should be a shift in focus to using more modern languages for new products. The use of COBOL presents challenges for agencies such as IRS given that procurement and operating costs associated with this language will steadily rise, and because fewer people with the proper skill sets are available to support the language.

Further, we reported that IRS's Individual Master File was over 50 years old and, although IRS was working to modernize it, the agency did not have a time frame for completing the modernization or replacement. Thus, we recommended that the Secretary of the Treasury direct the Chief Information Officer to identify and plan to modernize and replace legacy systems, as needed, and consistent with the Office of Management and Budget's draft guidance on IT modernization, including time frames, activities to be performed, and functions to be replaced or enhanced. The department had no comments on our recommendation. We will continue to follow up with the agency to determine the extent to which this recommendation has been addressed. In addition, we have ongoing work identifying risks associated with IRS's legacy IT systems, and the agency's management of these risks.

In summary, IRS faces longstanding challenges in managing its IT systems. While effective IT management has been a prevalent issue throughout the federal government, it is especially concerning at IRS given the agency's extensive reliance on IT to carry out its mission of providing service to America's taxpayers in meeting their tax obligations.
Thus, it is important that the agency establish, document, and implement policies and procedures for prioritizing its modernization efforts, as we have recently recommended, and provide Congress with accurate information on progress in delivering such modernization efforts. In addition, we have emphasized the need for IRS to address the inherent challenges associated with aging legacy systems so that it does not continue to maintain investments that have outlived their effectiveness and are consuming resources that outweigh their benefits. Continued attention to implementing our recommendations will be vital to helping IRS ensure the effective management of its efforts to modernize its aging IT systems and ensure its multibillion dollar investment in IT is meeting the needs of the agency.

Chairman Buchanan, Ranking Member Lewis, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Sabine Paul (Assistant Director), Rebecca Eyler, and Bradley Roach (Analyst in Charge).

Related GAO Products

IRS 2013 Budget: Continuing to Improve Information on Program Costs and Results Could Aid in Resource Decision Making, GAO-12-603 (Washington, D.C.: June 8, 2012)
Information Technology: Consistently Applying Best Practices Could Help IRS Improve the Reliability of Reported Cost and Schedule Information, GAO-13-401 (Washington, D.C.: April 17, 2013)
Information Technology: Agencies Need to Strengthen Oversight of Multibillion Dollar Investments in Operations and Maintenance, GAO-14-66 (Washington, D.C.: November 6, 2013)
Information Technology: IRS Needs to Improve the Reliability and Transparency of Reported Investment Information, GAO-14-298 (Washington, D.C.: April 2, 2014)
Information Technology: Management Needs to Address Reporting of IRS Investments' Cost, Schedule, and Scope Information, GAO-15-297 (Washington, D.C.: February 25, 2015)
Information Technology: Federal Agencies Need to Address Aging Legacy Systems, GAO-16-468 (Washington, D.C.: May 25, 2016)

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The IRS, a bureau of the Department of the Treasury, relies extensively on IT to annually collect more than $3 trillion in taxes, distribute more than $400 billion in refunds, and carry out its mission of providing service to America's taxpayers in meeting their tax obligations. For fiscal year 2016, IRS expended approximately $2.7 billion for IT investments, 70 percent of which was allocated for operational systems.

GAO has long reported that the effective and efficient management of IT acquisitions and operational investments has been a challenge in the federal government. Accordingly, in February 2015, GAO introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. GAO has also reported on challenges IRS has faced in managing its IT acquisitions and operations, and identified opportunities for IRS to improve the management of these investments.

In light of these challenges, GAO was asked to testify about IT management at IRS. To do so, GAO summarized its prior work regarding IRS's IT management, including the agency's management of operational, or legacy, IT systems.

What GAO Found

GAO has issued a series of reports in recent years which have identified numerous opportunities for the Internal Revenue Service (IRS) to improve the management of its major acquisitions and operational, or legacy, information technology (IT) investments. For example:

In June 2016, GAO reported that IRS had developed a structured process for allocating funding to its operations activities, consistent with best practices; however, GAO found that IRS did not have a similarly structured process for prioritizing modernization activities to which the agency allocated hundreds of millions of dollars for fiscal year 2016. Instead, IRS officials stated that they held discussions to determine the modernization efforts that were of highest priority to meet IRS's future state vision and technology roadmap, and considered staffing resources and lifecycle stage. However, they did not use formal criteria for making final determinations. GAO concluded that establishing a structured process for prioritizing modernization activities would better assist Congress and other decision makers in ensuring that the right priorities are funded. Accordingly, GAO recommended that IRS establish, document, and implement policies and procedures for prioritizing modernization activities. IRS agreed with the recommendation and has efforts underway to address it.

In the same report, GAO noted that IRS could improve the accuracy of reported performance information for key development investments to provide Congress and other external parties with pertinent information about the delivery of these investments. This included investments such as Customer Account Data Engine 2, which IRS is developing to replace its 50-year-old repository of individual tax account data, and the Return Review Program, IRS's system of record for fraud detection. GAO recommended that IRS take steps to improve reported investment performance information. IRS agreed with the recommendation, and has efforts underway to address it.

In a May 2016 report on legacy IT systems across the federal government, GAO noted that IRS used assembly language code to program key legacy systems. Assembly language code is a computer language initially used in the 1950s that is typically tied to the hardware for which it was developed; it has become difficult to code and maintain.
One investment that used this language is IRS's Individual Master File, which serves as the authoritative data source for individual taxpayer accounts. GAO noted that, although IRS has been working to replace the Individual Master File, the bureau did not have time frames for its modernization or replacement. Therefore, GAO recommended that the Department of Treasury identify and plan to modernize and replace this legacy system, consistent with applicable guidance from the Office of Management and Budget. The department had no comments on the recommendation.

What GAO Recommends

GAO has made a number of recommendations to IRS to improve its management of IT acquisitions and operations. IRS has generally agreed with the recommendations and is in various stages of implementing them.
Background

Grade-Crossing-Safety Trends

Grade-crossing safety has improved significantly since 1975, but since 2009, the number of crashes and fatalities at grade crossings has plateaued (see fig. 1). The yearly number of grade-crossing crashes declined from 12,126 in 1975 to 2,117 in 2017. In that time frame, fatalities dropped from 917 to 273. The most significant reductions in grade-crossing crashes and fatalities were achieved from 1975 to 1985, when states closed or improved the most dangerous crossings. Grade-crossing safety continued to improve until the mid-2000s, though at a slower rate. Since 2009, the number of grade-crossing crashes and fatalities remains at around 2,100 crashes and 250 fatalities a year. These fatalities typically make up less than one percent of all highway-related fatalities. The decrease in crashes and fatalities occurred as the volume of train and highway traffic generally increased over the years. FRA expects the traffic volumes to continue to increase and has expressed concern that grade-crossing crashes and fatalities may also increase.

The Section 130 Program

As a set-aside portion of FHWA's much larger Highway Safety Improvement Program (HSIP), the Section 130 Program provides funds to state DOTs for the elimination of hazards at highway-rail grade crossings. States determine what improvements need to be made at grade crossings. FHWA has oversight responsibilities regarding the use of federal funds as part of its administration of federal-aid highway programs and funding, including HSIP funds. FHWA uses a statutory formula to distribute to states Section 130 Program funds, which averaged $235 million per year during the last 10 years (fiscal years 2009 through 2018). Section 130 Program projects are funded at a 90 percent federal share, with the state or the roadway authority funding the remaining 10 percent. States have 4 years to obligate their program funds before they expire, meaning that in any given fiscal year, states can obligate funds appropriated in that year as well as any unobligated funds from the previous 3 fiscal years. In addition, states may choose to combine funds from multiple years to fund relatively expensive projects.

The Section 130 Program's requirements direct states to establish an implementation schedule for grade-crossing-safety improvement projects that, at a minimum, include warning signs for all public grade crossings. Grade crossings are generally categorized as "active" or "passive" depending on the type of traffic control devices that are present. As of July 2018, according to FRA's National Highway-Rail Crossing Inventory, there were approximately 68,000 public grade crossings with electronic, or active, traffic control devices in the United States. Another approximately 58,000 public grade crossings have passive traffic-control devices, which include signs and supplementary pavement markings.

The requirements also specify that at least 50 percent of Section 130 Program funding must be dedicated to the installation of protective devices at grade crossings, including traffic control devices. States can use remaining program funds for any hazard elimination project. States may also use program funds to improve warning signs and pavement markings or to improve the way the roadway aligns with the tracks (e.g., to ensure low-clearance vehicles do not get stuck on the tracks). In addition, states can use up to 2 percent of the funds to improve their grade-crossing inventories and to collect and analyze data.
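As a simple illustration of the funding rules just described, the sketch below applies the 90 percent federal share to a hypothetical project and checks the 50 percent protective-device set-aside for a hypothetical program year; all dollar amounts are invented.

```python
def section_130_shares(project_cost: float) -> tuple:
    """Split a project's cost into the 90 percent federal share and the
    10 percent match from the state or roadway authority."""
    federal = 0.9 * project_cost
    return federal, project_cost - federal

# Hypothetical $400,000 gate-installation project.
federal, local_match = section_130_shares(400_000)
print(f"Federal share: ${federal:,.0f}; local match: ${local_match:,.0f}")

# Hypothetical program year: confirm at least half of obligations
# went to protective devices, as the program requires.
obligations = {"protective_devices": 14_000_000, "other_hazard_elimination": 10_000_000}
share = obligations["protective_devices"] / sum(obligations.values())
print(f"Protective-device share: {share:.0%} (must be at least 50%)")
```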
See figure 2 for examples of the types of projects eligible for Section 130 Program funds and graphical depictions of grade crossings before and after safety improvements have been made.

The Federal Role in Grade-Crossing Safety

FHWA and FRA are the primary agencies responsible for safety at grade crossings, and they both play key—yet distinct—roles. FHWA oversees the Section 130 Program and monitors states' uses of program funds through 52 division offices located in each state, the District of Columbia, and Puerto Rico and through headquarters staff in Washington, D.C. In addition, FHWA's division staff reviews states' processes for prioritizing and selecting grade-crossing-safety improvement projects. FHWA does not evaluate the appropriateness of individual grade-crossing projects, but instead helps states determine that projects meet program eligibility requirements. Division staff assists in the implementation of Section 130 Program state-administered projects, and they may participate in state-DOT-led, on-site reviews of grade crossings under consideration for Section 130 Program projects. FHWA headquarters staff is responsible for FHWA-wide initiatives, such as working with stakeholders to establish standards for traffic control devices and systems at grade crossings and for engineering oversight of state-administered safety improvement projects.

FRA provides safety oversight of both freight and passenger railroads by:

collecting and analyzing data;

issuing and enforcing numerous safety regulations, including on grade-crossings' warning systems;

conducting focused inspections, audits, and accident investigations; and

providing technical assistance to railroads and other stakeholders.

Specifically, FRA oversees rail safety through eight regional offices and through headquarters staff in Washington, D.C. Regional staff monitor railroads' compliance with federal safety regulations through inspections and provide technical assistance and guidance to states. In 2017, FRA created a new discipline for grade-crossing safety and is hiring new grade-crossing inspectors. These inspectors conduct field investigations, identify regulatory defects and violations, recommend civil penalty assessments when appropriate, and may participate in state-DOT-led teams that conduct on-site reviews of grade crossings to evaluate potential safety improvements. According to FRA documentation, FRA's new inspectors will also work with a variety of stakeholders to institute new types of training, explore new safety concepts and technologies, and assist in the development of new or modified highway-rail grade-crossing-safety regulations, initiatives, and programs. The inspectors will also work with FHWA and other DOT operating administrations in a cooperative effort to improve grade-crossing safety.

FRA regional staff also investigates select railroad crashes, including those at grade crossings, to determine root causation and any contributing factors, so that railroads can implement corrective actions. FRA headquarters staff develops analytical tools for states to use to prioritize grade-crossing projects. In addition, headquarters staff manages research and development to support improved railroad safety, including at grade crossings. FRA's Office of Railroad Safety maintains the National Highway-Rail Crossing Inventory database and the Railroad Accident/Incident Reporting System on grade-crossing crashes.
Both states and railroads submit information to FRA's crossing inventory, which is designed to contain information on every grade crossing in the nation. Railroads submit information such as train speed and volume; states submit information such as highway speed limits and average annual daily traffic. The Rail Safety Improvement Act of 2008 added requirements for both railroads and states to periodically update the inventory; however, the Moving Ahead for Progress in the 21st Century Act (MAP-21) repealed a provision providing DOT authority to issue implementing regulations that would govern states' reporting to the inventory. According to FRA officials, while FRA's regulations do not require states to report the information, FRA encourages them to do so. FRA regulations require railroads to report and update their information in the inventory every 3 years or sooner in some instances, such as if new warning devices are installed or the grade crossing is closed.

FRA's accident system contains details about each grade-crossing accident that has occurred. In addition to submitting immediate reports of fatal grade-crossing crashes, railroads are required to submit accident reports within 30 days after the end of the month in which the accident occurred and describe conditions at the time of the accident (e.g., visibility and weather); information on the grade crossing (e.g., type of warning device); and information on the driver (e.g., gender and age).

FRA Has Focused Research on Understanding and Addressing Risky Behavior by Drivers at Grade Crossings

Research Sought to Identify Risk Factors at Grade Crossings and Understand Driver Behavior

In its role overseeing grade-crossing safety, FRA has sponsored a number of research efforts to better understand the causes of grade-crossing crashes and identify potential ways to improve engineering, education, and enforcement efforts. For example, FRA sponsored an in-depth data analysis of grade-crossing crashes to better identify which crossing characteristics increase the risk of an accident. The report, issued in 2017, found that the volumes of train and vehicle traffic at a crossing are the biggest predictors of grade-crossing crashes. Changes in vehicle and train traffic therefore affect the annual number of grade-crossing crashes. For example, as highway traffic decreased in 2008, possibly due to the economic recession and higher gas prices, so too did the number of grade-crossing crashes. As previously noted, FRA expects that the number of grade-crossing crashes will likely grow with anticipated increases in future train and highway traffic. As discussed below, vehicle and train volume are included in the U.S. DOT Accident Prediction Model, which some states use to select grade-crossing improvement projects. According to FRA officials, FRA is using the results of this recent in-depth data analysis to, in part, evaluate whether additional risk factors, such as the number of male drivers or trains carrying toxic materials, should be added to the model.

FRA has targeted other research into understanding driver behavior at grade crossings, which is the leading cause of crashes. According to FRA's accident data, in 2017, 71 percent of fatal crashes at public grade crossings occurred at those with gates. In 2004, the DOT Inspector General (IG) reported that 94 percent of grade-crossing crashes from 1994 to 2003 could be attributed to risky driver behavior or poor judgement.
State officials we spoke with explained that drivers may become impatient waiting at a grade crossing and decide to go around the gates. Drivers may also line up over the grade crossing in heavy vehicular traffic, and be unable to exit before the gates come down. See figure 3 for examples of risky driver behavior at grade crossings.

To better understand driver behavior, FRA sponsored a John A. Volpe National Transportation Systems Center (Volpe Center) study that recorded and analyzed drivers' actions as they approached grade crossings. The researchers found that almost half of drivers were doing another task, such as eating, and over a third did not look in either direction while approaching passive grade crossings. We have previously reported, and many stakeholders we interviewed agreed, that in light of inappropriate driver behavior, technological solutions alone may not fully resolve safety issues at grade crossings. In addition, public-education and law-enforcement efforts can augment the effectiveness of technological solutions. According to FRA officials, they shared information on driver education with DOT's National Highway Traffic Safety Administration (NHTSA) as NHTSA works more closely with states on driver education manuals. According to DOT officials, NHTSA updates its driver education materials every 2–3 years and plans to consider including grade-crossing-safety materials in the next versions.

FRA Works with States to Research New Safety Measures to Address Risky Behavior at Grade Crossings

FRA is also working with states and localities to research and develop new protective devices and other safety measures targeted at improving driver behavior at grade crossings. As most fatal crashes happen at grade crossings already equipped with gates, FRA and state and local agencies are exploring whether additional safety measures can improve safety at those locations. For example, in 2016 and 2017, FRA's Grade Crossing Task Force worked with the Volpe Center and the City of Orlando to test whether photo enforcement at grade crossings could reduce risky driver behavior. The City of Orlando installed automated photo-enforcement devices at a grade crossing, and instead of issuing fines to drivers who had violated its warning devices, sent drivers a warning notice and educational safety materials. Eight months after the photo-enforcement system was installed, grade crossing violations decreased by 15 percent. While FRA judged these enforcement efforts successful at changing driver behavior, a 2015 FRA whitepaper noted that photo enforcement equipment is costly—on average costing over $300,000 per crossing to install and operate for 2 years—and may not be cost-effective for most grade crossings. FRA found that due to costs and state laws prohibiting photo enforcement, only two photo-enforcement cameras were currently in operation at grade crossings across the country.

States, localities, and FHWA are also exploring whether new types of pavement markings at grade crossings can improve driver behavior. According to DOT officials, FHWA is working with two states to develop new cross-hatch pavement markings for grade crossings that would comply with the Manual on Uniform Traffic Control Devices, similar to the "don't block the box" type pavement markings used in intersections. FHWA also worked with a city to test the use of in-roadway lights to delineate the crossing. (See fig. 4).
FRA and state DOTs are also trying to improve pedestrian safety at grade crossings by developing new safety measures. Grade-crossing accidents involving pedestrians are less frequent than those involving automobiles at grade crossings but have a higher fatality rate. While pedestrians were involved in only 9 percent of accidents at public crossings in 2017, almost 40 percent of fatal grade-crossing accidents involved pedestrians. To try to improve pedestrian safety, in 2012 the Volpe Center worked with New Jersey Transit to study whether adding additional pedestrian gate skirts—hanging gates that further block a crossing (see fig. 5)—would prevent people from ducking under the gates. The Volpe Center reported that these new gates had mixed success. While incidents of people going under and around the gates decreased, more people chose to cross the tracks in the street rather than at the sidewalk.

Finally, FRA is exploring new automated and connected vehicle technologies that could reduce risky driver behavior at grade crossings. FRA, FHWA, and officials from one state we interviewed said they anticipate that such technology will be critical to further improving safety. Specifically, FRA and FHWA are coordinating with DOT's Intelligent Transportation Systems Joint Program Office to develop pilot technology that would enable crossing infrastructure or trains to communicate wirelessly with vehicles. Vehicles can use this information to warn the driver that a crash or violation is imminent, or integrate with onboard active safety systems. According to FRA officials, they completed a proof of concept in 2013 and completed and tested a prototype of the technology in 2017. DOT officials said that DOT does not have a time frame for when automakers might begin incorporating such connected vehicle technologies and noted that retrofitting older cars with new equipment will likely make this a long-term effort.

FRA shares information on its research in various ways with state DOTs, because states are responsible for deciding which safety measures to install at grade crossings. Specifically, FRA and FHWA jointly hold quarterly webinars with stakeholders, including state DOT officials, and conduct presentations at highway-rail safety workshops. Information on safety measures such as grade-crossing devices, signs, and markings is also included in the Railroad-Highway Grade Crossing Handbook. According to DOT officials, the handbook was developed jointly by FHWA and FRA. The last version of the handbook was updated in 2007 and includes some out-of-date information. FRA and FHWA officials said they began working on an update in 2017, but missed the July 2018 target completion date. According to FHWA officials, updating the handbook is a complex undertaking that has taken more time than they anticipated due to the extensive collaboration required among stakeholders. FHWA officials said they anticipate completing the update during the spring of 2019.

States Use a Risk-Based Approach for Project Selection and May Use FRA Data

States Consider Risk when Identifying Grade-Crossing Improvement Projects

The risk of crashes at public grade crossings within a state factors into states' selection of over 1,000 new Section 130 Program projects nationally each fiscal year. FHWA requires states to develop a grade crossing program that considers relative risk. FHWA officials said they review the methods that states use to select projects to ensure that risk is considered.
According to a 2016 academic study of 50 states, most states use mathematical formulas, or "accident prediction models," to help assess risk and identify grade crossings for potential projects. More specifically, these accident prediction models use factors such as grade-crossing characteristics and accident history to rank grade crossings by risk. DOT provides one such model—the Accident Prediction Model—and some states have developed their own models. The study reported that 19 states used DOT's model and 20 states used a different model. It also found that the DOT model and commonly used state models include some similar grade-crossing characteristics to predict accident risk. For example, the selected models reviewed all considered vehicle- and train-traffic volume, which FRA has found to be the strongest predictors of grade-crossing crashes. FRA makes its Accident Prediction Model available to states online through its Web Accident Prediction System, an online tool that uses FRA's crossing inventory, crossing collision history, and the DOT Accident Prediction Model to predict accident risk for grade crossings in each state. Only one of the eight states in our review used the system as its primary source for ranking grade-crossing risk; most of the others performed their own calculations to rank grade crossings. Officials from two states said that they believe their state-maintained data are more reliable than FRA's crossing inventory and explained that they go directly to their contacts at railroads to get updated information on factors such as train volume. Accident prediction models are only one source of information states use when selecting Section 130 Program projects. According to the state officials we spoke with, a variety of other considerations can also influence their decisions, including the following:

- proximity of projects to one another along a railroad "corridor," in order to gain efficiencies and reduce construction costs;

- requests from local jurisdictions or railroads, which may have information on upcoming changes at a grade crossing, such as higher train volume or new housing developments nearby, that would increase risk but would not yet be reflected in the accident prediction model; and

- availability of local funding to provide the required 10 percent match for Section 130 Program projects, while trying to spread the funds fairly across the state.

States may also consider grade crossings that have had close calls in the past, such as where a car narrowly avoided being hit by a train. FRA does not require railroads to report on these close calls, or "near misses"; however, according to state officials, railroads sometimes provide this information to states on an ad hoc basis. State officials from four of the eight states we spoke with said they considered near misses when selecting Section 130 Program projects. A 2004 Volpe Center report noted that studying close calls was a proactive way to improve safety. According to the report, FRA sponsored a workshop to learn about the benefits of collecting and analyzing close calls. However, stakeholders we interviewed noted challenges formalizing near-miss reporting. For example, Volpe Center officials said these reports are subjective in nature—what one engineer considers a close call, others may not.
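To illustrate the general logic of the risk models and cost-benefit comparisons discussed in this section, the sketch below ranks a few hypothetical crossings by a simplified hazard index and screens a candidate upgrade with a net-present-value calculation. It is a minimal sketch, not DOT's actual Accident Prediction Model or GradeDec: the protection factors, weights, discount rate, costs, and crossing records are all assumed values for illustration.

```python
# Illustrative ranking of grade crossings by a simple hazard index.
# NOT the DOT Accident Prediction Model; weights and factors are assumed.

PROTECTION_FACTOR = {"passive": 1.0, "flashing_lights": 0.6, "gates": 0.3}

def hazard_index(aadt, trains_per_day, device, accidents_5yr):
    """Relative risk score combining exposure, protection, and history."""
    exposure = aadt * trains_per_day        # the two strongest predictors
    history = 1.0 + 0.5 * accidents_5yr     # assumed weight on past accidents
    return exposure * PROTECTION_FACTOR[device] * history

def npv(annual_benefit, cost, years=20, rate=0.07):
    """Net present value of a project; horizon and discount rate are assumed."""
    present_value = sum(annual_benefit / (1 + rate) ** t
                        for t in range(1, years + 1))
    return present_value - cost

# Hypothetical inventory records: (id, AADT, trains/day, device, accidents in 5 yrs)
crossings = [
    ("123456A", 12000, 18, "gates", 1),
    ("234567B", 4000, 6, "passive", 0),
    ("345678C", 9000, 22, "flashing_lights", 2),
]

for cid, aadt, trains, device, accidents in sorted(
        crossings, key=lambda c: hazard_index(*c[1:]), reverse=True):
    print(cid, f"hazard index: {hazard_index(aadt, trains, device, accidents):,.0f}")

# Benefit-cost screening of a hypothetical gate upgrade at the top-ranked crossing:
print(f"NPV of upgrade: ${npv(annual_benefit=30_000, cost=250_000):,.0f}")
```

FRA's GradeDec tool, introduced next, layers cost-benefit calculations of this kind on top of the risk estimates.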
FRA developed another online tool—GradeDec—to allow states to compare the costs and benefits of various grade-crossing improvement projects. GradeDec uses models to analyze a project's risk and calculate cost-benefit ratios and net present value for potential projects. FRA provides state DOTs with on-site GradeDec workshops upon request. While FRA officials noted that many state and local governments have registered to use the tool, none of the state officials we spoke with identified GradeDec as a tool that they use to conduct cost-benefit analysis. Officials from two state DOTs we spoke with said that cost-benefit analyses could help them better identify and select the most cost-effective crossing safety projects in the future. According to the academic study of 50 states noted above, because of limited funding for grade-crossing improvements, states should consider the life-cycle costs of projects as well as net present value to help select them. As discussed later in this report, the small number of crashes at grade crossings can make it challenging to distinguish between different projects in terms of their effectiveness in reducing accidents. Finally, after they have considered risk factors and created a list of potential grade crossings for improvement, state officials, along with relevant stakeholders from railroads and local governments, conduct field reviews of the potential projects. According to state officials, these reviews help identify grade-crossing characteristics that may not be included in the accident prediction models, such as vegetation that would obstruct drivers' views.

FRA Has Taken Steps to Improve Inventory Data and Is Formalizing How Inspectors Will Validate the Data's Accuracy

In 2008, legislation was enacted mandating reporting by states and railroads to the National Highway-Rail Crossing Inventory, but the fact that reporting to the inventory remained voluntary until 2015 has had lingering effects on the completeness of the data in the inventory. In 2015, as mandated by statute, FRA issued regulations requiring railroads to update certain data elements for all grade crossings every 3 years. However, our analysis of FRA's crossing inventory found that 4 percent of grade crossings were last updated in 2009 or earlier. In addition, because MAP-21 repealed DOT's authority to issue regulations that would govern state reporting to the inventory, state reporting of grade-crossing data remains voluntary, according to FRA officials, and not all state-reported information is complete. Our analysis of state-reported data in FRA's crossing inventory found varying levels of completeness. For example, while some state-reported data fields were almost entirely complete, 33 percent of public grade crossings were missing data on posted highway speed. We also found that, of the crossings for which states reported the year the highway-traffic count was conducted, 64 percent of the counts for public grade crossings (another important risk factor) dated from 2009 or earlier. According to the 2015 final rule, FRA will continue to evaluate whether additional regulations to address state reporting are needed to maintain the crossing inventory's accuracy.
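The completeness and staleness gaps described above lend themselves to simple electronic checks like those summarized in appendix I. The sketch below is illustrative only: the field names and records are hypothetical, not FRA's actual inventory schema, and the gate-count test stands in for the relational-error checks described in the methodology.

```python
# Illustrative data-reliability checks on grade-crossing inventory records.
# Field names and records are hypothetical, not FRA's actual schema.
records = [
    {"id": "123456A", "posted_speed": 45, "traffic_count_year": 2016,
     "device": "gates", "gate_count": 2},
    {"id": "234567B", "posted_speed": None, "traffic_count_year": 2008,
     "device": "passive", "gate_count": 0},
    {"id": "345678C", "posted_speed": 30, "traffic_count_year": 2009,
     "device": "gates", "gate_count": 0},  # relational error: gates, no gate count
]

CUTOFF_YEAR = 2009  # mirrors the report's "2009 or earlier" staleness test

missing_speed = [r["id"] for r in records if r["posted_speed"] is None]
stale_counts = [r["id"] for r in records
                if r["traffic_count_year"] is not None
                and r["traffic_count_year"] <= CUTOFF_YEAR]
relational_errors = [r["id"] for r in records
                     if r["device"] == "gates" and r["gate_count"] == 0]

print("missing posted highway speed:", missing_speed)
print("traffic count from 2009 or earlier:", stale_counts)
print("device/gate-count mismatches:", relational_errors)
print(f"share of crossings with stale counts: {len(stale_counts) / len(records):.0%}")
```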
FRA officials told us that improving inventory data will help them better deploy their limited resources, particularly their grade-crossing inspectors, and said that they have taken steps to help improve the data. In 2017, FRA regional officials conducted field reviews to verify the latitude and longitude data for grade crossings in the inventory, data that states are responsible for updating. In addition, FRA expects its grade-crossing inspectors, as part of their inspections, to review and identify issues with the railroad- and state-reported inventory data. According to FRA officials, FRA has begun both to transition its 19 grade-crossing managers into grade-crossing inspectors and to hire new inspectors, for an eventual total of 24 inspectors and eight regional specialists to supervise their activities. To help ensure railroads' compliance with crossing inventory regulations, officials said that the inspectors will use spot checks to validate the inventory data by comparing grade-crossing characteristics in the field with the information railroads submitted to the inventory. In addition, FRA has incorporated information on inventory-reporting requirements into the grade-crossing inspectors' training. Finally, FRA is currently developing guidelines for the grade-crossing inspections similar to those for other FRA safety disciplines. FRA headquarters officials acknowledged that they are still clarifying the details for the inspections that will be included in the compliance manuals that inspectors will use. Specifically, they said they are still determining appropriate inspector workloads and drafting specific guidelines that will need to be integrated into FRA's regional inspection plans. FRA officials said they are working to develop and make available inventory inspection guidance to the grade-crossing managers and inspectors by December 31, 2018. In the meantime, FRA held training that included information on inventory-reporting requirements. In August 2018, FRA developed guidance for grade-crossing inspections specific to quiet zones in response to a recommendation we made in 2017. It is important that FRA meet its goal to issue similar guidance specific to reviewing the accuracy of the inventory data; without it, FRA cannot have reasonable assurance that inspections already under way are being conducted in a manner that allows inspectors to consistently identify data-reliability issues at each crossing.

States Reported Challenges Implementing Certain Project Types and Measuring Projects' Effectiveness, and FHWA's Efforts to Assess the Program's Effectiveness Have Limitations

The Program's Requirements and Other Challenges Cited by States Contribute to the Selection of Active-Warning Equipment Projects over Other Projects

About 75 percent of all Section 130 Program projects states implemented in fiscal year 2016 involved installing or updating active grade-crossing equipment, including warning lights and protective gates (see fig. 6). The prevalence of this type of project is in part due to the Section 130 Program requirement that states spend at least 50 percent of funds on protective devices. Other than eliminating a grade crossing, adding protective devices has long been considered the most effective way of reducing the risk of a crash. Officials from six of eight state DOTs we interviewed told us that the numbers and types of grade-crossing projects they implement depend on the amount of Section 130 Program funding they receive and the cost of the projects. As previously described, funds are set aside from the Highway Safety Improvement Program and distributed to states by a statutory formula that includes factors such as the number of grade crossings in each state.
Officials from six of the eight state DOTs we spoke to agreed that the set-aside nature of the program was crucial in allowing them to implement projects, many of which they said would not have been possible without Section 130 Program funds. For example, many said the formula funding ensures that grade-crossing projects are completed along with highway safety projects, particularly because grade-crossing fatalities account for so few deaths compared with highway deaths overall: fatalities resulting from grade-crossing crashes account for less than 1 percent of all highway-related fatalities. In fiscal year 2018, the funds distributed ranged from a low of approximately $1.2 million for eight states and Washington, D.C., to over $16 million for California and over $19 million for Texas. The number of grade crossings in those eight states and Washington, D.C., ranged from 5 to 380, while California had almost 6,000 and Texas had over 9,000. Project implementation costs varied by project type and ranged widely depending on project scope. Based on 2016 DOT data, some typical project costs were as follows:

- adding signs to passive grade crossings: $500 to $1,500;

- adding flashing lights and two gates to passive grade crossings: $150,000 to $300,000;

- adding four gates to grade crossings with flashing lights: $250,000;

- closing a grade crossing: $25,000 to $100,000; and

- separating a grade crossing from traffic (grade separation): $5 million to $40 million.

State officials we spoke with cited several challenges in pursuing certain controversial, innovative, or expensive project types that could help them address the evolving nature of risk at grade crossings, as well as difficulty in measuring the effectiveness of their projects. First, most state DOT officials said that the cost of grade-separation projects and, at times, the controversy of eliminating grade crossings through closure reduce the number of these projects, even while acknowledging that these are the most effective ways to improve safety. These types of projects made up only 3 percent of Section 130 Program projects in fiscal year 2016 (see fig. 6). Grade-separation projects are often more expensive than the annual Section 130 Program funding available to states: in 2018, only eight states received annual Section 130 Program funding sufficient to fund a $7-million grade-separation project. As discussed previously, to fund relatively expensive projects, states may choose to combine funds from multiple years. Also, states and railroads may make incentive payments to localities for the permanent closure of a grade crossing. In addition to the cost, most state DOT officials reported challenges obtaining local support for closing grade crossings. They said closures may inconvenience residents who use the road and force emergency responders to take longer routes, potentially slowing response times. Grade-separation projects address these safety concerns and may be more agreeable to residents, but they are substantially more expensive. While up to $7,500 in Section 130 Program funding can be used to help incentivize communities to close grade crossings, officials from some of our selected state DOTs said this amount is generally not enough to persuade local officials to support the closing. Second, officials from many state DOTs we interviewed also reported that the requirements of the Section 130 Program create challenges for them in implementing what they considered to be innovative projects.
For example, the program requirement that 50 percent of funds be used on protective devices, combined with what one researcher described to us as the tendency of states to implement "known" projects (i.e., protective devices), may impede states' selection of new, more innovative safety projects. Officials we interviewed from many state DOTs described challenges related to the program's requirements. They noted that they are prevented from using Section 130 Program funds for new types of safety technologies not yet incorporated into FHWA's Manual on Uniform Traffic Control Devices. As noted previously in this report, outside the Section 130 Program, FHWA is working with states and localities to explore whether new types of pavement markings at grade crossings, not in the manual, can improve driver behavior. One state DOT official we interviewed suggested changes to allow states to fund one grade-crossing pilot project per year or to use a set percentage of program funds to finance a pilot project that could help them explore promising but as yet unproven technologies. Third, state DOT officials from four of the eight selected states also said it can be difficult to find funding for the required 10 percent state match. As previously mentioned, while certain rail-safety projects are eligible for up to 100 percent federal funding, Section 130 Program projects are funded at a 90 percent federal share. According to DOT documentation we reviewed, only some states have a dedicated source for such a match, and state DOT officials from one of our selected states said their state cannot use state funds for the 10 percent match. Some state DOT officials said this situation can drive project selection; for example, they sometimes chose projects based on which localities or railroads were willing to provide matching funds or offer cost savings. Finally, many state officials cited challenges in measuring the effectiveness of grade-crossing projects in reducing crashes or the risk of crashes. In particular, state officials we spoke to said it can be difficult to use before-and-after crash statistics as a measure of effectiveness because of the low number and random nature of crashes. Also, as FRA research has shown and as FHWA and FRA have noted, reporting on before-and-after grade-crossing accident statistics can be misleading, given the infrequency of crashes and the occurrence of crashes that are not the result of grade-crossing conditions. States' required Section 130 Program annual progress reports to the Secretary of DOT call for states to report on the effectiveness of the improvements they made. FHWA reporting guidance suggests that states define effectiveness as the reduction in the number of fatalities and serious injuries after grade-crossing projects were implemented, consistent with statutory requirements. In addition, FHWA guidance states that consideration should be given to quantifying effectiveness in the context of fatalities and serious injuries. However, states often report no differences in crashes after specific projects were implemented, and there have been instances where states reported a slight increase in crashes. Such an increase does not necessarily mean that the project was not effective in reducing the overall risk of a crash. Also, not all projects are implemented at grade crossings where there has been a crash. Among other information, states also typically report information on funding and data on the numbers and types of projects implemented.
In addition, the extent to which states report projects' effectiveness varies greatly. Given states' responsibility for implementing the Section 130 Program and the differences in the amounts of funding they receive, FHWA officials said states should determine and report on the appropriate effectiveness metrics for their programs. According to FHWA officials, during the 2017 reporting year, a few states requested examples of what to include when reporting effectiveness, and FHWA responded with examples of various methods they could use, such as a benefit-cost ratio or the percentage decrease in fatalities, serious injuries, and crashes. Regardless of the difficulty in measuring the effectiveness of specific projects, most state DOT officials we interviewed stressed the importance of the Section 130 Program in funding grade-crossing projects.

FHWA Reports Provide Limited Insight into the Program's Effectiveness, and FHWA Has Not Evaluated Program Requirements in Light of Changing Risk Conditions

FHWA's biennial report to Congress is intended to describe the progress states are making in implementing projects to improve safety and to make recommendations for future implementation of the program. FHWA reviews states' annual Section 130 Program reports and uses them to formulate the report to Congress every 2 years. FHWA's 2018 report highlights that the Section 130 Program has seen great success since 1975, with fatalities decreasing approximately 74 percent even as vehicle and train traffic increased. The report described the latest available 10-year trend, from 2007 to 2016, as showing a 31 percent decrease in fatalities. Fatalities have also decreased when adjusted for train traffic. However, FHWA officials acknowledged in interviews with us that crashes and fatalities have remained roughly constant since about 2009, with more recent data showing a slight increase in fatalities over the last 2 to 3 years, consistent with the increases in overall roadway fatalities. The officials said increased train- and vehicle-traffic volumes could be contributing to that increase, in addition to other factors, such as more bicycle riders and pedestrians using grade crossings. As described earlier, states have generally already used Section 130 Program funding to address safety at the riskiest grade crossings by adding protective measures, typically lights and gates. Yet crashes continue to occur at these improved grade crossings. Given these trends and the challenges discussed earlier related to the requirements of the Section 130 Program, it is not clear whether the program remains effective in continuing to reduce the risk of crashes and fatalities at grade crossings. As required, FHWA's biennial report includes a section on "recommendations for future implementation" of the Section 130 Program, in which FHWA reports on challenges and actions being taken to address them. FHWA's 2018 report identified one of the same challenges we heard about from state DOT officials: the inability or unwillingness of local agencies to provide matching funds and the relatively low amount of funding designed to incentivize localities to close crossings. FHWA reported on its efforts to address these challenges, including by providing guidance, resources, and supportive training to states and local agencies and serving as a clearinghouse for innovative methods of supporting projects.
However, with the exception of the funding challenge, FHWA's most recent report does not include the other challenges state officials identified to us related to the requirements of the Section 130 Program, discussed above. These include program funding requirements that may impede innovative approaches and the difficulties of using before-and-after crash statistics to measure effectiveness. Many state DOT officials we spoke with said there may be an opportunity to more broadly assess the Section 130 Program at the national level. For example, it could be more informative to comprehensively assess crash trends over multiple years across the more than 1,700 crashes that occur nationwide, rather than the approximately 35 that occur on average within a state, and to identify strategies to address those trends. Doing so could help FHWA learn more about why crashes are continuing and what types of projects may be effective. Many state DOT officials we interviewed told us a comprehensive evaluation could help improve program effectiveness in a number of ways, including by enabling the program to better keep pace with rapid technological change and by re-examining eligibility requirements that limit states' flexibility to consider project types beyond engineering. Also, most state DOT officials we interviewed agreed that education and enforcement efforts are crucial to further improving safety, as did 8 out of 10 other stakeholders we spoke to, as well as officials from the Volpe Center and NTSB. However, according to FHWA officials, those project types are not allowed under the Section 130 Program's requirements. The officials said FHWA has partnered with FRA and NHTSA on research efforts, such as driver-behavior studies, to inform grade-crossing safety issues. However, the officials said that FHWA has not conducted a program evaluation of the Section 130 Program to consider whether the program's funding and other requirements allow states to adequately address ongoing safety issues such as driver behavior. FHWA officials said that there is no federal requirement for them to conduct such a program evaluation. We have previously reported that an important component of effective program management is program performance assessment, which helps establish a program's effectiveness—the extent to which a program is operating as it was intended and the extent to which it achieves what the agency proposes to accomplish. This type of evaluative information helps the executive branch and congressional committees make decisions about the programs they oversee. Assessing program performance includes conducting program evaluations, which are individual systematic studies that answer specific questions about how well a program is meeting its objectives. In addition, federal internal-control standards state that management should identify, analyze, and respond to significant changes in a program's environment that could pose new risks. FHWA officials said the fact that crashes and fatalities have held steady while the volume of train and vehicle traffic has increased is an indication that grade-crossing safety has continued to improve. However, FHWA's 2018 biennial report shows that fatalities per million train-miles have remained fairly constant since 2009.
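The rate FHWA reports is a simple normalization of fatality counts by train traffic. The sketch below uses invented counts and train-mile figures, not FRA or FHWA statistics, to show how the calculation works and why a flat raw count alongside rising traffic implies a falling rate.

```python
# Illustrative normalization of fatality counts by train traffic volume.
# All figures are invented for illustration, not FRA or FHWA statistics.
years = {
    2009: {"fatalities": 250, "train_miles": 700e6},
    2016: {"fatalities": 250, "train_miles": 780e6},
}

for year in sorted(years):
    d = years[year]
    rate = d["fatalities"] / (d["train_miles"] / 1e6)  # per million train-miles
    print(year, f"{rate:.3f} fatalities per million train-miles")
```

With an unchanged count and rising train-miles, the computed rate falls slightly; FHWA's observation is that the reported rate has instead stayed roughly flat, meaning fatalities have been keeping pace with traffic growth.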
As noted previously, FRA expects train and traffic volumes to continue to increase and has expressed concern that grade-crossing crashes and fatalities may also increase. Without conducting a program evaluation, FHWA cannot ensure that the Section 130 Program is achieving one of the national goals of the federal-aid highway program: to reduce fatalities and injuries. In addition, it is difficult to see how FHWA, in its biennial reports to Congress, could make informed recommendations for future program implementation without conducting a program evaluation to assess, among other things, whether program requirements first established some four decades ago continue to reduce fatalities and injuries. We note that some changes FHWA, working with FRA, identifies through a program evaluation as having merit could require a statutory change.

Conclusions

The continued crashes and fatalities at grade crossings equipped with devices intended to warn of a train's presence call into question whether the Section 130 Program is structured to help states continue making progress toward the national goal to reduce fatalities and injuries. An evaluation of the program's requirements could help determine whether Congress should consider better ways to focus federal funds to address the key factor in crashes: risky driver behavior. An FHWA program evaluation could also help determine whether, for example, states could more strategically target emerging safety problems if changes were made to the types of projects eligible for funding under the Section 130 Program. FRA's new grade-crossing inspectors are meant to increase the effectiveness of FRA's rail-safety oversight activities, and accordingly, these inspectors, along with FRA researchers, may be well positioned to help FHWA evaluate potential changes to improve the effectiveness of the Section 130 Program.

Recommendation for Executive Action

The Administrator of FHWA, working with FRA, should evaluate the Section 130 Program's requirements to determine whether they allow states sufficient flexibility to adequately address current and emerging grade-crossing safety issues. As part of this evaluation, FHWA should determine whether statutory changes to the program are necessary to improve its effectiveness. (Recommendation 1)

Agency Comments

We provided a draft of this report to DOT for review and comment. In written comments, reproduced in appendix II, DOT concurred with our recommendation. DOT also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Administrator of the Federal Highway Administration, and the Administrator of the Federal Railroad Administration. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology

This report examines (1) what has been the focus of the Federal Railroad Administration's (FRA) grade-crossing-safety research, (2) how states select and implement grade-crossing projects and what railroad- and state-reported data are available from FRA to inform states' decisions, and (3) the challenges states reported in implementing and assessing projects and the extent to which the Federal Highway Administration (FHWA) assesses the program's effectiveness. The scope of this work focused on the nation's more than 128,000 public grade crossings. We did not include private grade crossings, as states can only use Railway-Highway Crossings Program (commonly referred to as the Section 130 Program) funds to improve safety at public grade crossings. While FRA provides safety grants to address rail issues, including for grade-crossing projects, we focused our work on the Section 130 Program because it is the primary source of federal funding directed at grade-crossing-safety improvement. For each objective, we reviewed pertinent statutes and FHWA and FRA regulations and documents; interviewed FHWA and FRA program officials at headquarters; and conducted in-depth interviews with a non-generalizable sample of organizations that included officials from 4 freight and passenger railroads, 12 state agencies from 8 states, 6 FRA regional offices, and 8 FHWA state division offices. We also spoke with representatives from relevant associations and officials from NTSB and the Volpe Center. We selected these organizations based on our initial background research, prior work, and input from other stakeholders, among other things. See the paragraph below for additional selection details and table 5 for a complete list of organizations we spoke with. We selected eight states as part of our non-generalizable sample for interviews: Arizona, California, Florida, Illinois, Missouri, New Jersey, North Carolina, and Pennsylvania. The states were selected to include a mix of state experiences based on a variety of factors, including the number of grade crossings and crashes at those crossings and the amount of Section 130 Program funding they received. Specifically, we selected four states from those in the top 25 percent of all states in terms of their number of grade crossings and the amount of Section 130 Program funds they received. We selected the other four states to include a mix of these factors. We also considered geographical diversity and recommendations from FRA and FHWA officials. Within these eight states, we conducted in-depth interviews with FHWA division staff, FRA regional staff, and state officials. A variety of state agencies administer the Section 130 Program within their state; the state officials we spoke with from our eight selected states worked for agencies such as state departments of transportation, corporation commissions, and public utility commissions. We also spoke with a non-generalizable sample of four railroads: Amtrak, CSX, Norfolk Southern, and Sierra Northern. We selected railroads based on a variety of factors, including geographic location and stakeholder recommendations. We also conducted additional work related to each of the objectives. To describe the focus of FRA's grade-crossing-safety research, we examined FRA research aimed at understanding the causes of grade-crossing crashes and identifying potential improvements, and we described FRA efforts to test new approaches that could improve safety.
We did not assess the quality of FRA's research, as that was beyond the scope of this engagement. Instead, we described the nature of the research. We also spoke with FRA research and development staff, Volpe Center researchers, and state partners about this work. To describe how states select and implement grade-crossing projects, and what FRA data are available to inform their decisions, we reviewed an academic study that included a literature review and interviews with state officials to describe how states select Section 130 Program projects. We spoke with the researcher and determined the study to be reliable for the purposes of our reporting objectives. We also spoke with officials from our eight selected states, FHWA division staff, and FRA regional staff, and reviewed the states' 2017 Section 130 Program reports. As part of this objective, we also assessed the reliability of data reported for all railroads in FRA's National Highway-Rail Crossing Inventory as of August 31, 2018. For public grade crossings that were not closed, we examined a selection of fields within the database to identify the frequency of missing data (see table 1), data anomalies (see table 2), relational errors, where two related data fields had values that were incompatible (see table 3), and when the data were last updated (see table 4). Specifically, we conducted electronic tests on the crossing inventory data to determine whether the data were within reasonable ranges, were internally consistent, and appeared complete. Before conducting our analysis, we filtered the inventory data to include only open, public, at-grade crossings. To understand FRA's efforts to improve its crossing inventory data, we interviewed FRA regional and headquarters staff and reviewed job descriptions for FRA's new grade-crossing inspectors. Finally, to determine the challenges states reported in implementing and assessing grade-crossing safety projects and the extent to which FHWA assesses the program's effectiveness, we reviewed program requirements, state project data, and other components from FHWA's 2016 and 2018 Section 130 Program biennial reports to Congress. We also reviewed FHWA's summary of fiscal year 2018 program funds provided to states and federal laws and guidance related to implementing projects and measuring performance. We interviewed state DOT officials from the eight selected states and other stakeholders about the challenges states reported in implementing and assessing projects, and we interviewed FHWA and FRA officials for their perspectives on managing the program, including how FHWA measures performance and assesses program effectiveness. We compared information collected from FHWA and FRA to federal internal-control standards and criteria on program evaluation identified in our previous work. In addition, we reviewed FHWA and FRA documents designed to guide states, such as the Grade Crossing Handbook, the Manual on Uniform Traffic Control Devices, the Action Plan and Project Prioritization Noteworthy Practices Guide, and other related documents. We conducted this performance audit from November 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Transportation

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Susan A. Fleming, (202) 512-2834, flemings@gao.gov.

Staff Acknowledgments

In addition to the individual named above, Maria Edelstein (Assistant Director); Gary Guggolz (Analyst in Charge); Steven Campbell; Tim Guinane; Ben Licht; Catrin Jones; Delwen Jones; SaraAnn Moessbauer; Malika Rice; Larry Thomas; and Crystal Wesco made key contributions to this report.
Why GAO Did This Study

Crashes at highway-rail grade crossings are one of the leading causes of railroad-related deaths. According to FRA data, in 2017 there were more than 2,100 crashes resulting in 273 fatalities. Since 2009, crashes have occurred at a fairly constant rate. The federal government provides states funding to improve grade-crossing safety through FHWA's Section 130 Program. The persistence of crashes and deaths raises questions about the effectiveness of the federal grade-crossing-safety program. GAO was asked to review federal efforts to improve grade-crossing safety. This report examines: (1) the focus of FRA's grade-crossing-safety research, (2) how states select and implement grade-crossing projects and what data are available from FRA to inform their decisions, and (3) the challenges states reported in implementing and assessing projects and the extent to which FHWA assesses the program's effectiveness. GAO analyzed FRA data; reviewed FRA's, FHWA's, and states' documents; reviewed a study of states' selection of projects; and interviewed FRA and FHWA headquarters and field staff, as well as officials from a non-generalizable sample of eight states, selected to include a mix in the number of grade crossings and crashes and geographic diversity.

What GAO Found

Research sponsored by the Federal Railroad Administration (FRA) has identified driver behavior as the main cause of highway-rail grade-crossing crashes and has found that factors such as train and traffic volume can contribute to the risk of a crash. (See figure.) Over 70 percent of fatal crashes in 2017 occurred at grade crossings with gates. To meet the requirements of the federal grade-crossing program, states are responsible for selecting and ensuring the implementation of grade-crossing improvement projects. Most state DOT officials and other relevant transportation officials use local knowledge of grade crossings to supplement the results of models that rank grade crossings based on the risk of an accident. States generally consider the same primary risk factors, such as vehicle and train traffic. FRA is taking steps to improve the data used in its model to help states assess risk factors at grade crossings. For example, FRA's grade-crossing inspectors will review and identify issues with railroad- and state-reported inventory data. FRA is currently developing guidelines, which it plans to finalize by the end of 2018, to implement these inspections as it has for other types of FRA inspections. Officials we spoke with in eight states reported challenges in pursuing certain types of projects that could further enhance safety, in part because of federal requirements. While safety has improved, many crashes occur at grade crossings with gates, and officials said there could be additional ways to focus program requirements to continue improving safety. States' and the Federal Highway Administration's (FHWA) reporting focuses on the program's funding and activity, such as the number and types of projects, yet the low number of crashes makes it difficult to assess the effectiveness of projects in reducing crashes and fatalities. FHWA reports the program has been effective in reducing fatalities by about 74 percent since 1975. However, since 2009 there have been about 250 fatalities annually—almost 1 percent of total highway fatalities. FRA expects the number of crashes to grow, in part due to the anticipated increase in rail and highway traffic.
An evaluation of the program should consider whether its funding and other requirements allow states to adequately address ongoing safety issues. FHWA officials said they are not required to perform such evaluations. GAO has previously reported on the importance of program evaluations to determine the extent to which a program is meeting its objectives. An evaluation of the program could lead FHWA to identify changes that could allow states to more strategically address problem areas.

What GAO Recommends

GAO recommends that FHWA evaluate the program's requirements to determine if they allow states the flexibility to address ongoing safety issues. The Department of Transportation concurred with GAO's recommendation.
Background

History of the CMO Position

DOD first took steps to establish a CMO role in May 2007, when it designated the Deputy Secretary of Defense as the department's CMO. Subsequently, Congress included a provision in the NDAA for Fiscal Year 2008 to codify the Deputy Secretary of Defense as the DOD CMO, establish a new position known as the Deputy Chief Management Officer (DCMO) to assist the Deputy Secretary, and name the Under Secretaries of the military departments as CMOs of their respective organizations. The military departments also established DCMO positions to assist the CMOs with overseeing their business operations. In addition, the NDAA for Fiscal Year 2009 required the secretary of each military department to establish an office of business transformation and develop business transformation plans, with measurable performance goals and objectives, to achieve an integrated management system for the business operations of each military department. Further, DOD's guidance states that the DOD DCMO should coordinate with the military department CMOs to identify and exchange information necessary to facilitate the execution of the Deputy Secretary of Defense's responsibilities in his role as the DOD CMO. In October 2008, DOD issued Department of Defense Directive 5105.82 to assign the authorities and responsibilities of the DCMO. Among other duties, the DCMO was responsible for recommending methodologies and measurement criteria to better synchronize, integrate, and coordinate the business operations of the department and for advising the Secretary of Defense on performance goals and measures and assessing progress against those goals. For a full list of the DCMO authorities and responsibilities identified in DOD Directive 5105.82, see appendix II.

CMO Statutory Authorities and Responsibilities

In December 2016, Congress initially established the standalone CMO position, to be effective on February 1, 2018, in section 901(c) of the NDAA for Fiscal Year 2017. In December 2017, Congress repealed and replaced this provision in the NDAA for Fiscal Year 2018 and later added additional responsibilities and functions in the John S. McCain NDAA for Fiscal Year 2019. Table 1 summarizes key CMO statutory authorities and responsibilities, and appendix II provides a more detailed comparison of these authorities and responsibilities.

Key Strategies for Implementing CMO Positions

In November 2007, we reported on key strategies for implementing CMO positions. We developed these strategies based on our work, in which we (1) gathered information on the experiences and views of officials at four organizations that rely on chief management officials and (2) convened a forum to gather insights from individuals with experience and expertise in business transformation, federal and private sector management, and change management. The forum brought together former and current government executives and officials from private business and nonprofit organizations to discuss when and how a CMO or similar position might effectively provide the continuing, focused attention essential for integrating key management functions and undertaking multiyear organizational transformations. Our work identified the following six key strategies:

- Define the specific roles and responsibilities of the CMO position.

- Ensure that the CMO has a high level of authority and clearly delineated reporting relationships.

- Foster good executive-level working relationships for maximum effectiveness.
- Establish integration and transformation structures and processes in addition to the CMO position.

- Promote individual accountability and performance through specific job qualifications and effective performance management.

- Provide for continuity of leadership in the CMO position.

DOD Has Taken Some Steps to Implement the CMO Position, but Key Issues Related to Authorities and Responsibilities Remain Unresolved

DOD Has Begun to Implement Its CMO Position and Restructure the OCMO, with a Focus on Data Responsibilities

In February 2018, DOD formally established the position of the CMO and an office in support of the CMO (OCMO). In establishing the office, the Secretary of Defense stated that all resources and personnel (military, civilian, and contractor) assigned within the existing DCMO office were to transfer to the OCMO. Generally, the department has been focused on updating organizational structures and strengthening the OCMO's data capabilities, as described below.

DOD Has Not Resolved Three Key Issues Related to the CMO's Authorities and Responsibilities

Despite its efforts to establish and restructure the OCMO, DOD has not fully addressed three key issues related to the CMO's statutory authorities and responsibilities: (1) how the CMO will exercise the authority to direct the military departments; (2) how the CMO will exercise oversight of the DAFAs; and (3) which responsibilities, if any, will transfer from the CIO to the CMO.

Unresolved Issue #1: The CMO's Authority to Direct the Military Departments on Business Reform Issues

The Secretary of Defense has charged the CMO with leading DOD's enterprise business operations and with unifying business management efforts across the department, along with other responsibilities set forth in section 132a of title 10, United States Code. Moreover, the NDAA for Fiscal Year 2019 directed the Secretary of Defense, acting through the CMO, to reform DOD's enterprise business operations across all organizations and elements of the department with respect to any activity relating to civilian resources management, logistics management, services contracting, or real estate management. Fulfilling these responsibilities depends, in part, on the CMO's visibility into the business operations of all components of the department, including the military departments, as well as the ability to identify and execute DOD-wide business reforms, including those that may affect the military departments. Congress addressed the issue of the CMO's relationship to the military departments in section 132a, which authorizes the CMO, subject to the authority, direction, and control of the Secretary of Defense and Deputy Secretary of Defense, to direct the secretaries of the military departments and the heads of all other elements of DOD on matters for which the CMO has responsibility under the statute. DOD leadership has provided some guidance regarding the CMO's responsibilities for efforts that are department-wide and therefore involve the military departments. For example, in a May 2017 memorandum, the Deputy Secretary of Defense directed all DOD components to conduct a thorough review of business operations throughout the department and to propose initiatives that drive increased effectiveness in pursuit of greater efficiency. The memorandum identified the DCMO as the lead for this effort and tasked the DCMO with integrating all initiatives. All responsibilities and authorities assigned to the DCMO were transferred to the CMO on February 1, 2018.
More recently, in May 2018, DOD issued its FY 2018-FY 2020 National Defense Business Operations Plan (Plan), which states that the CMO is personally responsible for overseeing implementation of business reforms. The Plan further establishes, and gives the CMO responsibility for carrying out, a strategic objective to improve and strengthen business operations through a move to DOD-enterprise or shared services and to reduce administrative and regulatory burden. However, DOD leadership has not determined how the CMO will exercise this authority in instances where the military departments have concerns or disagree with decisions that the CMO makes. In our discussions with the CMO offices of the Army, Navy, and Air Force, officials from each military department explained that they frequently met with the CMO and were involved in discussing business operation initiatives with potential for implementation across multiple military departments. According to these officials, these discussions were collaborative, and the CMO did not have to exercise his authority to direct the services. However, we found two instances in which the lack of a determination as to how the CMO is to direct the business-related activities of the military departments led to questions about the respective roles and authorities of the CMO and the military departments as they relate to business reform. In one case, officials from the military departments questioned the CMO's authority to make binding decisions; in the other, the military departments sought to pursue reform activities without CMO involvement and oversight, even though the CMO has responsibility for leading DOD's enterprise business reform efforts. First, officials told us that in a July 2018 meeting of the Reform Management Group (RMG), the CMO approved a decision to consolidate DOD's contract writing systems into a single system. According to OCMO officials, the effort to move to a single contract writing system would increase data visibility, lessen or eliminate redundant contracting needs, provide for greater management insight, and increase the buying power of the department. However, officials told us the military departments, which had voiced concerns about moving to one consolidated system in a previous RMG meeting, expressed reservations. Specifically, a DOD official who participated in the RMG meetings told us the military departments cited a concern about loss of individual authorities and requirements, among other issues. Several DOD officials we spoke with described the RMG meeting as the first time the question of the CMO's authority to make decisions for enterprise-wide business reform and to direct the military departments had been raised at an RMG meeting. According to officials who were present at the meeting, participants discussed whether the RMG is a voting body and what authority the CMO has to make unilateral decisions for the RMG. When we spoke with officials about this matter in January 2019, they said this question was still unresolved. Second, the Secretaries of the Army, Navy, and Air Force, in a December 10, 2018, memorandum to the Secretary of Defense, requested that the Secretary direct the military departments to jointly review organizations, activities, processes, and procedures that might be reformed or restructured to enhance lethality and readiness or reduce cost.
While the departments asked for support, as the Secretary deemed appropriate, from the Joint Staff, the Office of the Secretary of Defense, and others, they did not request support or involvement from the CMO. Further, the memorandum stated that the military department secretaries envision a process in which they would make recommendations directly to the Secretary. However, the memorandum made no mention of CMO involvement in the review, notwithstanding Congressional, Secretary of Defense, and Deputy Secretary of Defense direction that calls for the CMO to oversee DOD's business reform efforts. Without a determination by the Secretary or Deputy Secretary of Defense about how the CMO is to direct the business-related activities of the military departments, the CMO's ability to lead DOD's reform of its enterprise business operations and to direct the military departments may be limited, potentially leading to fragmented business reform efforts.

Unresolved Issue #2: The CMO's Oversight Responsibilities for the Defense Agencies and DOD Field Activities (DAFA)

DOD's 19 defense agencies and eight DOD field activities are intended to perform many of DOD's business operations, including consolidated supply and service functions such as human resources services, on a department-wide basis. We have previously identified numerous instances of fragmentation, overlap, and duplication and have recommended actions to increase coordination or consolidation to address related inefficiencies that affect the DAFAs. For example, in September 2018, we reported that there is fragmentation and overlap within the DAFAs that provide human resources services to other defense agencies or organizations within DOD. Our September 2018 report on the DAFAs also found that DOD does not comprehensively or routinely assess the continuing need for its DAFAs (GAO-18-592). DOD was statutorily required to periodically review the services and supplies each DAFA provides to ensure there is a continuing need for each, and that the provision of services and supplies by each DAFA, rather than by the military departments, is more effective, economical, or efficient (see 10 U.S.C. § 192(c)). Since 2012, DOD has relied on existing processes, such as its annual budget process, to fulfill this review requirement. However, DOD did not provide sufficient evidence that these processes satisfy the statute. For example, while DOD reviews the DAFAs during the budget process, it does not specifically review the provision of services by the DAFAs rather than the military departments. As discussed earlier in this report, the CMO is to exercise authority, direction, and control over the DAFAs that provide shared business services for the department, as designated by the Secretary or Deputy Secretary of Defense. In January 2018, the Deputy Secretary reported to Congress that the Secretary of Defense formally identified the Pentagon Force Protection Agency and Washington Headquarters Services (WHS) as the DAFAs that provide shared business services and directed that they would fall under the authority, direction, and control of the CMO. However, both of these organizations had already been identified as providing shared business services and had been aligned under the previous DCMO. In addition, the Deputy Secretary's January 2018 report to Congress did not explain why these two DAFAs, but not others, were designated as providing shared business services.
In the January 2018 report to Congress, the Deputy Secretary of Defense also stated that, under his direction, the DCMO and Director of CAPE were leading defense reform work that would result in recommendations on, among other things, any required organizational changes. According to DOD's report, such changes would include the designation of, and oversight arrangements for, other DAFAs providing shared business services that require CMO oversight. The recommendations were expected in late summer 2018. However, when we asked OCMO officials for a status update in November 2018, they acknowledged that they had not yet conducted the review. In November 2018, OCMO officials told us they had recently begun a review of the DAFAs but had not designated any additional DAFAs as providing shared business services. OCMO officials explained that the DAFAs were prioritized for review, with WHS selected for the first review. The review will assess what role WHS performs and how efficiently it performs that role, and it will compare WHS performance to commercial benchmarks, according to OCMO officials. As of January 2019, officials said they expected to complete the review of WHS on February 16, 2019. According to officials, the next DAFAs to be reviewed will be DLA, the Defense Finance and Accounting Service, and the Defense Information Systems Agency. In addition, OCMO officials said that they plan to conduct a review of business functions performed in multiple DAFAs to identify opportunities to consolidate shared services for greater efficiency. For example, because WHS performs some human resource functions, as do certain other DAFAs, the OCMO is assessing how human resources management can be improved across the department. OCMO officials indicated they expect additional DAFAs to be identified as providing shared business services as a result of this review. Additionally, officials said they expect that the review will be completed in January 2020, but they have not determined when or how the Secretary of Defense will designate additional DAFAs as providing shared business services. They have also not determined what those decisions would mean for the OCMO's management of its responsibility to exercise authority, direction, and control over those DAFAs. In section 921 of the John S. McCain NDAA for Fiscal Year 2019, Congress also expanded and codified the CMO's authority over the DAFAs by requiring the Secretary of Defense, acting through the Under Secretary of Defense (Comptroller), to direct the head of each DAFA specified by the Secretary for the purposes of section 921 to transmit its proposed budget for enterprise business operations for a fiscal year to the CMO for review, beginning in fiscal year 2020. Section 921 further provides that the CMO shall submit a report to the Secretary containing the CMO's comments and certification of whether each proposed budget achieves a required level of efficiency and effectiveness for enterprise business operations, consistent with guidance for budget review established by the CMO. Under section 921, the Secretary of Defense has discretion to determine which DAFAs' proposed budgets are subject to CMO review. In November 2018, OCMO officials told us that the Secretary of Defense had not yet designated any DAFAs as required to submit their budgets for review. However, they stated that the OCMO is working with the DOD Comptroller to determine how the DAFA budget review will be conducted.
They also said that they have hired consultants under an existing blanket purchase agreement to assist with developing a methodology for this review. OCMO officials told us they believed they would be ready to conduct the review by fiscal year 2020, as the statute requires. However, it is unclear whether this review will result in a determination of which DAFAs are required to submit their proposed budgets for review. Until the Secretary of Defense makes a determination regarding the CMO's relationship to the DAFAs, including whether additional DAFAs should be identified as providing shared business services and which DAFAs will be required to submit their proposed budgets for CMO review, the CMO's ability to effectively oversee and streamline the DAFAs' business operations may be limited.

Unresolved Issue #3: The Transfer of Responsibilities from the Chief Information Officer to the CMO

As described in table 1 of this report, section 910 of the NDAA for Fiscal Year 2018 provided that, effective January 1, 2019, the CMO would assume certain responsibilities for business systems and management that were formerly performed by the CIO. Section 903 of the John S. McCain NDAA for Fiscal Year 2019 clarified this provision by amending the statute (10 U.S.C. § 142) that established and provides responsibilities for the DOD CIO. However, in July 2018, DOD officials told us no formal action had been taken to determine which, if any, responsibilities would transition or to assess the resource impact this would have on both offices, because officials had concerns about the statutory requirement and how it would affect IT management at the department. For example, CMO officials expressed the belief that all IT roles and responsibilities should be consolidated under one position. We have previously found that having department-level CIO responsibilities performed by multiple officials could make the integration of various information and technology management areas, as envisioned by law, more difficult to achieve. The CMO told us in July 2018 that he had begun engaging informally with Congress to discuss the department's concerns about the transition of certain responsibilities from the CIO to the CMO, and that he would engage further once the newly confirmed CIO felt prepared to join those discussions. However, in November 2018, the Acting CMO told us that the OCMO was still exploring all of the authorities that Congress had provided and, as such, felt that further engagement with Congress was premature at this point. The Acting CMO added that she and the CIO had worked out an informal agreement regarding which areas each would manage, but she did not identify specific tasks that would transfer to the CMO or provide any details of this agreement. At the same time, OCMO officials acknowledged in November 2018 that the OCMO had not conducted an analysis to determine which responsibilities should formally transfer or what resource ramifications, if any, this transfer would have on both offices. Without an analysis to help DOD determine which duties should transfer from the CIO to the CMO, including identifying any associated resource impacts, DOD will remain reliant on this informal agreement. Such reliance could cause confusion within the department about who is responsible for key IT functions.
Moreover, section 3506 of title 44, United States Code, states that in similar circumstances, where a CIO is designated for DOD and for each military department, the respective duties of the CIOs shall be clearly delineated. DOD Lacks Guidance That Institutionalizes All of the CMO's Authorities and Responsibilities In part because the issues identified above have not been resolved, DOD agreed that it does not have department-wide guidance, such as a chartering directive, that fully and clearly institutionalizes the CMO position by articulating how all of the CMO's authorities and responsibilities are to be operationalized. The department has issued several documents that refer to some of the CMO's authorities and responsibilities, but these documents were issued as the CMO's role under the statute was evolving, and none of them, either individually or collectively, encompass all of the CMO's current authorities and responsibilities. For example: DOD Directive 5105.82 (Oct. 17, 2008) established the responsibilities and authorities of the DCMO. These responsibilities included, among others, advising the Secretary of Defense on performance goals and measures and assessing progress against those goals; and ensuring that strategic plans, performance goals, and measures were aligned with, and assured accountability to, DOD strategic goals. However, this document is now outdated—for example, it assigns the DCMO responsibilities related to the Defense Business Transformation Agency, which an OCMO official agreed no longer exists. Moreover, the directive does not reflect the additional authorities and responsibilities for the CMO position that are delineated in section 132a of title 10, United States Code, as amended. Table 3 in appendix II summarizes all authorities and responsibilities included in this directive. Secretary of Defense Memorandum (Feb. 1, 2018) established the CMO position and outlined its authorities and responsibilities consistent with section 132a of title 10, United States Code. The authorities and responsibilities outlined in this memorandum align closely with those specified for the CMO in the statute, but the memorandum does not explain how these authorities and responsibilities are to be operationalized. For example, this memorandum does not address how the CMO will interact with other DOD organizations, such as the military departments, as DOD traditionally has done through its chartering directives. Table 4 in appendix II summarizes authorities and responsibilities included in Secretary of Defense memorandums. Secretary of Defense Memorandum (July 12, 2018) addressed the CMO's role in supporting the Deputy Secretary of Defense on enterprise management and performance accountability. According to this memorandum, the CMO supports the Deputy Secretary of Defense, in his capacity as the department's Chief Operating Officer, by ensuring that all DOD leaders are unified and aligned across all assigned responsibilities and functions through strong management practices, integrated processes, and best-value business investments. However, the CMO's responsibilities for supporting the Deputy Secretary of Defense as the Chief Operating Officer, as outlined in this memorandum, are not specified in any other relevant guidance document.
CMO Action Memorandum (July 27, 2018) responded to the Secretary's February 1, 2018 memorandum and restated several of the CMO's authorities and responsibilities, consistent with section 132a of title 10, United States Code, and provided information on the plans to restructure the OCMO and to establish the CMO Action Group. The Secretary of Defense's July 12, 2018 Memorandum directed the Deputy Secretary of Defense to provide amplifying guidance on CMO responsibilities and authorities emanating from statute and to delegate additional discretionary authorities or responsibilities to the CMO. Issuance of this amplifying guidance would be consistent with one of the key strategies we identified for implementation of a CMO position—clearly defining roles and responsibilities of the position and communicating them throughout the organization. In November 2018, however, officials told us that they expected the CMO vacancy to delay progress on codifying any decisions on the CMO's statutory and discretionary authorities in a chartering directive. Additionally, in November 2018, a senior OCMO official stated that the office needed to complete its reorganization before the department issued updated guidance. Until the Deputy Secretary of Defense resolves the issues previously discussed and issues guidance (such as a chartering directive) to codify the CMO's authorities and responsibilities and specify how those are to be operationalized, questions regarding the extent of the CMO's authority and responsibility are likely to persist, preventing a shared understanding across the department of the CMO's role. Further, the lack of guidance could affect the ability of the department to make progress in conducting necessary business reforms—one of three key priorities identified in the 2018 National Defense Strategy. Conclusion DOD has made progress in implementing some of the authorities and responsibilities Congress has provided the CMO. However, DOD has not resolved several key issues that limit its ability to implement all statutory authorities and responsibilities. Specifically, DOD has yet to determine how the CMO will exercise authority to direct the military departments and exercise direction and control over DAFAs that provide shared business services. Additionally, without analyzing the authorities and responsibilities that will transfer from the CIO to the CMO and the resource impact, if any, those new responsibilities will have on the OCMO, DOD risks creating confusion within the department about which official is responsible for key information technology functions. While DOD has issued several documents delineating some of the CMO's authorities and responsibilities, the department does not currently have formal and current guidance, such as a DOD chartering directive, that institutionalizes all of the CMO's authorities and responsibilities. Considering the evolution of the CMO's authorities and responsibilities since the position was created, guidance that fully encompasses all CMO authorities and responsibilities and explains how they are to be operationalized could help to institutionalize and sustain the position beyond the tenure of the current acting CMO. Recommendations for Executive Action We are making the following four recommendations to the Secretary of Defense: The Secretary of Defense should ensure that the Deputy Secretary of Defense makes a determination as to how the CMO is to direct the business-related activities of the military departments.
(Recommendation 1) The Secretary of Defense should ensure that the Deputy Secretary of Defense makes a determination regarding the CMO's relationship with the DAFAs, including whether additional DAFAs should be identified as providing shared business services and which DAFAs will be required to submit their proposed budgets for enterprise business operations to the CMO for review. (Recommendation 2) The Secretary of Defense should ensure that the CMO and CIO conduct an analysis to determine which responsibilities should transfer from the CIO to the CMO, including identifying any associated resource impacts, and share the results of that analysis with the Congress. (Recommendation 3) The Secretary of Defense should ensure that the Deputy Secretary of Defense, on the basis of the determinations regarding the CMO's statutory and discretionary authorities, codifies those authorities and how they are to be operationalized in formal department-wide guidance. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. In its written comments, which are reproduced in appendix III, DOD concurred with our recommendations and described ongoing and planned actions to address them. We are sending copies of this report to the Acting Secretary and Acting Deputy Secretary of Defense, the Acting DOD Chief Management Officer, the DOD Chief Information Officer, the Director, Cost Assessment and Program Evaluation, and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or fielde1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Scope and Methodology To examine the extent to which DOD has implemented the authorities and responsibilities of its Chief Management Officer (CMO) position and issued guidance communicating within the department the authorities and responsibilities of the position, we reviewed related laws and key documents, such as memorandums issued by the Secretary of Defense that outline some of the CMO's authorities and responsibilities. To understand the authorities and responsibilities that Congress and DOD have assigned to this position, we reviewed section 901 of the National Defense Authorization Act (NDAA) for Fiscal Year 2017, which initially created the CMO position effective February 1, 2018; section 910 of the NDAA for Fiscal Year 2018, which codified and expanded the CMO's authorities and responsibilities; and section 921 of the John S. McCain NDAA for Fiscal Year 2019, which further expanded the CMO's authorities and responsibilities. We reviewed DOD's August 2017 Report to Congress and its April 2018 National Defense Business Operations Plan. We also reviewed our November 2007 report on key strategies for implementing CMO positions. To understand ongoing actions to implement the authorities and responsibilities given to the CMO position, we interviewed DOD's former CMO, who served from February to November 2018; the current acting CMO; and, in July 2018, the chiefs of the five directorates within the Office of the CMO (OCMO), or their representatives, to understand the responsibilities of these directorates.
We also met with the nine reform teams charged with implementing initiatives to, among other things, move DOD toward an enterprise-wide, shared-service model. Additionally, we reviewed documentation from the reform teams to understand what business operation reform initiatives the CMO has prioritized and what progress has been made to implement and monitor these initiatives. To understand key initiatives DOD is pursuing to improve its business operations and how it monitors implementation of those initiatives, we attended demonstrations of DOD's cost management framework and its reform team portal. We also met with an official from DOD's Cost Assessment and Program Evaluation (CAPE) Office to gain additional insights on oversight of the reform teams from one of the co-chairs on the Reform Management Group. Additionally, we reviewed documentation from the OCMO containing personnel numbers and funding levels to determine the level and type of resources available to the CMO to assist in carrying out his responsibilities. To understand how the CMO collaborates with other DOD entities to lead business operation reform and how the responsibilities of the CMO and Chief Information Officer (CIO) may change, we met with officials from the Office of the DOD CIO. To understand how the CMO is interacting with and influencing the military departments' business operations, we met with officials from the Army, Air Force, and Navy CMO and CIO offices. We performed our work under the authority of the Comptroller General to conduct evaluations to assist Congress with its oversight responsibilities. We conducted this performance audit from February 2018 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Summary of Key Authorities and Responsibilities for the Department of Defense Chief Management Officer Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Elizabeth A. Field, (202) 512-2775 or fielde1@gao.gov. Staff Acknowledgments In addition to the contact named above, Sally Newman (Assistant Director), Tracy Barnes, Margaret Best, Arkelga Braxton, William Carpluk, Timothy DiNapoli, Michael Holland, Chad Johnson, Kristi Karls, William Lamping, Ned Malone, Jared Sippel, Susan Tindall, Sarah Veale, and Lillian Yob made key contributions to this report.
Why GAO Did This Study DOD spends billions of dollars each year to maintain key enterprise business operations intended to support the warfighter, including systems and processes related to the management of contracts, finances, the supply chain, and support infrastructure. The 2018 National Defense Strategy identified reform of DOD's business practices as one of DOD's three strategic goals. GAO has previously reported that weaknesses in these business operations have resulted in inefficiencies and billions of dollars wasted. GAO has also identified the need for a CMO with significant authority and experience to focus concerted attention on DOD's long-term business transformation efforts. Congress initially established such a position in the National Defense Authorization Act for Fiscal Year 2017. This report evaluates the extent to which DOD has implemented its CMO position and issued guidance to communicate within the department the authorities and responsibilities of the position. GAO analyzed the statutory authorities and responsibilities assigned to the CMO position and evaluated DOD's actions to implement them. What GAO Found The Department of Defense (DOD) has taken steps to implement its Chief Management Officer (CMO) position, which has been given responsibility for managing DOD's business operations; however, unresolved issues remain for DOD to fully institutionalize the CMO's authorities and responsibilities. DOD has restructured the Office of the CMO (OCMO) to more closely align with the CMO's statutory authorities and responsibilities. Further, the OCMO is working to strengthen its data capabilities and has hired a Chief Data Officer and formed a Data Management and Analytics Steering Committee. Additionally, OCMO officials told GAO they are establishing cost baselines for each of DOD's major business functions. However, DOD has not fully addressed three key issues related to the CMO's authorities and responsibilities: The CMO's authority to direct the military departments on business reform issues. The law gave the CMO authority to direct the secretaries of the military departments on matters over which the CMO has responsibility. However, DOD has not determined how the CMO will exercise this authority, particularly when there is disagreement between the departments and the CMO. The CMO's oversight responsibilities for the Defense Agencies and DOD Field Activities (DAFAs). The CMO is responsible for exercising authority, direction, and control over the designated DAFAs that provide shared business services—those business functions, such as supply chain and logistics and human resources operations, that are provided across more than one DOD organization. However, DOD has not determined how the CMO will exercise this authority, such as which DAFAs will submit their proposed budgets for CMO review. Transfer of responsibilities from the Chief Information Officer to the CMO. Under the law, the CMO will exercise responsibilities relating to business systems and management that previously belonged to the Chief Information Officer. However, DOD has not determined which, if any, responsibilities will transition from the Chief Information Officer to the CMO or assessed the impact of such a transition on associated resources. In part because these issues remain unresolved, DOD agreed that it does not have department-wide guidance that fully and clearly articulates how the CMO's authorities and responsibilities should be operationalized.
Making determinations on the three unresolved issues and issuing guidance would help ensure a shared understanding throughout the department of the CMO's role in leading DOD's enterprise-wide business reform efforts. What GAO Recommends GAO is making four recommendations, including that DOD should address each of the three unresolved issues that impede its progress in institutionalizing statutory authorities and responsibilities, and issue guidance, such as a chartering directive, that addresses how the CMO's authorities should be operationalized. DOD concurred with GAO's recommendations.
Background CDC—an operating division of the Department of Health and Human Services (HHS)—serves as the national focal point for disease prevention and control, environmental health, and promotion and education activities designed to improve the health of Americans. The agency is also responsible for leading national efforts to detect, respond to, and prevent illnesses and injuries that result from natural causes or the release of biological, chemical, or radiological agents. To achieve its mission and goals, the agency relies on an array of partners, including public health associations and state and local public health agencies. It collaborates with these partners on initiatives such as monitoring the public’s health, investigating disease outbreaks, and implementing prevention strategies. The agency also uses its staff located in foreign countries to aid in international efforts, such as guarding against global diseases. Table 1 describes the organization of CDC. CDC is staffed by approximately 20,000 employees across the United States and around the world. For fiscal year 2017, according to agency officials, the agency’s total appropriation was approximately $12 billion, of which it reported spending approximately $424 million on information technology. In addition, the officials stated that approximately $31 million (or about 7.3 percent of the amount spent on information technology) was for information security across all CDC information technology investments. CDC Relies on Information Systems to Help Achieve Its Mission CDC relies extensively on information technology to fulfill its mission and support related administrative needs. Among the approximately 750 systems reported in its inventory, the agency has systems dedicated to supporting public health science, practice, and administration. All of these systems rely on an information technology infrastructure that includes network components, critical servers, and data centers. CDC Has Defined Organizational Security Roles and Responsibilities At CDC, the chief information officer (CIO) is responsible for establishing and enforcing policies and procedures protecting information resources. The CIO is to lead the efforts to protect the confidentiality, integrity, and availability of the information and systems that support the agency and its operations, and is to report quarterly to the HHS CIO on the overall effectiveness of CDC’s information security and privacy program, including the progress of remedial actions. The CIO designated a chief information security officer (CISO), who is to oversee compliance with applicable information security and privacy requirements of the agency. The CISO, among other things, is responsible for providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, and disruption of information and information systems that support the operations and assets of the agency. To further ensure information security compliance, information systems security officers (ISSO) are responsible for managing the information security program within their respective organizations and report on security program matters to the CISO, including computer security-related incidents. ISSO responsibilities include ensuring that vendor-issued security patches are expeditiously installed and that system owners establish processes for timely removal of access privileges when a user’s system access is no longer necessary. 
In addition, security stewards are to perform operational security analyses supporting the efforts of the ISSO. Further, business stewards serve as program managers, accepting full accountability for the operations of the systems and ensuring that security is planned, documented, and properly resourced for each aspect of the information security program. Federal Laws and Guidance Establish Security Requirements to Protect Federal Information and Systems The Federal Information Security Modernization Act (FISMA) of 2014 provides a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. FISMA assigns responsibility to the head of each agency for providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information systems used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency. The law also delegates to the agency CIO (or comparable official) the authority to ensure compliance with FISMA requirements. The CIO is responsible for designating a senior agency information security officer whose primary duty is information security. The law also requires each agency to develop, document, and implement an agency-wide information security program to provide risk-based protections for the information and information systems that support the operations and assets of the agency. In addition, FISMA requires agencies to comply with National Institute of Standards and Technology (NIST) standards, and the Office of Management and Budget (OMB) requires agencies to comply with NIST guidelines. NIST Federal Information Processing Standards (FIPS) Publication 199 requires agencies to categorize systems based on an assessment of the potential impact that a loss of confidentiality, integrity, or availability of such information or information system would have on organizational operations, organizational assets, individuals, other organizations, and the nation. NIST FIPS 200 requires agencies to meet minimum security requirements by selecting the appropriate security controls, as described in NIST Special Publication (SP) 800-53. This NIST publication provides a catalog of security and privacy controls for federal information systems and a process for selecting controls to protect organizational operations and assets. The publication provides baseline security controls for low-, moderate-, and high-impact systems, and agencies have the ability to tailor or supplement their security requirements and policies based on agency mission, business requirements, and operating environment. Further, in May 2017, the President issued an executive order requiring agencies to immediately begin using NIST's Cybersecurity Framework for managing their cybersecurity risks. The framework, which provides guidance for cybersecurity activities, is based on five core security functions:
Identify: Develop the organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities.
Protect: Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services.
Detect: Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event.
Respond: Develop and implement the appropriate activities to take action regarding a detected cybersecurity event.
Recover: Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity event.
According to NIST, these 5 functions occur concurrently and continuously, and provide a strategic view of the life cycle of an organization's management of cybersecurity risk. Within the 5 functions are 23 categories and 108 subcategories that include controls for achieving the intent of each function. Appendix II provides a description of the framework categories and subcategories of controls. Security Control Deficiencies Placed Selected CDC Systems at Risk We reported in June 2018 that CDC had implemented numerous controls over the 24 systems we reviewed, but had not always effectively implemented controls to protect the confidentiality, integrity, and availability of these systems and the information maintained on them. Deficiencies existed in the technical controls and agency-wide information security program that were intended to (1) identify risk, (2) protect systems from threats and vulnerabilities, (3) detect cybersecurity events, (4) respond to these events, and (5) recover system operations. These deficiencies increased the risk that sensitive personally identifiable and health-related information, including information regarding the transfer of biological agents and toxins dangerous to public health, could be disclosed or modified without authorization. As shown in table 2, deficiencies existed in all 5 core security function areas for the selected systems we reviewed. CDC Had Identified Risk and Developed Policies and Plans, but Shortcomings Existed Controls associated with the identify core security function are intended to help an agency develop an understanding of its resources and related cybersecurity risks to its systems, assets, data, and capabilities. These controls include identifying and assessing cybersecurity risk and establishing information security policies, procedures, and plans. We reported in June 2018 that, although CDC had taken steps to implement these controls, it had not (1) categorized the risk-related impact of a key system, identified threats, or reassessed risk for systems or facilities when needed; (2) sufficiently documented technical requirements in policies, procedures, and standards; and (3) described intended controls in facility security plans. CDC Did Not Appropriately Categorize at Least One Key System, but Assessed Risk to Some Extent at System and Entity-wide Levels CDC Categorized Systems Based on Potential Impact of Compromise, but Did Not Appropriately Categorize a Key General Support System As discussed earlier, FIPS Publication 199 requires agencies to categorize systems based on an assessment of the potential impact that a loss of confidentiality, integrity, or availability of such information or information system would have on organizational operations, organizational assets, individuals, other organizations, and the nation. For networks and other general support systems, NIST SP 800-60 notes that the categorization should be based on the high water mark of supported information systems, and on the information types processed, transmitted across the network, or stored on the network or support system. Further, CDC's architecture design principles state that high-impact systems are to be maintained on dedicated machinery and be physically and logically secured from lower-risk systems.
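To illustrate the high water mark concept, the following is a minimal sketch in Python; it is not drawn from CDC's or NIST's tooling, and the impact values shown are hypothetical:

```python
# Hypothetical illustration of the FIPS 199 "high water mark" rule:
# a general support system should be categorized at least as high as
# the most sensitive system or information type it supports.

IMPACT_LEVELS = {"low": 1, "moderate": 2, "high": 3}

def high_water_mark(supported_system_impacts):
    """Return the highest impact level among the supported systems."""
    return max(supported_system_impacts, key=lambda level: IMPACT_LEVELS[level])

# Hypothetical supported systems and their assigned impact levels.
supported = ["moderate", "high", "low", "high"]

print(high_water_mark(supported))  # -> "high"
# A support system categorized as "moderate" while carrying "high"
# systems, as described in the report, would violate this rule.
```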
CDC had categorized the 24 systems we reviewed, but the assigned impact level was not always appropriate. In this regard, the agency did not ensure that high-impact systems were logically secured from a lower-risk system. Specifically, seven selected high-impact systems relied on a general support system that the agency had categorized as a moderate-impact system (i.e., a lower-risk system). As a result, the high-impact systems were relying on controls in a less secure environment. Officials from the Office of the Chief Information Officer (OCIO) explained that the categorization of the supporting system was outdated based on changes to the agency's operating environment and that they planned to re-evaluate the assigned impact level. CDC Assessed Risk at the System Level, but Did Not Assess Threats, Document Risk-based Decisions, or Reassess Risk When Needed According to NIST SP 800-30, risk is determined by identifying potential threats to an organization and vulnerabilities in its systems, determining the likelihood that a particular threat may exploit vulnerabilities, and assessing the resulting impact on the organization's mission, including the effect on sensitive and critical systems and data. NIST also states that assessments should be monitored on an ongoing basis to keep current on risk-impacting changes to the operating environment. CDC had developed system-level risk assessments for the 8 selected mission-essential systems and had summarized its risks in a risk assessment report. However, only two of the eight risk assessments had identified potential threats, and only one of these assessments determined the likelihood and impact of threats to that system. Further, CDC had not always documented risks associated with less secure configuration settings or monitored its assessments to address changes to the operating environment. For example, among the 94 technical control deficiencies that we identified for the 24 systems we reviewed, OCIO officials stated that the agency had not implemented controls for 20 deficiencies due to technical constraints. However, the system risk assessments did not address the risks associated with the decisions not to implement those controls. OCIO officials also partially attributed 5 of the 94 technical control deficiencies to new cybersecurity threats and to threat vectors that turned initially sound architecture decisions into vulnerabilities. However, CDC had not addressed such changes in the risk assessments for the affected systems. By not assessing threats or the likelihood of their occurrence and impact, and by not documenting the risks, CDC lacks assurance that appropriate controls commensurate with the level of risk are in place. CDC Had a Process in Place to Assess Risk to Systems from an Entity-wide Perspective Beyond the system level, newly discovered threats or vulnerabilities may require an agency to make risk decisions from an entity-wide perspective. An entity-wide perspective is needed because the threats and vulnerabilities may affect more than specific systems. CDC had a process in place to assess risk from an entity-wide perspective. This process included regular meetings among OCIO and program office staff to discuss policy, threats, and incidents. Specifically, ISSOs held monthly meetings as a continuous monitoring working group to discuss policy updates. In addition, an OCIO official held quarterly briefings that included presentations on incident response tools, incident statistics, and potential threats.
OCIO officials also held ad hoc meetings, as necessary, regarding vulnerability and threat concerns when the agency received email alerts from the Federal Bureau of Investigation, the Department of Homeland Security (DHS), or HHS. CDC Had Not Updated Facility Risk Assessments In addition to assessing risks for systems, agencies are to assess the risk to their facilities. The Interagency Security Committee (ISC) requires agencies to determine the security level for federal facilities, and to conduct risk assessments at least once every 5 years for Level I and Level II facilities and at least once every 3 years for Level III, Level IV, and Level V facilities. However, the two facility risk assessments that we reviewed had not been updated in a timely manner. Specifically, the risk assessments, covering Level III and Level IV facilities that house the 24 reviewed systems, had last been updated in January 2009 and March 2014, respectively, 8 years and just over 3 years before our review in July 2017. According to a CDC physical security official, the agency had previously relied on a third-party assessor to perform the assessments. The official also said that the agency planned to conduct its own facility risk assessments and had recently developed procedures for conducting these assessments. Until it performs these assessments, CDC may not be aware of new risks to its facilities or the controls needed to mitigate the risks. CDC Had Documented Controls in Policies, Procedures, and Standards, but Had Not Included Certain Technical Requirements FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes policies and procedures that (1) are based on a risk assessment, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements. According to NIST SP 800-53, an agency should develop policies and procedures for each of the 18 NIST families of security controls to facilitate the implementation of the controls. CDC had documented numerous policies, procedures, and standards that addressed each of the 18 control families identified in NIST SP 800-53. For example, the agency had developed policies and procedures governing physical access to CDC facilities, role-based training of personnel with significant security responsibilities, security assessment and authorization of systems, and continuity of operations, in addition to standard operating procedures that covered numerous other controls. The agency had also developed the CDC IT Security Program Implementation Standards, which describes the agency's security program requirements and minimum mandatory standards for the implementation of information security and privacy controls. In addition, the agency had documented configuration standards, which specified minimum configuration settings for devices such as firewalls, routers, and switches, as well as Unix and Windows servers. However, these policies and standards sometimes lacked the technical specificity needed to ensure controls were in place. To illustrate, the agency had not sufficiently documented detailed guidance or instructions to address numerous technical control deficiencies we identified, such as insecure network devices, insecure database configurations, not blocking certain email attachments, and not deploying a data loss prevention capability.
According to OCIO officials, the agency's periodic reviews and updates to existing cybersecurity policies and standards did not reveal and address these issues. Nevertheless, without clear and specific guidance or instructions for implementing technical controls, the agency had less assurance that controls were in place and operating as intended. CDC Had Identified and Updated Controls in System Security Plans Annually, but Had Not Developed Facility Security Plans FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes subordinate plans for providing adequate information security for networks, facilities, and systems or a group of information systems, as appropriate. NIST states that plans should be reviewed and updated to ensure that they continue to reflect the correct information about the systems, such as changes in system owners, interconnections, and authorization status, among other things. HHS and CDC policies require that such plans be reviewed annually. In addition, the ISC requires that agencies develop and implement an operable and effective facility security plan. CDC standards require the organization to prepare a facility security plan (or similar document). CDC had developed security plans for the 8 selected mission-essential systems. With a few exceptions, the plans addressed the applicable security controls for those systems. The agency also had reviewed and updated the plans annually. However, CDC had not developed security plans for the facilities housing resources for the selected systems. Physical security officials stated that they had not developed security plans because they did not have a sufficient number of staff to develop them. Without comprehensive security plans for the facilities, CDC's information and systems would be at an increased risk that controls to address emergency situations would not be in place and personnel at the facilities would not be aware of their roles and responsibilities for implementing sound security practices to protect systems housed at these CDC locations. CDC Had Implemented Controls Intended to Protect Its Systems, but Deficiencies Existed The protect core security function is intended to help agencies develop and implement the appropriate safeguards for their systems to ensure achievement of the agency's mission and to support the ability to limit or contain the impact of a potential cybersecurity event. Controls associated with this function include implementing controls to limit access to authorized users, processes, or devices; encrypting data to protect its confidentiality and integrity; configuring devices securely and updating software to protect systems from known vulnerabilities; and providing training for cybersecurity awareness and performing security-related duties. Although CDC had implemented controls that were intended to protect its operating environment, we reported in June 2018 that the agency did not consistently (1) implement access controls effectively, (2) encrypt sensitive data, (3) configure devices securely or apply patches in a timely manner, or (4) ensure staff with significant security responsibilities received role-based training. CDC Did Not Consistently Implement Effective Access Controls A basic management objective for any agency is to protect the resources that support its critical operations from unauthorized access.
Agencies accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Access controls include those related to identifying and authenticating users, authorizing access needed to perform job duties, protecting system boundaries, and physically protecting information system assets. However, CDC had not consistently implemented these controls. CDC Implemented Enterprise-wide Identification and Authentication Controls, but Did Not Consistently and Securely Configure Password Controls for Certain Accounts on Devices and Systems NIST SP 800-53 states that agencies should implement multi-factor authentication for their users of information systems. Multi-factor authentication involves using two or more factors to achieve authentication. A factor is something you know (such as a password or personal identification number), something you have (such as a token or personal identity verification (PIV) card), or something you are (a biometric). Also, NIST and CDC policy state that information systems shall have password management controls established, to include minimum password complexity requirements, password lifetime restrictions, prohibitions on password reuse, and user accounts temporarily locked out after a certain number of failed login attempts during a specified period of time. CDC had applied enterprise-wide solutions to ensure appropriate identification and multi-factor authentication of its general user community through, for example, the use of PIV cards. However, instances of weak password management controls existed for certain accounts on network devices, servers, and database systems. According to OCIO officials, password control deficiencies existed primarily due to technical constraints, administrators not being aware of technical requirements, or administrators not adequately monitoring configuration settings. Without more secure password settings, CDC's information and systems are at an increased risk that unauthorized individuals could guess passwords and use them to obtain unauthorized access to agency systems and databases. CDC Authorized Users More Access than Needed to Perform Their Jobs NIST SP 800-53 states that agencies should employ the principle of least privilege, allowing only authorized access for users (or processes acting on behalf of users) that is necessary to accomplish assigned tasks. It also states that privileged accounts—those with elevated access permissions—should be strictly controlled and used only for their intended administrative purposes. CDC had implemented controls intended to ensure that users were granted the minimum level of access permissions necessary to perform their legitimate job-related functions. However, the agency had granted certain users more access than needed for their job functions, including excessive access permissions on a key server. According to OCIO officials, CDC systems had deficiencies related to restricting access primarily due to technical constraints or administrators not adequately monitoring configuration settings. By not appropriately restricting access, CDC's information and systems are at an increased risk that individuals could deliberately or inadvertently compromise database systems or gain inappropriate access to information resources.
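To make the password management controls described above concrete, the following is a minimal illustrative sketch in Python; the specific thresholds (a 12-character minimum, 60-day lifetime, and 5 failed attempts) are hypothetical examples, not CDC's actual settings:

```python
from datetime import datetime, timedelta

# Hypothetical policy thresholds, for illustration only.
MIN_LENGTH = 12
MAX_PASSWORD_AGE = timedelta(days=60)   # password lifetime restriction
MAX_FAILED_ATTEMPTS = 5                 # lockout threshold

def meets_complexity(password: str) -> bool:
    """Check minimum length and character-class complexity."""
    classes = [str.isupper, str.islower, str.isdigit]
    has_special = any(not c.isalnum() for c in password)
    return (len(password) >= MIN_LENGTH
            and all(any(f(c) for c in password) for f in classes)
            and has_special)

def password_expired(last_changed: datetime) -> bool:
    """Enforce the password lifetime restriction."""
    return datetime.now() - last_changed > MAX_PASSWORD_AGE

def violates_reuse(new_password: str, previous_hashes: list, hash_fn) -> bool:
    """Prohibit reuse of any previously used password."""
    return hash_fn(new_password) in previous_hashes

def should_lock_account(recent_failed_attempts: int) -> bool:
    """Temporarily lock the account after too many failed logins."""
    return recent_failed_attempts >= MAX_FAILED_ATTEMPTS
```

In practice, settings like these are enforced through operating system and device configuration rather than application code; the point of the sketch is simply that each control listed in the policy maps to a checkable rule.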
CDC Did Not Effectively Implement Boundary Controls to Ensure Network Integrity NIST SP 800-53 states that agencies should control communications at information systems' external boundaries. It states that, to manage risks, agencies should use boundary protection mechanisms to separate or partition computing systems and network infrastructures containing higher-risk systems from lower-risk systems. Although CDC had implemented multiple controls that were designed to protect system boundaries, the agency had not sufficiently separated higher-risk systems from lower-risk systems. According to OCIO officials, deficiencies in boundary protection controls existed due to new cybersecurity threats turning initially sound architecture decisions into vulnerabilities, technical constraints, and administrators not being aware of technical requirements or not adequately monitoring configuration settings. Without stronger boundary controls, CDC's information and systems are at an increased risk that an attacker could exploit these boundary deficiencies and leverage them to compromise CDC's internal network. CDC Physically Protected Information System Assets, but Did Not Consistently Ensure Access Remained Appropriate NIST SP 800-53 states that agencies should implement physical access controls to protect employees and visitors, information systems, and the facilities in which they are located. In addition, NIST states that agencies should review access lists detailing authorized facility access by individuals at the agency-defined frequency. In its standards, CDC requires implementation of the NIST special publication and requires that access lists detailing authorized facility access by individuals be reviewed at least every 365 days. CDC had implemented physical security measures to control access to certain areas and to ensure the safety and security of its employees, contractors, and visitors to CDC facilities. For example, CDC had issued PIV cards and Cardkey Proximity Cards to its employees and contractors, and had limited physical access to restricted areas based on the permissions it granted via these cards. However, the agency had not consistently reviewed authorized access lists. In this regard, CDC did not have a process in place for periodically reviewing the lists of individuals with access to rooms containing sensitive resources to ensure that such access remained appropriate. Without reviewing authorized access lists, CDC has reduced assurance that individual access to its computing resources and sensitive information is appropriate. CDC Had Not Consistently Encrypted Sensitive Authentication Data NIST SP 800-53 states that agencies should encrypt passwords both when stored and when transmitted, and configure information systems to establish a trusted communication path between the user and the system. Additionally, NIST requires that, when agencies use encryption, they use an encryption algorithm that complies with FIPS 140-2. CDC had used FIPS-compliant encryption for its PIV card implementation, but had not effectively implemented encryption controls in other areas. According to OCIO officials, encryption control deficiencies existed primarily due to technical constraints, administrators not being aware of a technical solution, or configuration settings not being adequately monitored. By not using encryption effectively, CDC limits its ability to protect the confidentiality of sensitive information, such as passwords.
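As one illustration of protecting stored passwords with approved algorithms, the sketch below uses PBKDF2 with HMAC-SHA-256 from Python's standard library (SHA-256 is a FIPS-approved hash, and PBKDF2 is specified in NIST SP 800-132). This is a generic example under those assumptions, not a description of CDC's implementation:

```python
import hashlib
import hmac
import os

# Derive a one-way, salted verifier for a password instead of storing
# the password itself.
def hash_password(password: str, salt: bytes = None) -> tuple:
    salt = salt or os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```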
CDC Had Not Consistently Configured Servers Securely or Applied Patches in a Timely Manner NIST SP 800-53 states that agencies should disable certain services with known security vulnerabilities. This includes configuring security control settings on operating systems in accordance with publicly available security checklists (or benchmarks) promulgated by NIST's National Checklist Program repository. This repository contains, for example, the security configuration benchmarks established by the Center for Internet Security (CIS) for Windows servers. NIST also states that agencies should test and install newly released security patches, service packs, and hot fixes in a timely manner. In addition, CDC policy required that software patches for remediating vulnerabilities designated as critical or high risk be applied to servers within 45 days of being notified that a patch is available, or within 7 days of when an exploit is known to exist. Further, agency policy specified that administrators configure Windows servers in accordance with the CDC-approved security benchmarks. CDC had documented security configuration baselines, but had not consistently configured security settings in accordance with prescribed security benchmarks or applied patches in a timely manner. For example:
CDC had configured Windows servers to run unnecessary services.
CDC had configured only about 62 percent of the security settings in accordance with prescribed benchmark criteria on the Windows and infrastructure servers supporting five systems that we reviewed.
During our site visit in April 2017, CDC had not installed 21 updates on about 20 percent of the network devices, including 17 updates that the vendor considered to be critical or high-risk. The oldest of the missing updates dated back to January 2015.
CDC had not updated database software supporting two selected systems to a more recent version that addressed vulnerabilities with a medium severity rating.
According to OCIO officials, CDC had deficiencies in configuration and patching primarily due to administrators not being aware that there was a technical solution or not adequately monitoring configuration settings. By not securely configuring devices and installing updates and patches in a timely manner, the agency is at increased risk that individuals could exploit known vulnerabilities to gain unauthorized access to agency computing resources. Staff Received Security Awareness Training, but At Least 15 Percent of Those with Significant Security Responsibilities Did Not Receive Role-Based Training According to NIST SP 800-53, agencies should provide adequate security training to individuals in a role such as system/network administrator and to personnel conducting configuration management and auditing activities, tailoring the training to their specific roles. In addition, one of the cybersecurity cross-agency priority goals requires that agencies implement training that reduces the risk that individuals will introduce malware through email and malicious or compromised websites. Consistent with NIST SP 800-53, CDC policy required network users to receive annual security awareness training. Accordingly, for fiscal year 2017, all CDC staff completed the required annual security awareness training. CDC policy also required that those staff identified as having significant security responsibilities receive role-based training every 3 years.
However, not all staff with significant security responsibilities received role-based training within the defined time frames. The agency used a tracking system to monitor the status of role-based training for 377 individuals who had been identified as having significant security responsibilities. As of May 2017, 56 (about 15 percent) of the 377 individuals had not completed the training within the last 3 years, and 246 (about 65 percent) of them had not taken training within the last year. In addition, CDC had not identified at least 30 other staff with significant security responsibilities who required role-based training. Specifically, none of the 18 security and database administrators for four selected systems were included among the individuals being tracked, although these administrators had significant security responsibilities. Further, the agency provided us with a list of 42 individuals whose job series indicated that they required role-based training. However, 12 of the 42 were not included among the tracked individuals. Furthermore, given the number of deficiencies identified and the rapidly evolving nature of cyber threats, CDC’s requirement that staff take role-based training only once every 3 years is not sufficient for individuals with significant cybersecurity responsibilities. According to OCIO officials, managers are responsible for identifying those individuals with significant security responsibilities. The process used to track training was manual and required an individual’s manager to specify training requirements. The officials noted that the agency plans to implement a new HHS annual role-based training requirement in fiscal year 2018 and that they intend to work to enhance oversight as the new requirement is implemented. The officials also stated that at least 10 of the 94 technical control-related deficiencies identified in our June 2018 report had resulted, at least in part, from staff not being aware of control requirements or solutions to address the deficiencies. As a result, CDC’s information and systems are at increased risk that staff may not have the knowledge or skills needed to appropriately protect them. CDC Had Not Effectively Implemented Controls Intended to Detect Incidents or Deficiencies The detect core security function is intended to allow for the timely discovery of cybersecurity events. Controls associated with this function include logging and monitoring system activities and configurations, assessing security controls in place, and implementing continuous monitoring. In June 2018, we reported that, although CDC had implemented controls intended to detect the occurrence of a cybersecurity event, it had not sufficiently implemented logging and monitoring capabilities or effectively assessed security controls. CDC Had Implemented Limited Logging and Monitoring Capabilities NIST SP 800-53 states that agencies should enable system logging features and retain sufficient audit logs to support the investigations of security incidents and the monitoring of select activities for significant security-related events. In addition, National Archives and Records Administration records retention guidance states that system files containing information requiring special accountability that may be needed for audit or investigative purposes should be retained for 6 years after user accounts have been terminated or passwords altered, or when an account is no longer needed for investigative or security purposes, whichever is later. 
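The retention rule just described has a "whichever is later" structure that is easy to get wrong; the following minimal sketch, using hypothetical dates, shows the comparison it implies:

```python
from datetime import date, timedelta

SIX_YEARS = timedelta(days=6 * 365)

def audit_log_retention_end(account_terminated_or_password_altered: date,
                            investigative_need_ends: date) -> date:
    """Keep records until 6 years after account termination/password
    alteration, or until no longer needed for investigative or security
    purposes, whichever is later."""
    six_year_mark = account_terminated_or_password_altered + SIX_YEARS
    return max(six_year_mark, investigative_need_ends)

# Hypothetical dates for illustration.
print(audit_log_retention_end(date(2017, 3, 1), date(2019, 6, 30)))
# -> 2023-02-28 (the 6-year mark controls, since it is later)
```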
NIST also states that agencies should monitor physical access to facilities where their information systems reside to detect physical security incidents. Further, NIST SP 800-53 states that agencies should monitor and control changes to configuration settings. Although CDC had implemented centralized logging and network traffic monitoring capabilities, the capabilities were limited. For example, the agency's centralized logging system used for security monitoring had a limited storage capacity and did not meet the National Archives and Records Administration requirements. In addition, CDC had not centrally collected and monitored security event data for many key assets connected to the network. As a result, increased risk existed that CDC would not have been able to detect anomalous activities that may have occurred from malware attacks over time. OCIO officials stated that, as a compensating measure, the agency prevents direct communications between workstations. However, such a measure does not allow the agency to detect potentially inconsistent activities that may have occurred from malware attacks within the same data center. CDC also had not consistently reviewed physical access logs to detect suspicious physical access activities, such as access outside of normal work hours and repeated access to areas not normally accessed. Program offices responsible for 7 of the 8 selected mission-essential systems did not conduct such a review. According to OCIO officials, the offices were not aware of the need for a review. However, without reviewing physical access logs, CDC has reduced assurance that the agency would detect suspicious physical access activities. Further, CDC had not routinely monitored the configuration settings of its systems to ensure that the configurations were securely set. For example, for at least 41 of 94 technical control deficiencies we identified, OCIO officials cited quality control gaps where the change management process or system administrators had not discovered deficiencies resulting from insecure configuration settings. Without an effective monitoring process in place for system configurations, the agency was not aware of insecure system configurations. CDC Did Not Effectively Test or Assess Controls to Detect Deficiencies FISMA requires each agency to periodically test and evaluate the effectiveness of its information security policies, procedures, and practices. The law also requires agencies to test the management, operational, and technical controls for every system identified in the agency's required inventory of major information systems at a frequency depending on risk, but no less than annually. In addition, NIST SP 800-53A identifies three assessment methods—interview, examine, and test—and describes the potential depth and coverage for each. Assessing a control's effectiveness based on an interview is likely less rigorous than examining a control; similarly, examining a control is likely less rigorous than testing the control's functionality. CDC had not sufficiently tested or assessed the effectiveness of the security controls for the 8 mission-essential systems that we reviewed. Although CDC annually assessed security controls of selected systems, the agency had only examined control descriptions in security plans to ensure accuracy. At least once every 3 years, the agency selected controls for a more in-depth assessment of the 8 mission-essential systems we reviewed.
However, CDC had assessed only 191 (about 7 percent) of 2,818 controls described in the security plans for the selected systems. In addition, the agency used methods for assessing controls that were often not rigorous enough to identify the control deficiencies that we identified. For example, as depicted in figure 1, CDC relied exclusively on interviews—a less rigorous method—to assess 20 percent of the 191 controls it assessed for the selected systems. The security control tests and assessments were insufficient in part because CDC had not developed comprehensive security assessment plans or had not consistently implemented the plans for the 8 selected mission-essential systems we reviewed. For example, one system’s assessment plan indicated that five controls should be assessed using a testing methodology; instead, however, the assessor conducted interviews to determine whether controls were effective or not. OCIO officials stated that the security control test and assessment process is manual and staffing is limited. They stated that the agency intends to rely increasingly on automated tools—such as the tools implemented by the Continuous Diagnostics and Mitigation program—for performing the assessments. Nevertheless, by not assessing controls in an in-depth and comprehensive manner, CDC has limited assurance that the security controls are in place and operating as intended. Further, without developing and implementing comprehensive assessment plans, assessments may not be performed with sufficient rigor to identify control deficiencies. CDC Had Implemented Processes for Responding to Incidents or Identified Deficiencies, but Did Not Always Take Timely Corrective Actions The respond core security function is intended to support the ability to contain the impact of a potential cybersecurity event. Controls associated with this function include implementing an incident response capability and remediating newly-identified deficiencies. Although CDC had implemented controls for incident response to detect cybersecurity events, we reported in June 2018 that the agency had not maintained adequate information to support its incident response capability or taken timely corrective actions to remediate identified control deficiencies. CDC Had Implemented Incident Response Capabilities, but Did Not Maintain Adequate Information NIST SP 800-53 and SP 800-61 state that agencies should develop and document an incident response policy with corresponding implementation procedures and an incident response plan, and keep them updated according to agency requirements. NIST also states that agencies should implement an incident handling capability, including an incident response team that consists of forensic/malicious code analysts. In addition, agencies are to provide incident response training for the team and test the incident response capability to determine the effectiveness of the response. Further, NIST states that agencies are to monitor incidents by tracking and documenting them and maintain records about each incident, including forensic analysis. Finally, National Archives and Records Administration guidance states that records and data relevant to security incident investigations should be retained for 3 years. CDC had implemented an incident response capability. The agency had developed policy, procedures, and a plan that addressed incident response, and updated them annually. 
CDC had an incident response team that managed all of the incident handling and response efforts for the agency, and conducted forensic analyses for reported security incidents. Team members had undergone training, such as an advanced network forensic and analysis course offered by a private firm. In addition, the agency had periodically tested its incident handling capability by conducting penetration testing exercises. These exercises allowed the team to test its real-time response capabilities.

CDC's incident response procedures state that incident tickets should include a description of actions taken, response time, and whether actions have been completed. The agency's procedures also require that computers affected by an incident be removed from the network immediately.

Nevertheless, CDC had shortcomings in implementing its incident response capability and monitoring procedures. For the 11 security incidents CDC considered most significant over a 19-month period ending in March 2017, the agency had not consistently described the actions taken, the response times, or whether remedial actions had been completed. The agency also had not maintained audit log records for its security incidents. For example, the agency described recommended actions for 10 of the 11 incidents, but did not describe the actions that had been taken. In addition, although incident response team officials told us that all incident ticket records had been saved, CDC had not retained system log data that supported incident resolution for at least 5 of the incidents. The agency's policy did not address record retention in accordance with National Archives and Records Administration guidance. Further, for two of the security incidents, the incident tickets did not clearly indicate when the two compromised workstations had been removed from the network.

According to OCIO officials, shortcomings in fully documenting incidents resulted from the organization being understaffed, primarily due to budget limitations and the inability to hire qualified personnel. Without effectively tracking and documenting information system security incidents, CDC faces increased risk that the impact of security incidents will not be fully addressed.

CDC Had Remedial Action Plans to Address Identified Deficiencies for Selected Systems, but Did Not Always Take Timely Corrective Actions or Have Plans for Other Needed Corrective Actions

FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in information security policies, procedures, or practices. NIST SP 800-53 states that agencies are to develop a plan of action and milestones (POA&M) for an information system to document the agency's planned remedial actions to correct identified deficiencies. CDC policy was consistent with the NIST guidelines.

CDC had developed POA&Ms for deficiencies identified by its security control assessments, but had not remediated the deficiencies in a timely manner. For each of the 8 selected mission-essential systems, the agency had created plans for correcting control deficiencies. However, the agency did not implement several remedial actions by their due dates. For example, expected completion dates had passed for correcting deficiencies associated with 4 of the 8 selected mission-essential systems.
For these 4 systems, the completion dates were 1 to 8 months beyond the due dates at the time of our review in September 2017. According to Office of the Chief Information Security Officer officials, program offices that own the systems did not always communicate status updates on remedial actions for their respective systems, even though some of the deficiencies may already have been corrected. Without effective communication to update its POA&Ms, CDC was not in a position to effectively manage its remedial actions and correct known deficiencies in a timely manner.

CDC Had Developed and Tested Plans for System Recovery, but Had Not Assessed the Risk Associated with the Close Proximity of an Alternate Processing Site

The recover core security function is intended to support timely recovery of normal operations to reduce the impact from a cybersecurity event. Controls associated with this function include developing and testing contingency plans to ensure that, when unexpected events occur, critical operations can continue without interruption or can be promptly resumed, and that information resources are protected. Losing the capability to process, retrieve, and protect electronically maintained information can significantly affect an agency's ability to accomplish its mission. If contingency planning is inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete information.

NIST SP 800-53 states that agency systems should have a contingency plan that includes the identification of key personnel and the systems' essential mission functions and addresses full information system restoration. For high-impact systems, NIST specifies that agencies test contingency plans at an alternate processing site that is separated from the primary processing site to reduce susceptibility to the same threats. In addition, NIST states that organizations should initiate corrective actions based on testing if they are needed.

As we reported in June 2018, CDC had developed and fully tested contingency plans for each of the 8 selected mission-essential systems that we reviewed. Each plan identified key personnel and their contact information, essential mission functions of the systems, and instructions on how to fully restore the systems in the event of a disruption. Additionally, between January 2015 and May 2017, CDC had tested whether the 8 systems could be recovered at their respective alternate sites, and had initiated corrective actions based on the results of the tests.

However, the alternate site for 6 of the 8 selected mission-essential systems was located in relatively close proximity to the main processing site. Although 2 systems had alternate sites located in another state, the alternate site for the other 6 systems was within the same metropolitan area. As a result, an event such as a natural disaster or substantial power outage could affect both the main and alternate sites for these systems, potentially rendering CDC unable to complete functions associated with its mission. Prompt restoration of service is necessary because the required recovery time for these systems ranged from 4 to 24 hours. Security plans for 3 of the systems recognized the hazards of having the sites within the same geographical region, but stated that CDC had accepted this risk.
According to OCIO officials, having a site farther away was cost-prohibitive; however, the officials had not documented this analysis or the associated risk of having the agency's processing sites located within the same geographical area. Without documenting the analysis and associated risk, CDC had less assurance that senior leadership was aware of the risk of agency systems being unavailable. As a consequence, senior leadership may not have been in a position to determine whether accepting the risk was warranted.

CDC Had Not Consistently or Effectively Implemented Elements of Its Information Security Program

An underlying reason for the information security deficiencies in selected systems was that, although the agency had developed and documented an agency-wide information security program, it had not consistently or effectively implemented elements of the program. FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes the following elements:

- periodic assessments of the risk and magnitude of the harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems that support the operations and assets of the agency;
- policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements;
- plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate;
- security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security;
- periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems;
- a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, or practices of the agency; and
- plans and procedures to ensure continuity of operations for information systems.

As discussed previously in this report, CDC had implemented aspects of each of these elements. For example, the agency had conducted risk assessments, developed security plans, assessed security controls, developed remedial action plans, and developed and tested contingency plans for each of the 8 selected mission-essential systems. In addition, the agency had documented numerous policies and procedures and ensured that staff had completed annual security awareness training. However, CDC's program had shortcomings.
For example, as discussed earlier in this report, CDC had not consistently or effectively:

- addressed threats, technical constraints, and the changing threat environment in its system risk assessments, or assessed the risk of having alternate processing sites within close proximity to each other;
- documented detailed technical requirements in policies and procedures, or facility controls in facility security plans;
- tracked and trained staff with significant security responsibilities;
- monitored configuration settings, comprehensively assessed security controls, or remediated deficiencies in a timely manner; or
- documented its cost analysis and the associated risk of having an alternate processing site within the same geographical region as its primary processing site.

Until CDC addresses these shortcomings and consistently and effectively implements all elements of its information security program, the agency will lack reasonable assurance that its computing resources are protected from inadvertent or deliberate misuse.

CDC Has Implemented Many of the Recommendations in Our June 2018 Report and Plans to Implement the Rest

In our June 2018 report, we made 195 recommendations to CDC to strengthen its technical security controls and bolster its agency-wide information security program. Specifically, we recommended that the agency take 184 actions to resolve technical control deficiencies by implementing stronger access controls, encrypting sensitive data, configuring devices securely, applying patches in a timely manner, strengthening firewall rules, and implementing logging and monitoring controls more effectively, among other actions. We also made 11 recommendations for CDC to improve its information security program by, among other things, assessing risks as needed, documenting more detailed technical requirements, monitoring and assessing controls more comprehensively, and remediating deficiencies in a timely manner.

Since the issuance of our June 2018 report, CDC has made significant progress in implementing the recommendations we made to resolve the technical security control deficiencies in the information systems we reviewed and to improve its information security program. In this regard, the agency has implemented many of the recommendations for improving technical security controls for the systems we reviewed and has developed plans to implement recommendations for enhancing its information security program.

Specifically, as of August 3, 2018, CDC had fully implemented 102 (55 percent) of the 184 recommendations we made to fortify the technical security controls over the systems we reviewed. In addition, the agency had partially implemented 20 (11 percent) of the 184 recommendations. In these instances, CDC had made progress toward implementing the recommendations, but had not completed all of the necessary corrective actions for us to close the recommendations. Therefore, these recommendations remain open. Further, CDC did not provide any evidence that it had implemented the remaining 62 technical control-related recommendations. Table 3 summarizes the status of CDC's efforts to implement the 184 recommendations that we made to resolve the technical control deficiencies, as of August 3, 2018.

By implementing 102 recommendations, CDC (as of August 3, 2018) reduced some of the risks associated with certain key activities.
Specifically, these efforts included two key activities that we had highlighted in our June 2018 report as being particularly vulnerable and requiring the agency's greater priority and attention: protecting network boundaries, and logging and monitoring security events for indications of inappropriate or unusual activity on systems. In addition, the agency had implemented several of our recommendations to rectify a number of the security control deficiencies. These efforts included strengthening firewall rules, implementing stronger access controls, configuring devices securely, and expanding its audit monitoring capabilities.

In addition, CDC had developed a plan of action and milestones (POA&M) for each of the identified technical control deficiencies and related recommendations that remained open as of August 3, 2018. The POA&Ms assigned organization responsibilities, identified estimated costs, identified points of contact, and established time frames for resolving the deficiencies and closing the related recommendations. The agency's plans called for it to implement the majority of the remaining open technical control-related recommendations by September 2019, and all recommendations by September 2020, as shown in figure 2.

Our June 2018 report also included 11 recommendations to CDC to improve its information security program. In particular, we recommended that the agency, among other things, evaluate system impact level categorizations to ensure they reflect the current operating environment; update risk assessments to identify threats and the likelihood of impact of the threat on the environment; and update the facility risk assessments. In addition, we recommended that the agency take the necessary steps to make sure staff with significant security roles and responsibilities are appropriately identified and receive role-based training; monitor the configuration settings of agency systems to ensure the settings are set as intended; update security control assessments to include an assessment of controls using an appropriate level of rigor; and remediate POA&Ms in a timely manner. Further, we recommended that the agency document the cost-benefit analysis with associated risk of having an alternate site within the same geographical region as the main site.

As of August 3, 2018, the agency had partially implemented 1 of the 11 information security program-related recommendations, but had not provided any evidence that it had implemented the remaining 10 recommendations. Regarding the partially implemented recommendation, CDC had provided role-based training to all personnel performing significant security responsibilities. However, the agency still needed to establish and automate the identification process and the tracking of training records for individuals needing specialized security role-based training. CDC had developed plans to fully implement this recommendation and each of the remaining 10 information security program-related recommendations by July 2019. Fully implementing the open recommendations is essential to ensuring that the agency's systems and sensitive information are not at increased and unnecessary risk of unauthorized use, disclosure, modification, or disruption.

Agency Comments

We received written comments on a draft of this report from CDC.
In its comments, which are reprinted in appendix III, the agency stated that it recognizes the risks associated with operating a large, global information technology enterprise and has implemented processes, procedures, and tools to better ensure the prevention, detection, and correction of potential incidents. CDC also said cybersecurity remains a high priority and that it takes seriously its responsibilities for protecting public health information and the data entrusted to it. To strengthen its cybersecurity program, the agency stated that it is restructuring and streamlining the cyber program and IT infrastructure of its Office of the Chief Information Officer. Further, CDC stated that it has leveraged GAO's limited official use only report, issued in June 2018, to accelerate its implementation, infrastructure, and software deployments to complete phases one and two of DHS's Continuous Diagnostics and Mitigation program. The agency also said it concurred with, and highlighted a number of actions that it had planned or begun taking to remediate, the 11 security program recommendations that we made to CDC in our June 2018 report.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, the department's Office of the Inspector General, the Director of CDC, and interested congressional parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov, or Dr. Nabajyoti Barkakati at (202) 512-4499 or barkakatin@gao.gov. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

Our objective was to assess the extent to which CDC had effectively implemented an information security program and controls to protect the confidentiality, integrity, and availability of its information on selected information systems. In June 2018, we issued a report which detailed the findings from our work in response to this objective. In the report, we made 184 recommendations to CDC to resolve the technical security control deficiencies in the information systems we reviewed and 11 additional recommendations to improve its information security program.

We designated that report as "limited official use only" (LOUO) and did not release it to the general public because of the sensitive information it contained. This report publishes the findings discussed in our June 2018 report, but we have removed all references to the sensitive information. Specifically, we deleted the names of the information systems and computer networks that we examined, disassociated identified control deficiencies from named systems, deleted certain details about information security controls and control deficiencies, and omitted an appendix that was contained in the LOUO report. The appendix contained sensitive details about the technical security control deficiencies in CDC's information systems and computer networks that we reviewed, and the 184 recommendations we made to mitigate those deficiencies. We also provided a draft of this report to CDC officials to review and comment on the sensitivity of the information contained herein and to affirm that the report can be made available to the public without jeopardizing the security of CDC's information systems and networks.
In addition, this report addresses a second objective that was not included in the June 2018 report. Specifically, this objective was to determine the extent to which CDC had taken corrective actions to address the previously identified security program and technical control deficiencies and related recommendations for improvement that we identified in the earlier report.

As noted in our June 2018 report, we determined the extent to which CDC had effectively implemented an information security program and controls to protect the confidentiality, integrity, and availability of its information on selected information systems. To do this, we initially gained an understanding of the overall network environment, identified interconnectivity and control points, and examined controls for the agency's networks and facilities. We conducted site visits at two CDC facilities in Atlanta, Georgia. To evaluate CDC's controls over its information systems, we used our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information. We based our assessment of controls on requirements identified by the Federal Information Security Modernization Act of 2014 (FISMA), which establishes key elements for an effective agency-wide information security program; NIST guidelines and standards; Department of Health and Human Services and CDC policies, procedures, and standards; and standards and guidelines from relevant security organizations, such as the National Security Agency, the Center for Internet Security, and the Interagency Security Committee.

We had reviewed a non-generalizable sample of the agency's information systems, focusing on those systems that (1) collect, process, and maintain private or potentially sensitive proprietary business, medical, and personally identifiable information; (2) are essential to CDC's mission; and (3) were assigned a Federal Information Processing Standard rating of moderate or high impact. Based on these criteria, we had selected eight mission-essential systems for our review. Of these systems, the agency had categorized 7 as high-impact systems and 1 as a moderate-impact system. For these 8 selected mission-essential systems, we had reviewed information security program-related controls associated with risk assessments, security plans, security control assessments, remedial action plans, and contingency plans.

To assess the safeguards CDC implemented for its systems, we had examined technical security controls for 24 CDC systems, including systems the agency designated as high-value assets. These included 10 key systems, 8 of which were the high- and moderate-impact mission-essential systems just described, 1 additional high-impact system, 1 additional moderate-impact system, and 14 general support systems. We selected the additional high-impact system because the agency re-categorized it as a high-impact system during our review. We selected the additional moderate-impact system because the agency used it to control physical access to highly sensitive CDC biologic lab facilities, including facilities that handle dangerous and exotic substances that cause incurable and deadly diseases.
We selected for review 10 key systems, 8 of which were mission-essential systems, that (1) collect, process, and maintain private or potentially sensitive proprietary business, medical, and personally identifiable information; (2) are essential to CDC's mission; (3) could have a catastrophic or severe impact on operations if compromised; or (4) could be of particular interest to potential adversaries. We also selected 14 general support systems that were part of the agency's network infrastructure supporting the 10 key systems. To review controls over the 10 key systems and 14 general support systems, we had examined the agency's network infrastructure and assessed the controls associated with system access, encryption, configuration management, and logging and monitoring.

For reporting purposes, we had categorized the security controls that we assessed into the five core security functions described in the National Institute of Standards and Technology's (NIST) cybersecurity framework. The five core security functions are:

- Identify: Develop the organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities.
- Protect: Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services.
- Detect: Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event.
- Respond: Develop and implement the appropriate activities to take action regarding a detected cybersecurity event.
- Recover: Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity event.

These core security functions are described in more detail in appendix II.

For the identify core security function, we had examined CDC's reporting for its hardware and software assets; analyzed risk assessments for the eight selected mission-essential systems to determine whether threats and vulnerabilities were being identified; reviewed risk assessments for two facilities; analyzed CDC policies, procedures, and practices to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems; and analyzed security plans for the eight selected systems to determine if those plans had been documented and updated according to federal guidance. We also evaluated the risk assessments for the two facilities that housed the 8 selected mission-essential systems.

For the protect core security function, we had examined access controls for the 24 systems. These controls included the complexity and expiration of password settings to determine if password management was being enforced; administrative users' system access permissions to determine whether their authorizations exceeded the access necessary to perform their assigned duties; firewall configurations, among other things, to determine whether system boundaries had been adequately protected; and physical security controls to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft. We also had examined configurations for providing secure data transmissions across the network to determine whether sensitive data were being encrypted.
In addition, we had examined configuration settings for routers, network management servers, switches, firewalls, and workstations to determine if settings adhered to configuration standards, and inspected key servers and workstations to determine if critical patches had been installed and were up to date. Further, we had examined training records to determine if employees and contractors had received security awareness training according to federal requirements, and whether personnel who have significant security responsibilities had received training commensurate with those responsibilities.

For the detect core security function, we had analyzed centralized logging and network traffic monitoring capabilities for key assets connected to the network, and analyzed CDC's procedures and results for assessing security controls to determine whether controls for the eight selected mission-essential systems had been sufficiently tested at least annually and based on risk. We also had reviewed the agency's implementation of continuous monitoring practices to determine whether the agency had developed and implemented a continuous monitoring strategy to manage its information technology assets and monitor the security configurations and vulnerabilities for those assets.

For the respond core security function, we had reviewed CDC's implementation of incident response practices, including an examination of incident tickets for 11 incidents, and had examined the agency's process for correcting identified deficiencies for the eight selected mission-essential systems.

For the recover core security function, we had examined contingency plans for the eight selected mission-essential systems to determine whether those plans had been developed and tested. In assessing CDC's controls associated with this function, as well as the other four core functions, we had interviewed Office of the Chief Information Officer officials, as needed.

Within the core security functions, as appropriate, we had evaluated the elements of CDC's information security program based on elements required by FISMA. For example, we analyzed risk assessments, security plans, security control assessments, and remedial action plans for each of the 8 selected mission-essential systems. In addition, we had assessed whether the agency had ensured staff had completed security awareness training and whether those with significant security responsibilities received commensurate training. We also had evaluated CDC's security policies and procedures.

To determine the reliability of CDC's computer-processed data for training and incident response records, we had evaluated the materiality of the data to our audit objective and assessed the data by various means, including reviewing related documents, interviewing knowledgeable agency officials, and reviewing internal controls. Through a combination of methods, we concluded that the data were sufficiently reliable for the purposes of our work.

To accomplish our second objective—on CDC's actions to address the previously identified security program and technical control deficiencies and related recommendations—we requested that the agency provide a status report of its actions to implement each of the recommendations.
For each recommendation that CDC indicated it had implemented as of August 3, 2018, we examined supporting documents, observed or tested the associated security control or procedure, and/or interviewed the responsible agency officials to assess the effectiveness of the actions taken to implement the recommendation or otherwise resolve the underlying control deficiency. Based on this assessment and CDC status reports, we assigned each recommendation to one of the following three categories:

- closed-implemented—CDC had implemented the recommendation;
- open-partially implemented—CDC had made progress toward, but had not completed, implementing the recommendation; and
- open-not implemented—CDC had not provided evidence that it had acted to implement the recommendation.

We conducted this performance audit from December 2016 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: The National Institute of Standards and Technology Cybersecurity Framework

The National Institute of Standards and Technology's cybersecurity framework consists of five core functions: identify, protect, detect, respond, and recover. Within the five functions are 23 categories and 108 subcategories, as described in the table.

Appendix III: Comments from Department of Health and Human Services

Appendix IV: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the individuals named above, Gary Austin, Jennifer R. Franks, Jeffrey Knott, and Chris Warweg (assistant directors); Chibuikem Ajulu-Okeke, Angela Bell, Sa'ar Dagani, Nancy Glover, Chaz Hubbard, George Kovachick, Sean Mays, Kevin Metcalf, Brandon Sanders, Michael Stevens, Daniel Swartz, and Angela Watson made key contributions to this report. Edward Alexander, Jr. and Duc Ngo (assistant directors); David Blanding and Christopher Businsky also provided assistance.
Why GAO Did This Study

CDC is responsible for detecting and responding to emerging health threats and controlling dangerous substances. In carrying out its mission, CDC relies on information technology systems to receive, process, and maintain sensitive data. Accordingly, effective information security controls are essential to ensure that the agency's systems and information are protected from misuse and modification.

GAO was asked to examine information security at CDC. In June 2018, GAO issued a limited official use only report on the extent to which CDC had effectively implemented technical controls and an information security program to protect the confidentiality, integrity, and availability of its information on selected information systems. This current report is a public version of the June 2018 report. In addition, for this public report, GAO determined the extent to which CDC has taken corrective actions to address the previously identified security program and technical control deficiencies and related recommendations for improvement. For this report, GAO reviewed supporting documents regarding CDC's actions on previously identified recommendations and interviewed personnel at CDC.

What GAO Found

As GAO reported in June 2018, the Centers for Disease Control and Prevention (CDC) implemented technical controls and an information security program that were intended to safeguard the confidentiality, integrity, and availability of its information systems and information. However, GAO identified control and program deficiencies in the core security functions related to identifying risk, protecting systems from threats and vulnerabilities, detecting and responding to cybersecurity events, and recovering system operations. GAO made 195 recommendations to address these deficiencies.

As of August 2018, CDC had made significant progress in resolving many of the security deficiencies by implementing 102 of 184 (about 55 percent) technical control recommendations, and partially implementing 1 of 11 information security program recommendations made in the June 2018 report. Additionally, CDC has created remedial action plans to implement the majority of the remaining open recommendations by September 2019. Until CDC implements these recommendations and resolves the associated deficiencies, its information systems and information will remain at increased risk of misuse, improper disclosure or modification, and destruction.
Background

Investments in federal IT have the potential to make agencies more efficient in fulfilling their missions. However, as we have previously reported, these investments too often result in failed projects that incur cost overruns and schedule slippages, while contributing little to mission-related outcomes. For example:

- The Farm Service Agency's Modernize and Innovate the Delivery of Agricultural Systems program, which was to replace aging hardware and software applications that process benefits to farmers, was halted in July 2014 after about 10 years and an investment of at least $423 million, while only delivering about 20 percent of the functionality that was originally planned.
- Defense's Expeditionary Combat Support System was canceled in December 2012, after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds.
- VA's Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program.
- OPM's Retirement Systems Modernization program was canceled in February 2011, after spending approximately $231 million on the agency's third attempt to automate the processing of federal employee retirement claims.
- DHS's Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion to the program, because the program did not meet cost-effectiveness and viability standards.
- The tri-agency (Defense, NASA, and the National Oceanic and Atmospheric Administration) National Polar-orbiting Operational Environmental Satellite System was a weather satellite program that was disbanded by the White House Office of Science and Technology Policy in February 2010 after the program spent 16 years and almost $5 billion.
- The VA Scheduling Replacement Project was terminated in September 2009 after spending an estimated $127 million over 9 years.

One approach to reducing software development risks is to divide investments into smaller parts, or increments. While a traditional waterfall software development effort usually is broadly scoped, multiyear, and produces a product at the end of a sequence of phases, an incremental development approach delivers software products in smaller modules with shorter time frames. This development technique has been recognized in prior law since 1996 and in OMB guidance since 2000. By following an incremental development approach, agencies have the potential to:

- deliver capabilities to their users more rapidly, giving them more flexibility to respond to changing agency priorities;
- increase the likelihood that each project will achieve its cost, schedule, and performance goals;
- obtain additional feedback from users, increasing the probability that each successive increment and project will meet user needs;
- more easily incorporate emerging technologies; and
- terminate a poorly performing investment, with fewer sunk costs.

Since 2000, OMB Circular A-130 has directed agencies to incorporate an incremental development approach into their policies and to ensure that their investments implement that approach. Further, since 2012, OMB has required that functionality be delivered at least every 6 months. In addition, FITARA states that OMB is to require in its annual IT capital planning guidance that covered agency CIOs certify that IT investments are adequately implementing incremental development, as defined in capital planning guidance issued by OMB.
Accordingly, in June 2015, OMB released two related sets of guidance on the implementation of FITARA that included instructions pertaining to CIO certification of adequate incremental development. In particular, agencies were to, among other things:

- Develop policies and processes which ensure CIO certification. OMB required agencies to define IT policies and processes which ensure that the CIO certifies that IT resources are adequately implementing incremental development. In the guidance, OMB defined adequate incremental development as the planned and actual delivery of new or modified technical functionality to users that occurs at least every 6 months for development of software or services.
- Report the status of CIO certification. OMB's guidance required agency CIOs to certify in each major IT investment's business case whether the investment's plan for the current year adequately implements incremental development. OMB uses the major IT business cases to monitor major investments once they are funded. Performance information on each major investment, including the status of incremental delivery, is made publicly available on the web-based IT Dashboard. In using the IT Dashboard, OMB intends to provide transparency and oversight into these agencies' investments. This public display of data is also intended to allow Congress and government oversight bodies, as well as the general public, to hold agencies accountable for the results and progress of the investments.

Further, OMB issued its fiscal year 2018 and fiscal year 2019 capital planning guidance in June 2016 and August 2017, respectively, which required agency CIOs to provide the certifications needed to demonstrate compliance with FITARA.

GAO Has Reported on Efforts to Improve IT Acquisitions Using Incremental Development

During the past several years, we have reported on a variety of challenges related to improving federal IT acquisitions through the use of incremental development. In 2011, we identified seven successful investment acquisitions and nine common factors critical to their success. Specifically, we reported that department officials had identified seven successful investments that best achieved their respective cost, schedule, scope, and performance goals. Notably, all of these were smaller increments, phases, or releases of larger projects. For example, the Defense investment in our sample was the seventh increment of an ongoing investment; Energy's system was the first of two phases; the DHS investment was rolled out to two locations prior to deployment to 37 additional locations; and Transportation's investment had been part of a prototype deployed to four airports.

Common factors critical to the success of three or more of the seven investments were:

1. Program officials were actively engaged with stakeholders.
2. Program staff had the necessary knowledge and skills.
3. Senior department and agency executives supported the programs.
4. End users and stakeholders were involved in the development of requirements.
5. End users participated in testing system functionality prior to formal end-user acceptance testing.
6. Government and contractor staff were stable and consistent.
7. Program staff prioritized requirements.
8. Program officials maintained regular communication with the prime contractor.
9. Programs received sufficient funding.

These critical factors help support OMB's objective of improving the management of large-scale IT acquisitions across the federal government.
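Before turning to our prior findings, it may help to make OMB's 6-month delivery definition concrete. The short sketch below shows one way an oversight team could test whether a project's release history satisfies a given delivery cadence. It is purely illustrative; the function and variable names are our own and are not drawn from OMB's guidance or from any agency tool.

```python
from datetime import date
from typing import List

def delivers_on_cadence(release_dates: List[date], max_gap_days: int = 183) -> bool:
    """Return True if new functionality was delivered at least every max_gap_days.

    OMB defined adequate incremental development as delivery at least every
    6 months (roughly 183 days); a 12-month threshold (365 days) can be tested
    by passing a different max_gap_days value.
    """
    if len(release_dates) < 2:
        # A single release cannot demonstrate an ongoing delivery cadence.
        return False
    ordered = sorted(release_dates)
    return all((later - earlier).days <= max_gap_days
               for earlier, later in zip(ordered, ordered[1:]))

# Hypothetical project with releases roughly every 5 months.
releases = [date(2016, 10, 3), date(2017, 3, 1), date(2017, 7, 28)]
print(delivers_on_cadence(releases))                    # True: meets the 6-month test
print(delivers_on_cadence(releases, max_gap_days=365))  # True: also meets 12 months
```

A check of this kind simply measures the largest gap between consecutive deliveries, which is the quantity at issue in the 6-month versus 12-month debate discussed below.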
In May 2014, we reported on the status of incremental development at five agencies (Defense, DHS, HHS, Transportation, and VA). We noted that these agencies planned to deliver functionality for fewer than half of the investments in 12-month cycles and that only about one-fourth of these investments would deliver in 6-month increments, as required by OMB. Additionally, OMB staff reported to us that they did not expect that many investments would meet the 6-month requirement. Therefore, we questioned whether a 6-month delivery requirement was an appropriate government-wide goal and whether OMB should instead consider a 12-month time frame, as called for in its IT Reform Plan. Accordingly, we recommended that OMB require projects to deliver functionality at least every 12 months. OMB disagreed with our recommendation, asserting that changing the requirement from 6 to 12 months would reduce the emphasis on incremental development that it had been advocating and that 6 months was an appropriate goal. However, as we noted in our report, the share of investments planning to deliver functionality every 6 months was low, and it would not always be practical for certain types of investments to deliver functionality every 6 months. We therefore continue to believe our recommendation is appropriate.

We also recommended that OMB develop and issue clearer guidance on incremental development to ensure that it has the necessary information to oversee the extent to which projects and investments are implementing its guidance. OMB took action to address this recommendation and issued capital planning guidance in fiscal year 2016 that requires agencies to report on whether each of their projects has delivered a production release every 6 months and to provide a rationale if functionality is not being delivered. In addition, we recommended that the five selected agencies—Defense, DHS, HHS, Transportation, and VA—update and implement their associated policies. Most agencies agreed with our recommendation or had no comment. As of September 2017, Defense, DHS, Transportation, and VA had addressed our recommendation.

In February 2015, we added improving the management of IT acquisitions and operations to our high-risk list, citing a lack of disciplined and effective management and inconsistent application of best practices to the successful acquisition of IT projects throughout the federal government. In particular, we noted the critical importance of implementing incremental development in order to reduce investment risk and called on federal agencies to ensure that a minimum of 80 percent of the government's major acquisitions deliver functionality at least every 12 months.

In August 2016, we reported on the status of incremental development and noted that, for fiscal year 2016, 22 agencies had reported on the IT Dashboard that 64 percent of their software development projects would deliver usable functionality every 6 months, as required by OMB. However, shortcomings in OMB's guidance—the lack of clarity regarding the types of projects where incremental development would not apply, and how the status of these nonsoftware projects should be reported—affected the accuracy of the data on the IT Dashboard. We therefore recommended in August 2016 that OMB clarify its existing guidance regarding what IT investments were and were not subject to requirements on the use of incremental development and how CIOs should report the status of projects that were not subject to these requirements.
OMB did not specifically agree or disagree with our recommendation, but stated that it generally agreed with our report. In April 2017, OMB staff reported that the agency had taken action and included language to address our recommendation in its fiscal year 2018 guidance; however, an analysis of that guidance showed that it still lacked direction on how CIOs are to report the status of nonsoftware projects.

In addition, for our August 2016 report, we reviewed seven departments' guidance and found that only three departments (Commerce, DHS, and Transportation) had policies and processes to ensure that the CIO would certify that IT investments were adequately implementing incremental development in accordance with FITARA. We therefore made recommendations to the remaining four departments (Defense, Education, HHS, and Treasury) to establish a policy and process for the certification of major IT investments' adequate use of incremental development, in accordance with OMB's guidance on the implementation of FITARA. Two departments concurred with our recommendation, one department disagreed, and one department did not comment. As of August 2017, none of the four departments had taken action to address the recommendation, as discussed later in the report.

We issued an update to our high-risk report in February 2017 and noted that, while progress has been made in addressing this high-risk area, significant work remains to be completed. For example, as of December 2016, OMB and agencies had implemented 366 (or about 46 percent) of the 803 open recommendations that we had made from fiscal years 2010 through 2015 related to IT acquisitions and operations. We also noted that agencies needed to make demonstrated progress in delivering functionality every 12 months on major acquisitions.

Further, in April 2017, we reported on the results of a forum, convened by the Comptroller General on September 14, 2016, to explore challenges and opportunities for CIOs to improve federal IT acquisitions and operations—with the goal of better informing policymakers and government leadership. Thirteen current and former federal agency CIOs, members of Congress, and private sector IT executives noted the importance of federal agencies' IT procurement offices and processes evolving to align with new technologies, as agencies are not always set up to take advantage of acquisitions using Agile development processes.

Agencies Reported That Most of Their Major Software Development Investments Were Certified as Having Adequate Incremental Development, but Continue to Face Challenges and Identify Benefits

Agencies reported to OMB through the IT Dashboard that more than half of their major software development investments were certified by the CIO as implementing adequate incremental development as of August 2016. For the remaining investments, the agencies offered various interpretations regarding what investments needed to be certified. For example, officials of several agencies reported that they were not utilizing incremental development for certain investments. In other instances, agencies did not provide a response to OMB regarding the question in the major IT business case about certification, or responded that they did not consider certification to be applicable for their investments.
However, based on OMB’s guidance, a number of these “not applicable” responses were incorrectly reported, as these agencies had investments that included software development and were, therefore, required to report on the certification of adequate incremental development. In addition, officials from a majority of the agencies reported that multiple challenges had impacted their ability to implement adequate incremental development. These challenges related to inefficient governance processes; procurement delays; the lack of stable, prioritized requirements; and organizational and cultural changes associated with the transition from a traditional software methodology to an incremental methodology. Nevertheless, officials from 21 agencies reported that the certification process was beneficial because they used the information obtained during the process to assist with management oversight of major IT investments, including identifying investments that could be using a more effective incremental approach and using lessons learned to improve the agency’s incremental processes. CIOs Certified 62 Percent of Major IT Investments as Having Adequate Incremental Development FITARA states that, in its annual IT capital planning guidance, OMB is to require CIOs to certify that IT investments are adequately implementing incremental development. In 2015, OMB defined adequate incremental development as the planned and actual delivery of new or modified technical functionality to users that occurs at least every 6 months for development of software or services. Further, OMB’s IT capital planning guidance for fiscal year 2017 required CIOs to certify whether their agencies’ major IT investments had adequately implemented incremental development for the current year. Specifically, agencies were to respond to a question in the major IT business case regarding whether the CIO certified adequate incremental development for each investment with a response of either yes, no, or not applicable. Agencies’ responses to this question are publicly reported by OMB on the IT Dashboard. As of August 31, 2016, 21 of the 24 agencies in our review had reported on the IT Dashboard a total of 166 major software development investments that were planned to be primarily in development for fiscal year 2017. Of these 166 investments, the agencies reported that 62 percent (103 investments) were certified by the CIO as using adequate incremental development for fiscal year 2017, as shown in table 1 in alphabetical order by department and agency. (For additional details on the certification status of the 166 investments, see appendix II.) For the remaining 63 investments, 8 agencies either reported in the major IT business case that the investment was not certified as adequately implementing incremental development or that certification was not applicable. Three other agencies did not provide a response to the question regarding certification in the major IT business case submitted to OMB. Figure 1 shows the breakdown of responses by agency regarding investments that were not certified as implementing adequate incremental development, as reported on the IT Dashboard. Officials in the Office of the CIO at each of the 3 agencies provided a variety of reasons for why the 11 investments were not certified as implementing adequate incremental development. For example, HHS officials noted that certain investments are required to meet complex statutory requirements and, thus, a 6-month release schedule is not always appropriate for them. 
Interior officials stated that their investment had just been categorized as a major investment and that, at the time of the submission of certification status, a baseline had not been approved. The officials stated, however, that the baseline has since been approved and the investment is expected to deliver functionality every 6 months. Further, SSA officials reported that 3 investments were not software development initiatives, even though 2 of these investments had been inaccurately reported as such on the IT Dashboard.

Regarding the 33 investments for which the 3 agencies did not provide a response in the major IT business case for the investment, officials from each agency's Office of the CIO attributed the lack of a response to either data entry errors or the agency not being required to publicly report this information for the investments. In particular, USDA and Treasury officials reported that the lack of certification data on the IT Dashboard was the result of a data entry error. Treasury officials also stated that the agency's missing responses were due to a lack of administrative oversight in reviewing the data for accuracy and consistency. The officials noted that the Treasury CIO had certified all of the agency's investments, but that the proper response had not been selected in some investments' business cases. Defense officials reported that 16 investments were categorized as national security systems and, therefore, were exempt from public reporting on the IT Dashboard (though not exempt from acquisition policies regarding the use of incremental development). The officials said that they did not provide a response on the remaining 7 investments because 1 investment was not a software development effort and the other 6 investments were designated as major automated information systems and, therefore, the agency did not have to submit business cases to OMB with this information.

Lastly, officials from the Office of the CIO at 7 agencies reported a variety of reasons why they had provided a response of "not applicable" for 19 investments. For example, Interior officials stated that, at the time of the certification submission, the investment did not have any approved development projects and, therefore, the agency had indicated not applicable in its response for the one investment. However, the officials stated that the investment's projects have since been approved and the CIO has reviewed the investment and certified adequate incremental development. For the remaining 18 investments at the other 6 agencies (Commerce, DHS, Education, Energy, HHS, and Transportation), officials from each agency's Office of the CIO reported that the majority of the projects associated with their investments were not primarily related to software development, or that they were using either a non-incremental development methodology or a mixed non-incremental/incremental development methodology. As a result, the officials believed the certification of adequate incremental development was not applicable, even though at least one project within each of the investments involved software development. However, based on OMB's guidance, these "not applicable" responses for the 18 investments were incorrectly reported, and the agencies should have provided either a "yes" or "no" response to the certification question because the investments included software development.
Specifically, OMB’s fiscal year 2017 capital planning guidance states that certification of incremental development applies to any investment that is developing software or services, as noted in its definition of adequate incremental development. In addition, staff in OMB’s Office of E-Government and Information Technology stated that a “not applicable” response to the question was only acceptable in cases where software development was not occurring, such as an investment related to infrastructure or technology refreshment of equipment. Staff in the Office of E-Government and Information Technology acknowledged the need for more meaningful oversight of agencies’ use of incremental development and stated that, beginning in fiscal year 2018, OMB will no longer require agencies to report CIO certification information in their investments’ major IT business cases or on the IT Dashboard. Rather, OMB staff stated that agencies would be required to separately provide the certifications needed to demonstrate compliance with FITARA. OMB’s revised approach and agencies’ implementation of OMB’s guidance are further discussed later in this report. Regardless of the reporting requirements in place, it remains critical that federal agencies report accurate incremental development information to OMB because of OMB’s plans to use this information for investment management and oversight. However, our September 2016 work has highlighted the poor quality of data related to incremental development at the project level, including whether a project is delivering a release every 6 months. Specifically, we reviewed seven agencies’ major IT software projects and found inconsistencies that affected the accuracy of the reported rates of delivery for all agencies—and at least a 10 percentage point difference in the reported rate on the IT Dashboard for five of these agencies. We therefore made recommendations to the seven agencies to improve their reporting of incremental development data on the IT Dashboard. Having accurate data on agency investments’ use of incremental development is critical for providing oversight and management of these investments and to ensure that OMB and lawmakers can hold CIOs accountable for the investments’ performance. We have previously made recommendations to Commerce, Defense, DHS, Education, HHS, Transportation, and Treasury to improve the accuracy of reporting on the IT Dashboard and continue to believe these recommendations are appropriate. In addition, until Energy, SSA, and USDA improve their reporting of incremental development data on the IT Dashboard, their efforts to improve the use of incremental development may not be successful. As a result, the agencies increase the risk that the potential impact of utilizing incremental development to more quickly deliver useful functionality to users and improve the likelihood that these multimillion dollar projects will meet their stated goals, may not be realized. Multiple Challenges Were Commonly Identified by Agencies as Impacting the Delivery of Incremental Functionality The majority of the 24 agencies in our review reported that multiple challenges had impacted their ability to adequately implement incremental development for their major IT software development investments. In particular, when presented with a list of challenges identified by our past work on incremental development, 21 of the agencies selected seven common challenges to developing investments incrementally. 
Each of these seven challenges was selected by 5 or more agencies. For example:

14 agencies identified problems with the overutilization of program staff and the lack of skills and experience as their top challenge;

6 agencies reported that development work was slowed by inefficient governance and oversight processes;

5 agencies reported that development schedules were impeded by procurement delays; and

5 agencies identified the lack of stable, prioritized requirements as a challenge.

In addition, 3 agencies identified a new challenge which had not been described in our prior work. Specifically, they reported that the organizational and cultural changes associated with the transition from a traditional waterfall software methodology to an incremental methodology required more time and resources to implement than anticipated. Table 2 summarizes the common challenges identified by agencies and the number of agencies that reported each challenge, ranked by the number of agencies reporting the challenge. Examples of the challenges, and the actions taken to overcome them, are discussed following the table.

Project staff were overutilized or lacked the necessary skills and experience. Officials from the Office of the CIO at 14 agencies (DHS, Education, EPA, GSA, Justice, NASA, NRC, OPM, SBA, SSA, State, Treasury, USAID, and VA) reported challenges in implementing incremental development practices associated with project staff, such as a lack of staff with the necessary skills and experience in utilizing incremental approaches, inadequate training on these approaches, overutilization of business or subject matter experts, and a lack of engagement between product owners and subject matter experts. To address these challenges, agency officials reported implementing new approaches, such as training programs focused on incremental development, coaching strategies to assist project managers in managing acquisitions, and new hiring practices. For example, among these agencies:

DHS officials reported that project staff's lack of the necessary skills and experience to understand the requirements for managing major IT acquisitions is an ongoing issue, related not only to incremental development but also to IT program and project management more broadly. The officials stated that they had developed an acquisition coaching and assistance strategy that was intended to establish an experienced team of acquisition coaches who were up-to-date on the latest acquisition, contracting, and development techniques to assist project managers in managing the acquisitions. The officials stated that they hoped to present lessons learned and recommendations on this strategy to the agency's Agile working group in summer 2017.

Treasury officials reported a significant need for specialized engineers, architects, and developers with skills in older programming languages to maintain the agency's many legacy systems. For example, officials noted that the agency is modernizing its core taxpayer account processing applications, which were written in antiquated programming languages, by moving them to more modern platforms. Treasury officials noted that they have been shifting staff to meet immediate needs; augmenting teams with contractors, where possible; and hiring new staff to fill critical open positions. Nevertheless, the officials said they have had to slow work on four key projects and delay the launch of other projects. In addition, the officials stated that they are relying more on contractors to meet the agency's staffing needs.
EPA officials noted that, as the agency transitions from waterfall software development approaches to Agile-based approaches, it needs more skilled staff with experience in Agile development. These officials stated that the agency's CIO had taken several actions to address this challenge, including creating an Office of Digital Services and Technical Architecture to promote Agile and user-centered design, establishing a fellowship program to bring outside Agile experts into the project teams, and creating a blanket purchase agreement to allow agency project teams to purchase Agile programming and consulting services.

According to NRC officials, one of the greatest incremental delivery challenges has been the difficulty of engaging sufficient business area product owners and subject matter experts. For example, the officials explained that, despite product owners' enthusiasm for increased engagement with developers, the demands of the agency's core mission work present challenges for these owners in being available for meetings related to Agile development activities. NRC officials informed us that the agency had addressed the challenge by working to establish a predictable, recurring schedule for product owner and subject matter expert engagement on development projects, with expectations about time commitments communicated to management.

Further, officials from a number of the 14 agencies that experienced this challenge reported varying approaches to implementing new incremental development training. For example, Treasury officials stated that the agency has developed in-house training for existing developers to meet the needs of its modernized programs. Education officials noted that the agency identified a select team of IT professionals within the agency to receive formal training in incremental development practices. Further, VA officials told us that its Enterprise Program Management Office is focused on training IT personnel on incremental development principles. Finally, SSA officials reported that the agency had launched a training program that had sent hundreds of developers through a 6-week boot camp, which included courses in incremental development and modern coding languages.

Programs did not receive sufficient funding or received funding later than needed. Officials from the Office of the CIO at nine agencies (GSA, NASA, OPM, SSA, State, Treasury, USAID, USDA, and VA) reported challenges associated with programs not receiving sufficient funding or not receiving funding until late in the fiscal year. These challenges were a result of changing funding priorities, budget cuts, and continuing resolutions, which disrupted delivery schedules and required agencies to delay, reprioritize, or discontinue the rollout of particular investments or modernization activities. Agencies reported adopting various approaches to overcome the challenges in this area, such as delaying project schedules, developing alternate plans for delivering functionality, and using flexible contracting strategies. For example:

USDA officials reported that funding for a number of projects was not available until late in the fiscal year, which impacted project schedules. The officials stated that one component agency addressed the funding delay by adjusting schedule start dates for projects relative to the current fiscal year, which helped to improve schedule projections.
OPM officials told us that they had faced challenges in performing work on incremental projects due to a lack of available resources caused by delays in receiving funding. The officials stated that they addressed this challenge by developing alternate plans for delivering incremental functionality with a different scope or focus for the system.

VA officials reported that they faced challenges with funding IT efforts that span multiple years. The officials noted that administrative priorities often change over time, impacting the level of funding approved in subsequent years to undertake incremental development projects. To address this, officials noted that they used flexible contracting strategies, such as including options that allow the government to continue a contract only when funding is assured, adjusting a contract's time frames and schedules to match delays, designing contracts so that a vendor is paid based on completion of measured functionality, and using the change request process to contribute funding to other projects.

Treasury officials stated that the lack of a dedicated funding commitment had led to difficulties in longer-term strategic planning for IT improvements. The officials stated that resources assigned to certain IT projects had to be leveraged for legislatively mandated investments, causing delays and pauses for those projects. As a result, the officials reported that the agency had been reviewing core initiatives and infrastructure programs, such as hardware and software refreshes and process improvements, to determine whether it could scale back their scope or lengthen their schedules. The officials said that at least one program has been formally paused.

Projects experienced management and organizational challenges that introduced delays. Officials from the Office of the CIO at seven agencies (Commerce, Interior, NASA, NRC, NSF, SBA, and Transportation) reported that management and organizational challenges had introduced delays in delivering functionality to users. These challenges included delays in testing and meeting delivery schedules due to dependencies on other systems or projects and a lack of approved software or appropriate equipment. Agency officials reported implementing various approaches to overcome these challenges, such as addressing external dependencies, tailoring development processes, and providing waivers for the acquisition of software and hardware. For example:

Commerce officials reported that they faced organizational challenges in meeting scheduled delivery time frames due to delays with another project that was not ready for testing. In particular, the officials reported that one of their systems was ready for testing but experienced delays because the system had an interface with another system that was not ready for testing. The officials said that the delay in Commerce's ability to test its system resulted in missed delivery milestones. In order to continue development, the project team separately tested its system without including the interface functionality.

NRC officials reported that they had experienced delays in meeting their incremental projects' delivery schedules due to dependencies on multiple complex projects. These officials told us that the agency addressed these delays by improving existing processes and implementing a change control board and an enterprise test development environment.
SBA officials reported that delays were introduced when the agency did not have the necessary software and hardware available for development activities. Officials noted that these challenges were a result of the agency not maintaining an updated inventory of approved software and of developers not having access to the laptops needed for development activities. SBA officials stated that the agency addressed the lack of approved software and equipment needed for incremental development by processing a waiver to use software tools and by procuring laptops for the developers.

Incremental development work was slowed by inefficient governance and oversight processes. Officials from the Office of the CIO at six agencies (DHS, HUD, NRC, State, USAID, and USDA) reported that they had experienced challenges in developing projects incrementally because they were required to follow agency processes that were lengthy, inefficient, or not easily adaptable to a more rapid incremental delivery release schedule. Agency officials also noted that a lack of understanding among project staff regarding the benefits of incremental development was a challenge. The officials reported implementing new guidance and management processes to overcome these challenges. For example:

DHS officials reported that inefficient governance and oversight processes had caused delays in obtaining the approvals necessary for moving projects forward. Specifically, these officials reported that the agency's acquisition lifecycle framework did not allow for tailoring any of its processes to accommodate Agile development. The officials noted that these challenges were addressed with the publication of updated lifecycle documents that incorporated incremental development guidance into the agency's policies and procedures.

HUD officials reported that the agency's internal approval process for the Privacy Act System of Records Notice did not accommodate incremental releases. Specifically, the agency's incremental development process called for the release of functionality every 60 days, but the agency's Privacy Office required 90 to 180 days to complete its approval process. HUD officials reported that the Office of the CIO is collaborating with the Privacy Office to expedite the existing approval process and has proposed that a single system of records notice be prepared for each incremental development project, rather than one for each release.

USAID officials reported that a challenge was the time needed to define and incorporate changes in response to the IT security and privacy standards, processes, and artifacts that must be addressed before a system is granted an Authority to Operate. These officials stated that the Office of the CIO has acquired additional knowledgeable staff to support projects in incorporating and executing security and privacy requirements.

State officials reported that applying incremental development principles to projects has been a challenge because agency personnel have lacked a clear understanding of the benefits of incremental development and of how to apply incremental concepts to unique project types. These officials reported that the agency was updating its guidance and processes to place greater emphasis on the importance of incremental development, and that the agency had established a review process to ensure projects plan for implementing incremental development.

Project characteristics made rapid delivery of functionality infeasible or impracticable.
Officials from the Office of the CIO at six agencies (Interior, Justice, Labor, SSA, Transportation, and Treasury) reported that they believed rapid delivery of functionality was infeasible or impracticable for projects that addressed human health and safety concerns, had legislative mandates that established immovable delivery time frames, were primarily for infrastructure deployment, were updates to existing systems to address legal or other regulatory changes, or were updates to legacy systems written in old programming languages. However, none of the agencies identified solutions for these challenges that enabled them to deliver functionality in the 6-month time frames required by OMB. For example:

Transportation officials noted that Federal Aviation Administration projects, such as those for its Next Generation Air Transportation System, are unique and complex due to safety concerns that impact the national airspace. As a result, these investments require years of design, development, and testing, which officials believe precludes using incremental approaches that must deliver usable functionality every 6 months.

Labor officials reported that certain projects, which are initiated in response to an executive order or other external mandate, come with required delivery time frames. This results in relatively short development schedules that do not lend themselves to an incremental approach.

Justice officials reported that several of the agency's investments primarily dealt with the deployment of secure telecommunications, data centers, and other network infrastructure, making it difficult to translate that delivery into meaningful increments. Justice officials stated that they did not use incremental development because the projects were infrastructure projects.

Treasury officials reported that the development and maintenance of some major investments, such as the agency's legacy tax systems, are not conducive to a 6-month delivery schedule due to the number of modifications that must be made based on changes to the tax laws, legislative mandates, and other system updates. Treasury officials stated that the agency has established a mature governance process for rolling out changes to these tax systems so that there is only one annual update to the systems.

SSA officials stated that using an incremental software development approach to modernize the agency's legacy applications was challenging because the code for these applications was unstructured, overly complex, and heavily interdependent, and was written in old programming languages. The officials stated that, in order to modernize these legacy applications, the project teams had to break programming changes into useful segments, streamline embedded business process requirements, and rewrite the code using modern programming languages. As a result, the officials stated that these activities could not, at least initially, deliver functionality in smaller increments.

Incremental development schedules were impeded by procurement delays. Officials from the Office of the CIO at five agencies (Education, HUD, Interior, OPM, and USDA) reported that they had experienced challenges with meeting incremental development schedules due to delays in getting contracts awarded or contract modifications approved. To overcome this challenge, agency officials reported that they negotiated with vendors and worked with their agencies' procurement offices to reduce delays and ensure all paperwork was completed in the required time frames.
For example:

Education officials reported that the agency uses contractors to perform most of its software development work. These officials stated that modifying existing contracts to require the use of incremental development approaches had caused delays in getting vendors to deliver functionality in 6-month increments. Education officials reported that they had negotiated with vendors to restructure delivery schedules in order to meet incremental delivery time frames.

HUD officials reported that they had faced challenges in meeting project schedules due to delays in getting paperwork approved by the agency's procurement office, which was busy with end-of-year activities. To address this, HUD officials stated that they collaborated with the Office of the Chief Procurement Officer to ensure the project teams submitted the required documentation for approval in advance of the procurement office's end-of-year activities.

OPM officials noted that they had faced challenges with adapting their procurement process to use incremental approaches. The officials stated that they worked with their Office of Procurement to incorporate incremental development procurement methodologies in order to reduce the time from contract initiation to award, as well as to reduce the amount and complexity of contract documentation.

Programs did not have stable, prioritized requirements. Officials from the Office of the CIO at five agencies (DHS, Justice, NSF, Transportation, and VA) reported challenges in maintaining stable requirements, including defining a set of initial requirements, handling ongoing changes, and managing stakeholder expectations regarding the scope of, and number of, changes to requirements. To overcome these challenges, agencies reported strengthening standards, implementing training and coaching, and exercising better requirements and business practices. For example:

DHS officials stated that managing stakeholder expectations related to requirements was challenging because product owners and business users expected project requirements not to change once they were developed, while development teams, which were using an incremental approach, had planned for requirements to change and be reprioritized over the course of the project. These officials reported that they issued new guidance and offered assistance and coaching for programs and projects to better identify and document needs and requirements, while encouraging business users to plan for and prioritize the backlog of items to be deployed incrementally.

Justice officials reported that it was a challenge to finalize the scope of work for various projects because disparate stakeholders had competing priorities, which led to constant changes in the requirements. The officials noted that, for one of the agency's projects, the project team is establishing a process to obtain consensus on stakeholder priorities in advance. For other projects, Justice teams have sought or received training from experienced, certified Agile experts in developing customer requirements.

NSF officials reported that, when first establishing its incremental development program, the agency had experienced challenges in defining a stable set of priority requirements for the initial increments. The officials told us that, to address this challenge, they elevated customers to fill the leadership roles of the working groups that provided the requirements, to ensure the requirements of each increment were well defined and clearly prioritized.
VA officials reported that, while the agency has transitioned to Agile development methods over the past several months, it is still working through challenges in developing detailed user stories with its business partners, and they reported many instances when a project was undertaken without knowing the full scope of its requirements. VA officials reported that they took several actions to help address this challenge, including introducing a new development methodology to promote incremental development principles and establishing an account management office that works with business partners to ensure detailed business cases are prepared prior to approval. They also integrated more rapid prototyping into the planning stages as a way to gather requirements and test assumptions early and cheaply.

Organizational changes associated with the transition from a traditional software methodology to an incremental development methodology require time and resources. Officials from the Office of the CIO at three agencies (EPA, GSA, and Labor) independently reported challenges related to organizational changes, such as staff adapting to the culture shift from being business customers to taking on a more active role as product owners and project managers in the software development process. For example:

EPA officials stated that the agency had experienced challenges as staff transitioned from using waterfall development practices to Agile practices because there had been skepticism within the agency about whether an Agile approach could meet the requirements for agency systems. The officials stated that the CIO had established an office to provide support to project teams that needed assistance in adopting Agile approaches, created a community of practice group, and developed guides and maturity models to provide guidance on the adoption of Agile methodologies.

GSA officials explained that implementing incremental delivery has required a culture shift for the agency's business customers, who were accustomed to having a different set of roles and responsibilities in the traditional software development process than what is used in the incremental development process. The officials stated that they have worked to train their customers to better capture the vision of what needs to be built and to be more active product owners and managers in communicating with the development team. As a result, officials in the GSA Office of the CIO stated, the business customers are becoming better product owners. The officials further stated that, by implementing this change, project staffs have (1) defined and prioritized clearer requirements; (2) selected the proper technical tools to support business needs; (3) worked with the contracting office to develop better-defined contracting documents and make contract awards; (4) identified dependencies associated with development efforts; and (5) provided transparency on what work has been completed, what work is planned, and the challenges associated with the investments.

Additionally, three agencies (Defense, Energy, and HHS) reported no challenges with implementing incremental development. However, officials from all three agencies discussed issues surrounding the use of incremental development, both as part of this review and as part of our prior work.
In particular, Energy officials had told us that they had projects that failed to adequately employ incremental development practices, which required follow-up with program managers to identify corrective actions. Also, both Defense and HHS officials have reported facing management and organizational challenges, such as dependencies on integrating changes with other systems, which impacted the delivery of functionality every 6 months. Defense officials noted that many of the agency's investments were complex and could not adhere to a 6-month delivery schedule.

Federal investments may continue to encounter increased cost and schedule risks if agencies cannot adequately implement incremental development approaches. The discussion of challenges identified in this report, and the range of actions taken by the agencies to address them, is a resource that may help agencies facing similar concerns.

Agencies Reported Using Information from the Incremental Certification Process to Improve Investment Management Oversight

Although a number of agencies identified challenges in utilizing incremental development, officials in the Office of the CIO at 21 of the 24 agencies also reported that the CIO certification process was beneficial to their agencies because it had assisted them in overseeing the management of agency investments. For example, officials from 13 agencies reported that they used the information derived from the certification process to identify challenged development projects that could be using a more effective incremental development approach, and officials from 2 agencies stated that the information helped them determine whether an investment should undergo a TechStat review. Table 3 lists the four benefits that federal agencies reported from utilizing the CIO certification process and the number of agencies that reported each, ranked by the number of agencies reporting the benefit. Examples of the benefits agency officials identified from these investment management oversight activities are discussed following the table.

More effective use of incremental development approaches. Officials from the Office of the CIO at 13 agencies (Defense, DHS, Education, Energy, EPA, GSA, Interior, NASA, NRC, SBA, SSA, Transportation, and USDA) stated that they review the information about an investment's use of incremental development to identify projects that could be implementing a more effective incremental development approach. For example, Energy, GSA, and SBA officials stated that they review projects not using adequate incremental development in order to identify necessary corrective actions, such as: (1) breaking out projects into shorter duration activities; (2) implementing the use of investment reviews, whereby funds are released incrementally upon completion of clear success criteria; (3) developing major IT investment business cases that outline project plans for incremental development; and (4) monitoring new and existing investments to ensure delivery of capabilities within schedule and cost thresholds.
In addition, DHS, NASA, NRC, and SSA officials reported that the CIO uses the information to correct projects that are not adequately implementing incremental development through such actions as the CIO's office: (1) working with project team officials to convert project activities to an incremental approach; (2) requiring any deviations from approved releases of software development products to be approved by the CIO; (3) requiring projects that deviate from the use of adequate incremental development principles to be approved by the CIO; and (4) determining which investments must use incremental development, and requiring the projects to do so.

Provide oversight of IT investments. Officials from the Office of the CIO at seven agencies (Commerce, Interior, Labor, OPM, NSF, State, and VA) stated that they use the information to provide oversight of IT investments. In particular, Interior and NSF officials reported that their CIOs use the information obtained during the performance measurement baseline approval process to make decisions regarding the agency's major IT investments. For Interior, officials stated that the types of decisions the CIO may make include, but are not limited to, accelerating delivery, reducing scope, or halting or terminating an IT project. For NSF, officials stated that the decisions could result in changes to program objectives or the scope of individual projects under a program, redirection of resources, changes to planned levels of expenditure, or recommendations for corrective actions based on the evaluation. In addition, Commerce officials stated that investment data are reviewed by the CIO on a monthly basis and, based on the status, an investment can undergo further scrutiny at a review board meeting or through another CIO review process. Labor officials noted that the agency's capital planning team updates the CIO's rating and explanation for each major IT investment in the agency's capital planning and investment control system and submits the rating information to the IT Dashboard each month.

Improve incremental development processes. Officials from the Office of the CIO at five agencies (DHS, EPA, HUD, Justice, and USDA) stated that they leveraged the information to improve their incremental development processes. For instance, USDA officials reported that they leveraged the results of the certification process to build an incremental development community of practice. DHS officials stated that they developed coaching and other assistance to help convert projects to an incremental process. Lastly, Justice officials stated that they utilized the results of the certification process to: (1) develop best practices and lessons learned on using incremental development, (2) establish additional training, and (3) establish mentoring programs or other familiarization with incremental techniques to support business improvement.

Determine if a TechStat is warranted. Officials from the Office of the CIO at two agencies (Labor and SBA) stated that they use the results of the certification process to determine whether an investment should undergo a TechStat review. In particular, Labor officials stated that if an investment is rated as high risk for 3 consecutive months during the review process, then a TechStat is initiated. In addition, SBA officials noted that, as part of their certification process, the Office of the CIO portfolio management team meets with the CIO to determine if any IT investments should have a TechStat review.
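Trigger rules like the one Labor described lend themselves to simple automation. The following minimal sketch, assuming a hypothetical rating history and the 3-consecutive-month threshold Labor officials cited, illustrates how such a rule could be checked mechanically; it is not a depiction of any agency's actual review system.

# Minimal sketch of a trigger rule like the one Labor officials described:
# initiate a TechStat review when an investment is rated high risk for
# 3 consecutive months. The ratings and data layout are illustrative.

THRESHOLD = 3  # consecutive high-risk months that trigger a review

def needs_techstat(monthly_ratings, threshold=THRESHOLD):
    """Return True if 'high risk' appears in `threshold` consecutive ratings."""
    streak = 0
    for rating in monthly_ratings:
        streak = streak + 1 if rating == "high risk" else 0
        if streak >= threshold:
            return True
    return False

# Six months of hypothetical ratings submitted through the review process.
history = ["moderate", "high risk", "high risk", "high risk", "moderate", "low"]
print(needs_techstat(history))  # True: months 2 through 4 form a 3-month streak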
Given the significant size of the federal government's annual investment in IT and the often disappointing results from IT development efforts, finding innovative ways to improve the quality and timeliness of agencies' IT investments may help these development efforts succeed. The discussion of the benefits identified with using the certification process, and the range of management oversight activities taken by the agencies, may help agencies improve their management and oversight of IT acquisitions.

Most Agencies Lack Detailed CIO Certification Policies and OMB Has Improved Related Reporting Guidance

Of the 24 agencies in our review, only 4 had clearly defined processes and policies to ensure that the CIO will certify that major IT investments are adequately implementing incremental development. The remaining 20 agencies either did not include details such as the role of the CIO in the certification process or how certification would be documented, or had not yet finalized a policy. In addition, OMB's fiscal year 2018 guidance was not clear regarding what actions agencies should take to demonstrate compliance with FITARA's certification requirement. However, OMB issued its new fiscal year 2019 guidance in August 2017, which addressed the weaknesses we identified.

Only 4 of 24 Agencies Have Clearly Defined a Policy for CIO Certification of Incremental Development

A provision in FITARA, enacted in December 2014, states that, in its annual IT capital planning guidance, OMB is to require agency CIOs to certify that IT investments are adequately implementing incremental development. Subsequent OMB guidance on the law's implementation, issued in June 2015, directed agency CIOs to define processes and policies for their agencies to ensure that they certify that IT resources are adequately implementing incremental development. As part of the guidance, OMB defined adequate incremental development as the development of software or services, with planned or actual delivery of new or modified technical functionality to users occurring at least every 6 months.

OMB's guidance allows agencies the flexibility to define the processes that CIOs use for ensuring the certification of adequate incremental development. For example, CIOs can rely on internal governance processes, such as investment and capital planning processes, to evaluate agency investments for adequate use of incremental development. In addition, agency CIOs are to use OMB's definition of adequate incremental development when developing their certification processes and determining whether to certify that their investments met these criteria. While OMB's guidance does not specify what elements should be included in these certification policies and processes, GAO's Information Technology Investment Management framework notes that policies and procedures should be clearly defined, including the roles of appropriate stakeholders, and should have appropriate artifacts to document the decisions made.

Although OMB's requirement has been in place since June 2015, only 4 of the 24 agencies we reviewed (Commerce, DHS, Energy, and Transportation) have clearly defined processes and policies intended to ensure that their CIOs certify that major IT investments are adequately implementing incremental development.
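Because OMB's definition rests on a concrete, checkable criterion, delivery of functionality to users at least every 6 months, part of a certification process can be reduced to a mechanical review of a project's delivery history. The following minimal sketch illustrates one way such a check could work; the dates, the 183-day approximation of 6 months, and the function itself are illustrative assumptions rather than any of these agencies' actual certification procedures.

# Minimal sketch: verify a project's planned or actual deliveries against
# OMB's definition of adequate incremental development (new or modified
# functionality delivered to users at least every 6 months).

from datetime import date

MAX_GAP_DAYS = 183  # roughly 6 months

def delivers_incrementally(release_dates, start):
    """Return True if no gap between consecutive deliveries exceeds ~6 months."""
    previous = start
    for released in sorted(release_dates):
        if (released - previous).days > MAX_GAP_DAYS:
            return False
        previous = released
    return True

# An 8.5-month gap between the second and third releases fails the check.
releases = [date(2016, 3, 1), date(2016, 8, 15), date(2017, 5, 1)]
print(delivers_incrementally(releases, start=date(2016, 1, 1)))  # False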
All 4 agencies' policies contained all of the elements that we evaluated in the agency guidance: descriptions of the role of the CIO in the process; descriptions of how the CIO's certification will be documented; and definitions of incremental development and time frames for delivering functionality consistent with OMB guidance. However, the remaining 20 agencies did not have clearly defined processes and policies in place because their documentation either did not describe the CIO's role in the certification process or how certification would be documented; did not define incremental development and provide delivery time frames consistent with OMB guidance; or had not yet been finalized. The results of our analysis of agencies' policies are shown in figure 2, and additional details regarding the status of the 24 agencies' incremental policies are provided in appendix III.

The four agencies that had clearly defined policies for certification took a variety of approaches to defining how the CIOs would conduct the review and certification of major IT investments, determining how certification would be documented, and ensuring that OMB's guidance regarding the definition of adequate incremental development and delivery time frames was followed. Specifically:

Commerce's capital planning guidance requires bureau CIOs or other accountable officials to review project documentation regarding project deliverables and issue an e-mail or other time-stamped document that certifies the adequate implementation of incremental development. In addition, Commerce's guidance adheres to OMB's requirement for delivery time frames of every 6 months or less and sets forth a definition of adequate incremental development that is consistent with OMB guidance.

DHS's technical investment review guidance states that the CIO is to conduct a review of each investment using an investment review checklist that includes information provided by project managers as to whether the investments have used incremental development adequately. The CIO is to certify whether the project is implementing incremental delivery at least every 6 months and document this certification in the checklist. DHS guidance also includes a definition of adequate incremental development and time frames for delivering functionality that are consistent with OMB guidance.

Energy's capital planning guidance states that the CIO is to review and certify each investment's adequate use of incremental development as part of monthly investment review board meetings and during the monthly review of IT Dashboard data. The status of this certification is documented in the agency's monthly investment summary spreadsheet. In addition, Energy's guidance adheres to OMB's definition of adequate incremental development and its associated delivery time frames.

Transportation's investment management guidance states that the CIO is to conduct a review of each investment as part of the investment review board process; this board is co-chaired by the agency CIO. The CIO is to certify adequate incremental development in the signed investment decision review document. In addition, Transportation's guidance adheres to OMB's definition of adequate incremental development and delivery time frames.

However, the remaining 20 agencies did not have clearly defined policies and processes in place to ensure CIOs are certifying each major IT investment's adequate use of incremental development.
In particular, while officials from the Office of the CIO at 11 agencies asserted that they had a policy for CIO certification, these policies lacked details, such as a description of the role of the CIO in the process, a description of how certification would be documented, and definitions of incremental development and delivery time frames consistent with OMB guidance. Table 4 details our evaluation of the certification policies provided to us by the 11 agencies.

Officials in the Office of the CIO at each of the 11 agencies provided a variety of reasons why their policies lacked details regarding the role of the CIO in the process and how certification would be documented, or did not include definitions of incremental development and delivery time frames. For example, State officials reported that updating their policies to comply with FITARA was not seen as a priority until Congress conducted its own evaluation of incremental development in May 2016. They stated that their new policy was being finalized, but they provided no time frames for its completion. However, we could not determine whether the new guidance will address the issues we identified because the excerpts of the draft policy and proposed guidance that State provided to us did not include any details in those areas.

In addition, GSA officials stated that they had used existing governance bodies and processes to determine whether an investment would be certified. The officials stated that they did not see a reason to create a separate policy for CIO certification, since the agency always considers using incremental development for new projects and certifies each investment in the major IT business case. Further, OPM officials stated that their agency had been on a path to address the FITARA requirements, but progress was slowed due to the lack of a budget for fiscal year 2017. The officials stated that they intend to update the agency's policies, but had no firm plans for doing so pending the availability of budgetary resources.

Lastly, NSF officials stated that they have not seen the need for a policy on CIO certification, for several reasons. First, the officials reported that NSF is a small agency with few large IT investments, many of which are legacy systems in operations and maintenance rather than in development; therefore, the agency has not had many occasions for the CIO to need to certify adequate incremental development for major IT investments. Second, the officials stated that the NSF CIO is actively involved in the investment review process, so they did not feel a policy was needed to describe these activities. Third, NSF officials stated that it is their belief that policies are generally only required to correct something that is not working. Lastly, NSF officials stated that the agency's definition of an Agile sprint was its definition of incremental development. However, sprints are not released directly to users, and therefore this definition is not consistent with OMB guidance. The officials said they might reconsider developing a policy, but did not provide a time frame for doing so.

Finally, 9 agencies had not yet finalized a CIO certification policy.
Office of the CIO officials in each of these agencies reported that they had relied on existing IT governance processes and budget mechanisms, or had created new targeted IT reviews, to determine the CIO certification for fiscal year 2017 that was reported on the IT Dashboard. For example, HHS officials reported that the agency used existing project and investment milestone reviews as part of its enterprise performance lifecycle to determine whether an investment would be certified as having adequate incremental development. SBA officials told us that the agency's portfolio management team met with investment managers during the monthly update process for the IT Dashboard, while USAID officials noted that the agency's CIO reviews the incremental development status of all major investment software development projects on a monthly basis. Further, Justice officials reported that the IT Investment Oversight Manager's staff reviewed the major business cases and requested justification for software development investments that were not: (1) using an iterative or Agile methodology, (2) expected to have a production release containing usable functionality every 6 months, or (3) showing an actual or planned date for deployment to production within a 6-month time frame.

In addition, while six of these agencies reported plans to finalize a policy for CIO certification by December 2017, one agency reported that its policy would be finalized in 2018, and two agencies did not provide a time frame for finalizing a policy. Figure 3 shows the agencies' reported time frames for finalizing a policy on CIO certification of incremental development.

Officials from each agency's Office of the CIO provided a variety of reasons why they had not yet developed or finalized policies for CIO certification of adequate incremental development. For example, EPA officials stated that the agency has been focusing on standing up the programs and structures needed to support incremental development and thus has not prioritized developing a policy. In addition, EPA officials stated that they had not developed a definition of functionality or associated time frames, but that their guidance points to industry standards. SBA officials stated that, since the majority of the agency's investments were in operations and maintenance, they did not see the need for policies or procedures on incremental development. In addition, HUD, NASA, and USAID officials reported that their agencies were in the process of finalizing policies but had experienced delays due to the number of stakeholder comments or limited staff resources.

Lastly, Defense officials stated that they had included information in their fiscal year 2018 budget submission guidance for component CIOs to certify adequate incremental development and were working to incorporate this process into their Financial Management Regulations, which were to be finalized in the first quarter of fiscal year 2018. However, the officials stated that the agency's process is driven by its efforts to comply with whatever process OMB requires in the annual capital planning guidance and, thus, they would not have a certification policy separate from the budget guidance. Additionally, Defense officials reported that, for their agency's investments, delivery every 12 to 18 months was more appropriate than the 6 months that OMB requires.
Nevertheless, while Defense officials believe that 12- to 18-month delivery cycles are more appropriate for their work, OMB's guidance requires agencies to deliver functionality at least every 6 months and does not allow for exceptions. We previously recommended that Defense establish a policy on CIO certification of incremental development. Until this guidance is finalized, Defense may not be able to ensure that incremental development practices are adequately implemented at the agency. We therefore continue to believe the recommendation is appropriate.

Annual CIO certification of incremental development is critical to ensuring that agency CIOs exercise the proper authority and oversight over their agencies' major IT investments. Having appropriate authority and oversight helps to create IT systems that add value and are aligned with agencies' missions, while reducing the risks associated with low-value and wasteful investments. In the absence of clearly defined policies, agencies continue to run the risk of failing to deliver major investments in a cost-effective and efficient manner. We have previously made recommendations to Defense, Education, HHS, and Treasury to establish CIO certification policies but, as noted in this report, these agencies have not yet finalized guidance that clearly details their processes for certification. Therefore, we continue to believe these recommendations are appropriate.

Moreover, the agencies that lack finalized policies may not be able to meet their reported time frames for finalizing their certification policies: agency officials noted that their approval processes are quite lengthy and, in some cases, the proposed dates for completion have changed several times. In addition, several policies were still being developed, so we cannot be assured that these documents will fully address the areas we noted. Until the 20 agencies update or finalize processes and policies for CIO certification, including defining the role of the CIO in the process, describing how certification will be documented, and including definitions of incremental development and delivery time frames consistent with OMB guidance, they will not be able to fully ensure adequate implementation of, or benefit from, incremental development practices. As a result, the agencies increase the risk that federal government resources will not be used in the most effective and efficient manner.

OMB Has Improved Its IT Capital Planning Guidance to Ensure CIO Certification Reporting Clearly Specifies Agency Responsibilities

FITARA states that OMB is to require, in an agency's annual IT capital planning guidance, that each covered agency CIO certify that IT investments are adequately implementing incremental development, as defined in capital planning guidance issued by OMB. However, since the law was enacted in December 2014, OMB has taken three different approaches to addressing this reporting requirement, one of which did not clearly and consistently provide agencies with the direction needed to effectively implement this important provision and report the status of certification. As previously noted, OMB's fiscal year 2017 IT capital planning guidance (issued in June 2015) required each major IT investment to respond to a question in the associated major IT business case regarding whether the CIO certified the adequate implementation of incremental development with either a yes, no, or not applicable.
This reporting approach required that agency CIOs provide an explicit statement regarding the certification of adequate implementation of incremental development for each major IT investment. Further, this approach allowed the status of CIO certification of each investment to be publicly reported on the IT Dashboard via the investment's major IT business case.

However, OMB's capital planning guidance for fiscal year 2018 (issued in June 2016) lacked clarity regarding how agencies were to address the requirement for certifying adequate incremental development. While the 2018 guidance states that agency CIOs are to provide the certifications needed to demonstrate compliance with FITARA, it makes no specific reference to the provision requiring CIO certification of adequate incremental development. As a result of this change, OMB placed the burden on agencies to know and understand how to demonstrate compliance with FITARA's incremental development provision. Further, because of the lack of clarity in the guidance as to what agencies were to provide, OMB could not demonstrate how the fiscal year 2018 guidance ensured that agencies provided the certifications specifically called for in the law. OMB staff explained that they changed the fiscal year 2018 capital planning guidance with the intent of relying on agencies' reported responses on the IT Dashboard regarding the use of incremental development by an investment's projects, rather than on an agency's yes, no, or not applicable response about the status of an investment's certification of incremental development.

Providing a clear and consistent approach for agencies to follow in reporting the status of certification is critical to ensuring that agencies are able to comply with this key FITARA provision and that CIOs are held accountable for the performance of their major IT investments. OMB staff from the Office of E-Government and Information Technology stated that the fiscal year 2019 guidance would be responsive to the issues we raised. Accordingly, in August 2017, OMB issued its fiscal year 2019 guidance, which addressed the weaknesses we identified in the previous fiscal year's guidance. Specifically, the revised guidance requires agency CIOs to make an explicit statement regarding the extent to which the CIO is able to certify the use of incremental development, and to include a copy of that statement in the agency's public congressional budget justification materials. As part of the statement, an agency CIO must also identify which specific bureaus or offices are using incremental development on all of their investments.

Agency CIO certification of the use of adequate incremental development for major IT investments is critical to ensuring that agencies are making the best effort possible to create IT systems that add value while reducing the risks associated with low-value and wasteful investments. The changes in OMB's fiscal year 2019 guidance provide a key improvement for ensuring that agency CIOs have a consistent approach to follow in providing the certifications specifically called for in the law.

Conclusions

One of the aims of FITARA was to encourage the use of incremental development throughout the federal government and, as of August 2016, more than half of the 24 agencies' IT investments had been certified as adequately implementing incremental development, as required by FITARA and defined in OMB guidance.
However, a number of responses for agency investments were incorrectly reported, and it will be critical that agencies continue to improve the accuracy of the investment data reported on the IT Dashboard. While we have previously made recommendations to numerous agencies to improve the accuracy of reporting on the IT Dashboard, issues with reporting remain, reinforcing the need for agencies to ensure that accurate data are available for the oversight and management of their investments.

In addition, while OMB issued guidance in June 2015 requiring agency CIOs to define policies and processes for CIO certification, as of August 2017, only 4 of 24 agencies had established policies that clearly define these processes. At this point, more than 2 years after the law's enactment, it is critical that agencies put appropriate incremental certification policies in place to ensure that CIOs exercise the proper authority and oversight over major IT investments, as required by law. Otherwise, agencies run the risk of not realizing the benefits of incremental development, as well as not implementing FITARA's incremental development requirement. While we previously made recommendations to Defense, Education, HHS, and Treasury to establish CIO certification policies, these agencies still have not finalized their guidance; therefore, we continue to believe these recommendations are appropriate.

Further, OMB has taken three different approaches to addressing FITARA's reporting requirement for CIO certification, and one did not clearly and consistently provide agencies with the direction needed to effectively implement this important provision and report the status of certification. OMB's fiscal year 2017 capital planning guidance was helpful to agencies, in that it clearly directed them on how to publicly report their certifications; this also helped Congress in its oversight of agencies' FITARA compliance. In contrast, OMB's fiscal year 2018 capital planning guidance was a step backward, and OMB could not demonstrate how that guidance ensured that agencies provided the certifications specifically called for in the law. Going forward, the changes in guidance that OMB has implemented for fiscal year 2019 recognize the importance of providing clear direction to CIOs and how critical it is for agencies to create IT systems that add value while reducing the risks associated with low-value and wasteful investments.

Recommendations for Executive Action

We are making a total of 19 recommendations to 17 departments and agencies in our review. Specifically:

The Secretary of Energy should ensure that the CIO of Energy reports major IT investment information related to incremental development accurately, in accordance with OMB guidance. (Recommendation 1)

The Secretary of Agriculture should ensure that the CIO of USDA reports major IT investment information related to incremental development accurately, in accordance with OMB guidance. (Recommendation 2)

The Commissioner of the Social Security Administration should ensure that the CIO of SSA reports major IT investment information related to incremental development accurately, in accordance with OMB guidance.
(Recommendation 3) The Secretary of Housing and Urban Development should ensure that the CIO of HUD establishes an agency-wide policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development and time frames for delivering functionality, consistent with OMB guidance. (Recommendation 4) The Secretary of the Interior should ensure that the CIO of Interior updates the agency’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development, consistent with OMB guidance. (Recommendation 5) The Attorney General of the United States should ensure that the CIO of Justice establishes an agency-wide policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development and time frames for delivering functionality, consistent with OMB guidance. (Recommendation 6) The Secretary of Labor should ensure that the CIO of Labor updates the agency’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes a description of the CIO’s role in the certification process and a description of how CIO certification will be documented. (Recommendation 7) The Secretary of State should ensure that the CIO of State updates the agency’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development and time frames for delivering functionality, consistent with OMB guidance. (Recommendation 8) The Secretary of Agriculture should ensure that the CIO of USDA establishes an agency-wide policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development and time frames for delivering functionality, consistent with OMB guidance. 
(Recommendation 9)

The Secretary of Veterans Affairs should ensure that the CIO of VA updates the agency’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes a description of the CIO’s role in the certification process and a description of how CIO certification will be documented. (Recommendation 10)

The Administrator of EPA should ensure that the CIO of EPA establishes an agency-wide policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development and time frames for delivering functionality, consistent with OMB guidance. (Recommendation 11)

The Administrator of GSA should ensure that the CIO of GSA updates the agency’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes a description of the CIO’s role in the certification process and a description of how CIO certification will be documented. (Recommendation 12)

The Administrator of NASA should ensure that the CIO of NASA establishes an agency-wide policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development and time frames for delivering functionality, consistent with OMB guidance. (Recommendation 13)

The Director of NSF should ensure that the CIO of NSF updates the agency’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development and time frames for delivering functionality, consistent with OMB guidance. (Recommendation 14)

The Chairman of NRC should ensure that the CIO of NRC establishes an agency-wide policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes a description of the CIO’s role in the certification process and a description of how CIO certification will be documented. (Recommendation 15)

The Director of OPM should ensure that the CIO of OPM updates the agency’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes a description of the CIO’s role in the certification process and a description of how CIO certification will be documented.
(Recommendation 16)

The Administrator of SBA should ensure that the CIO of SBA establishes an agency-wide policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development and time frames for delivering functionality, consistent with OMB guidance. (Recommendation 17)

The Commissioner of the Social Security Administration should ensure that the CIO of SSA updates the agency’s policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes a description of the CIO’s role in the certification process and a description of how CIO certification will be documented. (Recommendation 18)

The Administrator of USAID should ensure that the CIO of USAID establishes an agency-wide policy and process for the CIO’s certification of major IT investments’ adequate use of incremental development, in accordance with OMB’s guidance on the implementation of FITARA, and confirm that it includes: a description of the CIO’s role in the certification process; a description of how CIO certification will be documented; and a definition of incremental development and time frames for delivering functionality, consistent with OMB guidance. (Recommendation 19)

Agency Comments and Our Evaluation

We received comments on a draft of this report from OMB and the 24 agencies that we reviewed. Of the 17 agencies to which we made recommendations, 11 agencies agreed with our recommendations, 1 agency partially agreed, and 5 agencies did not state whether they agreed or disagreed with the recommendations. In addition, of the 7 agencies and OMB to which we did not make recommendations, 2 agencies agreed with the report and 5 agencies stated that they had no comments on the report. OMB did not agree with certain findings in the report. In addition, OMB and multiple agencies provided technical comments on the report, which we incorporated as appropriate. The following discusses the comments received from each agency to which we made a recommendation. In written comments, Energy concurred with our recommendation to ensure that the CIO reports major IT investment information related to incremental development accurately in accordance with OMB guidance, and described actions it has taken to address the recommendation. Specifically, the agency stated that its Office of the CIO reviews the accuracy of Energy’s major IT investment project reporting related to incremental development as part of monthly IT Dashboard and Investment Review Board meetings. By taking these actions, the agency considered the recommendation closed. As noted earlier in our report, we identified issues with the accuracy of Energy’s reported data related to the certification of incremental development. If Energy consistently and effectively implements its reviews of IT Dashboard data, as described, these actions should help to improve the accuracy of reported incremental development data on the IT Dashboard. We plan to continue to monitor the agency’s reporting of its incremental development data on the IT Dashboard and, accordingly, consider our recommendation to remain open at this time. Energy’s comments are reprinted in appendix IV.
In written comments, HUD concurred with our recommendation to establish an agency-wide policy and process for CIO certification of adequate incremental development and stated that it would provide more definitive information and timelines on how it plans to address the recommendation once our final report is issued. HUD’s comments are reprinted in appendix V. In written comments, Interior stated that the agency concurred with our recommendation to update the agency’s policy and process for CIO certification of adequate incremental development and described planned actions to implement it. Specifically, the agency reported that it is committed to updating its existing policy to include a description of the CIO’s role in the incremental development certification process, a description of how the CIO’s certification is documented, and a definition of incremental development, consistent with OMB’s guidance. Interior’s comments are reprinted in appendix VI. In an e-mail received on September 15, 2017, an audit liaison specialist in Justice’s Audit Liaison Group in the Internal Review and Evaluation Office stated that the agency agreed with our recommendation to establish an agency-wide policy and process for CIO certification of adequate incremental development and described planned actions to implement it. Specifically, the official stated that Justice will amend existing policy and processes to implement this recommendation. In addition, the official stated that Justice is fully supportive of incremental development and has drafted documentation, including guidance on an incremental system development life cycle. In an e-mail received on September 5, 2017, an administrative officer in Labor’s Office of the Assistant Secretary for Administration and Management stated that the agency had no comments on the report. In written comments, State did not say whether the agency agreed or disagreed with our recommendation to update the agency’s policy and process for CIO certification of adequate incremental development, but described ongoing actions to implement it. Specifically, the agency reported that it has developed an incremental development policy that addresses the recommendation we noted in our report. The agency added that the policy is currently in the process of being approved. State’s comments are reprinted in appendix VII. In an e-mail received on September 1, 2017, a senior advisor in the USDA Office of the CIO’s Enterprise Management office stated that the agency concurred with our findings and recommendations to report major IT investment incremental development information accurately and to establish an agency-wide policy and process for CIO certification of adequate incremental development, and had no further comments. In written comments, VA partially concurred with our recommendation to update the agency’s policy and process for CIO certification of adequate incremental development, stating that, while the agency does not currently have a policy in place outlining the CIO certification process, the agency CIO does direct that all investments utilize Agile and incremental delivery. The agency stated that it would take action to address our recommendation by drafting a policy that outlines the CIO’s role in the certification process and describes how certification will be documented. The agency added that the policy is targeted for completion by November 2017. If implemented as planned, these actions should address the intent of our recommendation. 
VA’s comments are reprinted in appendix VIII. In written comments, EPA stated that the agency generally agreed with our recommendation to establish an agency-wide policy and process for CIO certification of adequate incremental development, and with the presentation of facts in the report. The agency also noted that the policy developed in response to our recommendation is to address FITARA issues above and beyond the certification of incremental development. In addition, the agency noted a technical correction to a sentence in our report related to EPA’s use of information from certification. We have incorporated changes to the draft, as appropriate, to address this comment. EPA’s comments are reprinted in appendix IX. In written comments, GSA agreed with our recommendation to update the agency’s policy and process for CIO certification of adequate incremental development and reported that it would develop and implement a plan to fully address it. GSA’s comments are reprinted in appendix X. In written comments, NASA concurred with the recommendation to establish an agency-wide policy and process for CIO certification of adequate incremental development and described ongoing actions to implement it. Specifically, the agency stated that it is currently updating its policies to address the incremental development requirement. In this regard, NASA Policy Directive 2800.1 is to include a responsibility for the Office of the CIO to certify that IT resources are adequately implementing incremental development. In addition, NASA Policy Directive 7120.7 is being updated to include a definition of incremental development and processes for ensuring that the CIO certifies incremental development. According to the agency, these policies are estimated to be completed by March 2018. NASA’s comments are reprinted in appendix XI. In an e-mail received on September 14, 2017, a senior advisor in NSF’s Office of the Director/Office of Integrative Activities stated that the agency had no comments on our report. In written comments, NRC stated that it was in general agreement with the findings in our report. The agency did not state whether it agreed or disagreed with our recommendation to establish an agency-wide policy and process for CIO certification of adequate incremental development, but described the planned action to implement the recommendation. Specifically, the agency reported that it plans to establish agency-wide, formalized processes and procedures for the CIO to approve the incremental development of major IT investments by December 31, 2017. NRC’s comments are reprinted in appendix XII. In written comments, OPM concurred with the recommendation to update the agency’s policy and process for CIO certification of adequate incremental development and described planned actions to implement it. Specifically, the agency reported that it intends to update its policies and processes to include a description of the CIO’s role in the certification process and a description of how certification will be documented. OPM’s comments are reprinted in appendix XIII. In an e-mail received on September 11, 2017, a program manager in SBA’s Office of Congressional and Legislative Affairs stated that the agency concurred with our recommendation to establish an agency-wide policy and process for CIO certification of adequate incremental development, and had no further comments.
In written comments, SSA agreed with our two recommendations to report major IT investment incremental development information accurately and establish an agency-wide policy and process for CIO certification of adequate incremental development, and described actions being taken or planned to implement them. Specifically, the agency reported that it had implemented two new processes to support incremental development certification. According to the agency, each IT investment program manager is to answer a series of questions about the investment’s status and also certify whether their investment adequately implements incremental development. This information is to be used in the CIO’s ongoing investment evaluation process for reporting investment information on the IT Dashboard. SSA reported that these new processes are to be defined in an upcoming revision to the agency’s Capital Planning and Investment Control Guide. SSA’s comments are reprinted in appendix XIV. In written comments, USAID did not state whether it agreed or disagreed with our recommendation to establish an agency-wide policy and process for CIO certification of adequate incremental development, but described ongoing actions to implement the recommendation. Specifically, the agency reported that it is in the process of establishing an agency-wide policy and process for the CIO’s certification of adequate incremental development. It estimates that this policy will be implemented by August 31, 2018. USAID’s comments are reprinted in appendix XV. In addition to the aforementioned comments, the seven agencies and OMB to which we did not make recommendations provided the following responses. In written comments, Commerce stated that the agency concurred with the report as written. Commerce’s comments are reprinted in appendix XVI. In an e-mail received on September 7, 2017, a GAO Affairs staff member in Defense’s Executive Services Directorate stated that the agency had no formal comments on the report. In an e-mail received on September 8, 2017, a staff member in Education’s Office of the Secretary/Executive Secretariat stated that the agency had no comments on the report. In an e-mail received on September 11, 2017, an audit liaison in HHS’s Office of the Assistant Secretary for Legislation stated that the agency had no comments on the report. In an e-mail received on September 11, 2017, a program analyst in DHS’s GAO-Office of Inspector General’s Liaison Office stated that the agency would not be sending a management response letter. In an e-mail received on September 8, 2017, the Director of Audit Relations and Program Improvement in Transportation’s Office of the Secretary stated that the agency would not be providing a written management response. In an e-mail received on September 15, 2017, a supervisory IT specialist/GAO-Office of Inspector General liaison in Treasury’s Office of the CIO stated that the agency generally agreed with the report. The agency also provided comments related to various challenges discussed in the report. Specifically, the official described Treasury’s efforts to address challenges noted in the report related to project staff lacking the necessary skills for implementing incremental development practices and programs not receiving sufficient funding.
In this regard, the official stated that the agency continues to develop knowledge, skills, and abilities for project managers and IT specialists and continues to provide specialized programming training to its IT staff in order to move to more modern programming languages and IT tools as part of system modernization efforts. In addition, the official stated that, to address challenges related to programs receiving sufficient funding, Treasury continues to adjust planned and ongoing projects to align with the availability of funds and external mandates. In an e-mail received on September 19, 2017, an OMB Assistant General Counsel stated that the agency generally disagreed with the tone, tenor, and conclusions of law reflected in aspects of our report. Among the concerns was that we had asserted that OMB’s prior year’s guidance to agencies on CIO certification of incremental development was not in compliance with OMB’s statutory obligations under FITARA. As our report states, FITARA mandates OMB to include in its annual IT capital planning guidance, a requirement that CIOs certify that investments are adequately implementing incremental development as defined in the guidance. We reported that OMB had issued guidance for fiscal years 2017, 2018, and 2019. However, we noted that the fiscal year 2018 guidance differed from the guidance issued in the other two fiscal years in that it did not clearly establish how agency CIOs were to demonstrate compliance with FITARA’s certification of adequate incremental development provision. Instead, the fiscal year 2018 guidance placed the burden on agencies to know and understand how to implement the FITARA requirement. Thus, while we concluded that OMB’s fiscal year 2018 guidance was not clear on how agencies were to certify adequate incremental development, we did not assert that this guidance failed to comply with FITARA. Accordingly, we did not make a conclusion of law regarding OMB’s guidance, as the e-mail stated. We continue to believe that our assessment of the fiscal year 2018 guidance is correct. OMB also stated that it disagreed with our conclusion that OMB could not demonstrate compliance with FITARA. However, our report did not make the conclusion that is stated in OMB’s response. As noted above, our report pointed out that OMB’s fiscal year 2018 guidance lacked clarity in terms of specifically stating what information agencies were to provide OMB in order to be compliant with FITARA’s requirement that agency CIOs certify incremental development. Therefore, we concluded that OMB could not demonstrate how the fiscal year 2018 guidance ensured that agencies provided the certifications specifically called for in the law. As such, we continue to believe that our conclusion is appropriate. Further, OMB stated that our conclusion was predicated on OMB’s reluctance to share agency pre-decisional budget information. It is up to OMB to demonstrate that its fiscal year 2018 guidance ensured agency compliance with FITARA. Though OMB asserted that our conclusion was based on OMB’s reluctance to share agency pre-decisional budget information, our conclusion was instead based on the fact that OMB provided no documentary evidence to establish how agencies complied with the FITARA certification requirement for fiscal year 2018. Consequently, we believe our assessment that OMB could not demonstrate how the fiscal year 2018 guidance ensured that agencies provided the certifications specifically called for in the law is accurate.
In a subsequent e-mail to us on October 4, 2017, the OMB Assistant General Counsel provided additional comments related to the disagreements described above. Specifically, OMB stated that our report’s “focus on the use of the term ‘certification’ was confusing in that [it] appears to reference the term ‘certify’ [found in the FITARA provision on the adequate use of incremental development], and also seems to be a reference to the requirement that CIOs ‘approve’ and define development processes.” In our report, we discuss FITARA’s requirement that OMB annually issue capital planning guidance requiring agency CIOs to certify that IT investments are adequately implementing incremental development. We analyzed the guidance that OMB has issued to meet this requirement over the past 3 years, and we evaluated agencies’ progress in implementing that guidance. In doing so, we noted that OMB had also issued supplementary FITARA implementation guidance in June 2015 that required agencies to define policies and processes to ensure that the CIO certifies that IT resources are adequately implementing incremental development. Throughout our discussion, we clearly delineate between the incremental development certification provided to OMB by an agency’s CIO and the agency’s policies and processes that support and inform that certification. As such, we believe we have used the term “certification” appropriately and consistently throughout our report. We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the Secretaries and agency heads of the departments and agencies in this report, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XVII.

Appendix I: Objectives, Scope, and Methodology

Our objectives for this engagement were to determine (1) the number of investments certified by agencies as implementing adequate incremental development and any reported challenges that impact the agencies’ incremental delivery of functionality; and (2) whether agencies are establishing policies and processes for chief information officer (CIO) certification of incremental development in accordance with the Federal Information Technology Acquisition Reform Act provisions (commonly referred to as FITARA) enacted as a part of the Carl Levin and Howard P. ‘Buck’ McKeon National Defense Authorization Act for Fiscal Year 2015. For our first objective, we obtained and analyzed major information technology (IT) investment data reported by agencies on the IT Dashboard as of August 31, 2016, for fiscal year 2017, which was the first year that the Office of Management and Budget (OMB) required the 24 covered agencies to report the status of CIO certification of incremental development for each investment. We chose this date because it was the final day updated fiscal year 2017 data from the agencies would be publicly available until the release of the President’s fiscal year 2018 budget submission.
Initially, we analyzed the fiscal year 2017 data of major IT software development investments that were planning to allocate at least 50 percent of their funding to development, modernization, and enhancement activities. We then reviewed agency responses to the question regarding CIO certification of adequate incremental development and eliminated any investment where the agency’s rationale for choosing “not applicable” was due to the investment not undertaking software development activities. In doing so, we identified a total of 166 investments from 21 agencies. Three agencies (National Aeronautics and Space Administration, National Science Foundation, and U.S. Nuclear Regulatory Commission) out of the 24 in our review did not have any investments that met these criteria for fiscal year 2017. For the 21 agencies with major IT investments to review, we then determined the total number of investments that agencies reported were certified by the CIO for adequate incremental development. We also reviewed and summarized agency responses reported on the IT Dashboard for investments that did not have CIO certification. To help determine the reliability of the reported agency CIO certification data on the IT Dashboard, we presented the results of our analysis of CIO certification responses to officials from each agency’s Office of the CIO who were involved in investment management and software development activities and solicited their input and explanations for the results. Two agencies each provided an update on one of their investments, which we have incorporated as appropriate. We determined that the data were sufficiently reliable for the purpose of this report. In order to identify the challenges impacting the agencies’ incremental delivery of functionality, we developed a list of common challenges based on our prior work, in which eight agencies reported that the following eight challenges inhibited their delivery of functionality:

1. project staff were over-utilized or lacked the necessary skills and abilities;
2. programs did not receive sufficient funding or received funding later than planned;
3. projects experienced management and organizational challenges that introduced delays;
4. development work was slowed by inefficient governance and oversight processes;
5. project characteristics made rapid delivery of functionality infeasible or inappropriate;
6. development schedules were impeded by procurement delays;
7. programs did not have stable, prioritized requirements; and
8. incremental development was impeded by select technologies.

We sent the list of challenges to each of the 24 agencies and asked officials from the Office of the CIO at each agency involved with investment management and software development activities to identify their top three challenges from this list that impacted their ability to deliver incremental functionality for major IT investments. We also asked agency officials to identify any challenges that were not included in the list, but which were also among their top three challenges. Finally, we asked agencies to explain what actions were taken to address the reported challenges and describe the extent to which the challenges were overcome. Because of the open-ended nature of the agencies’ responses to our questions, we conducted a content analysis of the information we received in order to identify common challenges that impact agencies’ ability to deliver incremental functionality. In doing so, team members individually reviewed the challenges reported by agencies and assigned them to various categories.
Team members then compared categorization schemes, discussed the differences, and reached agreement on the final list of challenges by totaling the number of times each challenge was mentioned. For those challenges that were prompted by the list we provided to agencies, we reported challenges that were identified by five or more agencies. Three agencies also identified a new challenge that was not on our list, which we reported due to the number of agencies reporting it as a challenge. Three of the 24 agencies in our review (Departments of Defense, Energy, and Health and Human Services) reported that they had no challenges with implementing incremental development. We also asked the agencies in our review how the CIO utilized the information obtained during the process of certifying investments’ adequate incremental development to make decisions regarding the agency’s major IT investments. Because of the wide variety of responses we received from agencies, we conducted a content analysis of the information in order to identify ways the CIOs used the information. In doing so, team members individually reviewed agencies’ responses and assigned them to various categories. Team members then compared their categorization schemes, discussed the differences, and reached agreement on the final characterization of ways in which agencies benefited from the certification process. For our second objective, we analyzed the 24 agencies’ policies and processes governing the CIO certification of adequate incremental development to determine whether those policies and processes were consistent with FITARA. The provision states that OMB is to require in its annual IT capital planning guidance that agency CIOs covered by the law certify that IT investments are adequately implementing incremental development. To assess this, we reviewed guidance issued by OMB on the implementation of FITARA, and assessed agencies’ documentation of incremental development certification policies and processes against GAO’s IT investment management framework. This framework states that an organization’s policies and procedures should be clearly defined, in that they provide details regarding the role of appropriate stakeholders and the artifacts to document decisions made. Because of the wide variety of responses and documents we received from agencies related to their incremental development certification processes, we conducted a content analysis of the information in order to determine compliance with OMB’s guidance. In doing so, team members individually reviewed agencies’ responses and documents and assigned them to various categories and sub-categories. Team members then compared their categorization schemes, discussed the differences, and reached agreement on the final characterization of compliance with OMB guidance. In cases where agencies provided multiple policies or documents, we followed up to clarify which portions were considered by the agency to support the CIO certification requirement. In analyzing whether the agencies’ policies on CIO certification met FITARA, OMB, and GAO criteria, we assessed whether the policies clearly defined the role of the CIO in the certification of adequate incremental development, and described how CIO certification was documented. We also reviewed agencies’ incremental development policies and processes to identify the agencies’ definitions of incremental development and time frames for delivering functionality to determine whether they were consistent with OMB guidance. 
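To make the assessment concrete, the following is a minimal sketch of the four-criterion check described above, applied to a hypothetical policy record. The field names, record structure, and example values are our own illustrative assumptions, not GAO's actual analysis instrument; the actual work was a manual content analysis by team members. The sketch also anticipates the two reasons, discussed next, why a policy could be evaluated as not clearly defined.

```python
# Minimal sketch of the four-criterion policy assessment described above.
# The field names, record structure, and example values are hypothetical
# illustrations; GAO's actual analysis was a manual content analysis by
# team members, not an automated check.

CRITERIA = {
    "describes_cio_role": "description of the CIO's role in the certification process",
    "describes_documentation": "description of how CIO certification is documented",
    "defines_incremental_dev": "definition of incremental development consistent with OMB guidance",
    "has_delivery_time_frames": "time frames for delivering functionality consistent with OMB guidance",
}

def assess_policy(policy: dict) -> str:
    """Evaluate whether a policy clearly defines the CIO certification process."""
    if not policy.get("finalized", False):
        return "not clearly defined: policy not yet finalized"
    missing = [name for name in CRITERIA if not policy.get(name, False)]
    if not missing:
        return "clearly defined"
    return "not clearly defined: does not address " + ", ".join(missing)

# Hypothetical example record for a single agency
example = {
    "agency": "Agency X",
    "finalized": True,
    "describes_cio_role": True,
    "describes_documentation": True,
    "defines_incremental_dev": False,
    "has_delivery_time_frames": False,
}

print(assess_policy(example))
# -> not clearly defined: does not address defines_incremental_dev, has_delivery_time_frames
```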
Agencies found to not have a policy where the CIO process was clearly defined were evaluated as such for one of two reasons: either the agency’s formal policy did not completely address our assessment criteria or the agency’s policy had not yet been finalized. For agencies that told us they had not yet finalized a policy for certification, we asked them to explain the process, if any, used by the agency to certify major IT investments for fiscal year 2017. In addition, we interviewed staff from OMB’s Office of E-Government and Information Technology regarding its guidance to agencies related to FITARA’s incremental development certification provision. We conducted this performance audit from July 2016 to November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Federal Agency Major IT Investments’ Reported Chief Information Officer Certification of Incremental Development on the IT Dashboard for Fiscal Year 2017

Table 5 lists the 166 major information technology (IT) software development investments primarily in development, as reported on the IT Dashboard as of August 31, 2016, and the agency’s reported response to the question in the major IT business case regarding whether the agency’s Chief Information Officer certified the adequate use of incremental development for the investment for fiscal year 2017. All 166 investments reported in the major IT business case that the investment included software development.

Appendix III: Analysis of Federal Agency Chief Information Officer Incremental Development Certification Policies

Table 6 shows our analysis regarding whether the agency had policies and processes that clearly defined the Chief Information Officer (CIO) certification process for the adequate use of incremental development, including: (1) describing the CIO’s role in the certification process; (2) describing how CIO certification is to be documented; (3) having a definition of incremental development in the policy consistent with Office of Management and Budget (OMB) guidance; and (4) having time frames for delivering functionality in the policy consistent with OMB guidance.

Appendix V: Comments from the Department of Housing and Urban Development

Appendix VI: Comments from the Department of the Interior

Appendix VII: Comments from the Department of State

Appendix VIII: Comments from the Department of Veterans Affairs

Appendix IX: Comments from the Environmental Protection Agency

Appendix X: Comments from the General Services Administration

Appendix XI: Comments from the National Aeronautics and Space Administration

The report number GAO-17-556 has been changed to GAO-18-148.

Appendix XII: Comments from the U.S. Nuclear Regulatory Commission

Appendix XIII: Comments from the Office of Personnel Management

Appendix XIV: Comments from the Social Security Administration

The report number GAO-17-556 has been changed to GAO-18-148.

Appendix XV: Comments from the U.S. Agency for International Development

The report number GAO-17-556 has been changed to GAO-18-148.
Appendix XVI: Comments from the Department of Commerce

Appendix XVII: GAO Contact and Staff Acknowledgments

In addition to the individual named above, the following staff made key contributions to this report: Dave Hinchman (Assistant Director), Chris Businsky, Rebecca Eyler, Justin Fisher, Valerie Hopkins (Analyst in Charge), Sandra Kerr, James MacAulay, Jamelyn Payan, Priscilla Smith, and Andrew Stavisky.
Why GAO Did This Study

Investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages. Recognizing the severity of issues related to government-wide IT management, Congress enacted federal IT acquisition reform legislation in December 2014. Among other things, the law states that OMB require in its annual IT capital planning guidance that CIOs certify that IT investments are adequately implementing incremental development. GAO was asked to review agencies' use of incremental development. This report addresses the number of investments certified by agency CIOs as implementing adequate incremental development and any reported challenges, and whether agencies' CIO certification policies and processes were in accordance with FITARA. GAO analyzed data for major IT investments in development, as reported by 24 agencies, and identified their reported challenges and use of certification information. GAO also reviewed the 24 agencies' policies and processes for the CIO certification of incremental development and interviewed OMB staff.

What GAO Found

As of August 2016, agencies reported that 62 percent of major information technology (IT) software development investments were certified by the agency Chief Information Officer (CIO) for implementing adequate incremental development in fiscal year 2017, as required by the Federal IT Acquisition Reform Act (FITARA). However, a number of responses for the remaining investments were incorrectly reported due to agency error. Officials from 21 of the 24 agencies in GAO's review reported that challenges hindered their ability to implement incremental development, which included: (1) inefficient governance processes; (2) procurement delays; and (3) organizational changes associated with transitioning from a traditional software methodology that takes years to deliver a product, to incremental development, which delivers products in shorter time frames. Nevertheless, agencies reported that the certification process was beneficial because they used the information from the process to assist with identifying investments that could more effectively use an incremental approach, and with using lessons learned to improve the agencies' incremental processes. As of August 2017, only 4 of the 24 agencies had clearly defined CIO incremental development certification policies and processes that contained descriptions of the role of the CIO in the process and of how the CIO's certification will be documented, as well as definitions of incremental development and time frames for delivering functionality consistent with Office of Management and Budget (OMB) guidance. In addition, OMB's fiscal year 2018 capital planning guidance did not establish how agency CIOs are to make explicit statements to demonstrate compliance with FITARA's incremental provisions, while the 2017 guidance did. However, OMB's fiscal year 2019 guidance provides clear direction on reporting incremental certification and is a positive step in addressing this issue.

What GAO Recommends

GAO is making 19 recommendations to 17 agencies, including 3 to improve reporting accuracy and 16 to update or establish certification policies. Eleven agencies agreed with GAO's recommendations, 1 partially agreed, and 5 did not state whether they agreed or disagreed. OMB disagreed with several of GAO's conclusions, which GAO continues to believe are valid, as discussed in the report.
Available Data Indicate Native American Youth Involvement in Justice Systems Declined from 2010 through 2016 and Differed in Some Ways from That of Non-Native American Youth

In our September 2018 report, we found that from 2010 through 2016 the number of Native American youth in federal and state and local justice systems declined across all phases of the justice process—arrest, adjudication, and confinement—according to our analysis of available data. At the federal level, arrests by federal agencies dropped from 60 Native American youth in 2010 to 20 in 2016, and at the state and local level, arrests of Native American youth declined by almost 40 percent, from 18,295 arrested in 2010 to 11,002 in 2016. Our analysis also found that the vast majority of these Native American youth came into contact with state and local justice systems, not the federal system. For example, from 2010 through 2016, there were 105,487 total arrests of Native American youth reported by state and local law enforcement agencies (LEAs). In contrast, there were 246 Native American youth held in federal custody by the U.S. Marshals Service due to arrest by federal LEAs during the same period. We also found a number of similarities between Native American and non-Native American youth in state and local justice systems. For example, the offenses that Native American youth and non-Native American youth were arrested, adjudicated, and confined for were generally similar. In contrast, our analysis also showed a number of differences between Native American and non-Native American youth in the federal justice system. For example, our analysis showed variation in the types of offenses committed by each group. From fiscal years 2010 through 2016, the majority of Native American youth in the federal justice system were arrested, adjudicated, or confined for offenses against a person, with the top two specific offenses being assault and sex offenses. In contrast, the majority of involvement of non-Native American youth in the federal system during the same period was due to public order or drug and alcohol offenses at all three stages, with the top two specific offenses being drug and immigration related. Our September 2018 report contains additional information on the differences between Native American and non-Native American youth involved with the federal justice system. Further, we found that the percentage of Native American youth involved in most state and local systems was generally similar to their representation in the youth populations in those states. For example, our analysis found that the majority (about 75 percent) of Native American youth arrested by state and local LEAs from calendar years 2010 through 2016 were located in 10 states: Alaska, Arizona, Minnesota, Montana, New Mexico, North Dakota, Oklahoma, South Dakota, Washington, and Wisconsin. These 10 states had among the highest percentages of Native Americans in their states’ overall youth populations, according to 2016 U.S. Census estimates we reviewed. In 2016, the largest number of arrests by state and local LEAs occurred in Arizona and South Dakota. In contrast, we found that representation of Native American youth arrested, referred for adjudication, and confined at the federal level during the period reviewed was greater (13 to 19 percent) than their representation in the nationwide youth population (1.6 percent).
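As a quick check on the percent-change arithmetic behind the arrest figures above, the short sketch below reproduces the calculations from the counts reported in the text; the counts are taken from the report, and the rounding is ours.

```python
# Percent-change arithmetic behind the arrest figures cited above.

def percent_decline(start: int, end: int) -> float:
    """Return the percentage decrease from start to end."""
    return (start - end) / start * 100

# State and local arrests of Native American youth, 2010 vs. 2016
print(round(percent_decline(18295, 11002), 1))  # 39.9 -> "almost 40 percent"

# Federal arrests of Native American youth, 2010 vs. 2016
print(round(percent_decline(60, 20), 1))        # 66.7 percent decrease
```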
DOJ officials told us that the population of Native Americans in the federal justice system has historically been larger than their share of the nationwide population, and they attributed this and other differences shown by our analysis to federal government jurisdiction over certain crimes in Indian country, as well as the absence of general federal government jurisdiction over non-Native American youth. According to DOJ officials, this jurisdiction requires the federal government to prosecute offenses that would commonly be prosecuted by states if committed outside of Indian country. According to DOJ officials, a small handful of federal criminal statutes apply to all juveniles, such as immigration and drug statutes, but the federal government has been granted greater jurisdiction over Native American youth than non-Native American youth by federal laws that apply to crimes committed in Indian country, such as the Major Crimes Act. For example, one DOJ official noted that the Major Crimes Act gives the federal government exclusive jurisdiction over crimes such as burglary and sex offenses committed in Indian country. This differs from the treatment of non-Native American youth, who are not prosecuted in the federal system for the same types of offenses, because the federal government does not have jurisdiction over those youth for such offenses. Non-Native American youth are instead subject to the general juvenile delinquency jurisdiction of state and local courts. Additionally, DOJ officials stated that tribal justice systems are often underfunded and do not have the capacity to handle Native American youths’ cases. Therefore, they stated that when both federal and tribal justice systems have jurisdiction, the federal system might be the only system in which the youth’s case may be adjudicated. For these reasons, Native American youth offenders are represented in the federal justice system at a higher rate, relative to their population size, than non-Native American juveniles, according to DOJ officials. Representatives from four of the five Native American organizations we interviewed, whose mission and scope of work focus on Native American juvenile justice issues and that have a national or geographically specific perspective, noted that federal jurisdiction is a key contributor to the higher percentage of Native American youth involved at the federal justice level. Additionally, representatives from all five organizations noted, similarly to DOJ officials, that federal jurisdiction over crimes in Indian country is typically for more serious offenses (specifically under the Major Crimes Act), such as offenses against a person. Comprehensive data from tribal justice systems on the involvement of Native American youth were not available. However, we identified and reviewed a few data sources that provided insights about the arrest, adjudication, and confinement of Native American youth by tribal justice systems. See appendix II for a summary of our analysis of data from these sources.

DOJ and HHS Offered at Least 122 Grant Programs; Tribal Governments or Native American Organizations Were Eligible for Almost All, but in a Sample of Applications We Reviewed, Applied Primarily for Programs Specifying Native Americans

In our September 2018 report, we identified 122 discretionary grants and cooperative agreements (grant programs) offered by DOJ and HHS from fiscal years 2015 through 2017 that could help prevent or address delinquency among Native American youth.
DOJ and HHS made approximately $1.2 billion in first-year awards through the 122 programs over the period, of which the agencies awarded about $207.7 million to tribal governments or Native American organizations. A list of the 122 programs, which focus on a range of issues such as violence or trauma, justice system reform, alcohol and substance abuse, and reentry and recidivism, can be found in our September 2018 report. The 122 DOJ and HHS grant programs we identified included 27 programs that specified tribes or Native Americans as a primary beneficiary and 95 programs that did not specify these populations but could include them as beneficiaries. For example, the Department of Justice’s Office of Juvenile Justice and Delinquency Prevention offered the Defending Childhood American Indian/Alaska Native Policy Initiative: Supporting Trauma-Informed Juvenile Justice Systems for Tribes program for funding in fiscal year 2016. The goal of this program—increasing the capacity of federally recognized tribes’ juvenile justice and related systems to improve the life outcomes of youth who are at risk or who are involved in the justice system and to reduce youth exposure to violence—explicitly focused on tribal communities. On the other hand, the Sober Truth on Preventing Underage Drinking Act grant program, which HHS’s Substance Abuse and Mental Health Services Administration offered for funding in fiscal year 2016 to prevent and reduce alcohol use among youth and young adults, is an example of a program that did not specify tribes or Native Americans as a primary beneficiary but could include them as beneficiaries. We found that tribal governments and Native American organizations were eligible for almost all of the grant programs we identified. Specifically, they were eligible to apply for 70 of 73 DOJ programs and 48 of 49 HHS programs. However, although tribal governments and Native American organizations were eligible to apply for almost all of the programs, we found in a non-generalizable sample of applications we reviewed that they applied primarily for the programs that specified tribes or Native Americans as a primary beneficiary. For example, we reviewed applications for 18 DOJ grant programs and found that tribal governments and Native American organizations accounted for over 99 percent of the applications for the 5 grant programs within the sample that specified tribes or Native Americans as a primary beneficiary. However, tribal governments and Native American organizations accounted for about 1 percent of the applications for the 13 programs in the sample that did not specify tribes or Native Americans as a primary beneficiary. We interviewed officials from DOJ’s Office of Justice Programs (OJP) and seven HHS operating divisions to obtain their perspectives on why tribal governments and Native American organizations might not apply for grant programs that do not specify them as a primary beneficiary. They identified various reasons, including that tribal governments and Native American organizations might not be aware that they are eligible to apply for certain grant programs; might believe that their applications to grant programs that do not specify tribes or Native Americans as a primary beneficiary will not be competitive with other applications; or might prefer to apply for those grant programs that specify tribes or Native Americans as a primary beneficiary.
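For context on the award totals at the start of this section, the sketch below computes the approximate share of first-year award dollars that went to tribal governments or Native American organizations. Because the $1.2 billion total is rounded in the text, the computed share is approximate.

```python
# Approximate share of DOJ and HHS first-year award dollars (fiscal years
# 2015-2017) that went to tribal governments or Native American organizations.
# The $1.2 billion total is rounded in the text, so the share is approximate.

total_awards = 1_200_000_000   # approximately $1.2 billion
tribal_awards = 207_700_000    # about $207.7 million

share = tribal_awards / total_awards * 100
print(f"{share:.0f} percent")  # roughly 17 percent
```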
We also interviewed representatives from 10 tribal governments and Native American organizations, who provided perspectives on whether or not a grant program’s focus on tribes or Native Americans as a primary beneficiary affected their decision to apply for the program. Officials from 6 of 10 tribal governments and Native American organizations indicated that they would consider any grant program that met the needs of their communities, while the remaining 4 indicated that a grant program’s focus or lack thereof on tribes or Native Americans could affect their ability to apply for it. Officials from the 10 tribal governments and Native American organizations also identified various federal practices they found helpful or challenging when applying for grant programs related to preventing or addressing delinquency among Native American youth. When asked what federal practices, if any, were particularly helpful when applying to receive federal funding, they most frequently responded that they found it particularly helpful to be able to call or meet with federal officials if they had questions about or needed help on their applications. Regarding the biggest challenges, they cited short application deadlines, difficulties collecting data for grant program applications, and a scarcity of grant writers and other personnel needed to complete a quality application. In addition, DOJ OJP and HHS officials provided perspectives on why some tribal governments and Native American organizations might be more successful in applying for federal funding than others. The officials stated, among other things, that larger and better-resourced tribal governments and Native American organizations were more successful at applying for federal funding and that previously successful grant program applicants were more likely to be successful again. More detailed information on the perspectives from tribal governments, Native American organizations, and agency officials regarding the factors they believe affect the ability of tribal governments and Native American organizations to apply successfully for federal grant programs can be found in our September 2018 report. Chairman Hoeven, Vice Chairman Udall, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time.

Appendix I: Data Sources for Federal, State and Local, and Tribal Justice Systems by Phase of the Justice Process

For our September 2018 report, we obtained and analyzed record-level and summary data from federal, state and local, and tribal justice systems from 2010 through 2016. Figure 1 illustrates the data sources we included in our report for each phase of the justice process (arrest, adjudication, and confinement) in each justice system (federal, state and local, and tribal). Generally, state and local entities include those managed by states, counties, or municipalities.

Appendix II: GAO Findings Regarding American Indian and Alaska Native Youth Involvement with Tribal Justice Systems

Comprehensive data from tribal justice systems on the involvement of American Indian and Alaska Native (Native American) youth were not available. However, in our September 2018 report, we identified and reviewed a few data sources that can provide certain insights about the arrest, adjudication, and confinement of Native American youth by tribal justice systems. The following is a summary of our analysis of data from these sources.

Arrests.
Although comprehensive data on the number of tribal law enforcement agency (LEA) arrests were not available, we obtained and reviewed admission records from three juvenile detention centers in Indian country managed by the Department of the Interior’s Bureau of Indian Affairs (BIA). Based on those records, at least 388 Native American tribal youth were admitted to these three facilities in 2016, as shown in table 1. In the Northern Cheyenne facility, for which we obtained records for 5 years, the number of youth admitted increased yearly between 2012 and 2016, from 14 to 204. According to BIA officials, this growth in the number of youth admitted to the Northern Cheyenne facility likely reflects an increase in admissions of Native American youth from surrounding tribes. Specifically, because the Northern Cheyenne facility is centrally located, the officials said that the facility admits youth from other tribes, which have grown accustomed to sending their youth to the facility. BIA officials also noted that the Northern Cheyenne facility services an area where there is a high rate of delinquency among youth, and because the facility works well with Native American youth struggling with delinquency issues, many tribes elect to send their delinquent youth to the facility. Further, since 2012, the Northern Cheyenne facility increased its bed space and staff, thus increasing its capacity to admit more youth, according to BIA officials. Even though comprehensive tribal arrest data were not available, we reported in September 2018 that the Department of Justice’s (DOJ) Bureau of Justice Statistics (BJS) was undertaking an effort to increase collection of arrest data from tribal LEAs. Specifically, this data collection activity is the Census of Tribal Law Enforcement Agencies. This collection activity, which BJS plans to conduct in 2019, is to capture information including tribal LEA workloads and arrests, tribal LEA access to and participation in regional and national justice database systems, and tribal LEA reporting of crime data into FBI databases. Adjudication. Comprehensive data were not available to describe the extent to which tribal courts processed Native American youth or found them guilty. However, BJS concluded a tribal court data collection effort—the National Survey of Tribal Court Systems—in 2015. Through this survey, BJS gathered information from more than 300 tribal courts and other tribal judicial entities on their criminal, civil, domestic violence, and youth caseloads, and pretrial and probation programs, among other things. DOJ officials told us that BJS has analyzed the data, and plans to release results in the future. Confinement. According to data published by BJS, the number of youth in Indian country jails declined from 190 in 2014 to 170 in 2016 (about an 11 percent decrease).

Appendix III: GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Gretta L. Goodwin, Director, Homeland Security and Justice at (202) 512-8777 or goodwing@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Tonnye’ Conner-White, Assistant Director; Steven Rocker, Analyst-in-Charge; Haley Dunn; Angelina Torres; Taylor Matheson; Anne Akin; Paul Hobart; Jamarla Edwards; Claire Peachey; Eric Hauswirth; Heidi Neilson; Amanda Miller; and Elizabeth Dretsch.
Key contributors to the previous work on which this testimony is based are listed in our September 2018 report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

This testimony summarizes the information contained in GAO's September 2018 report, entitled Native American Youth: Involvement in Justice Systems and Information on Grants to Help Address Juvenile Delinquency (GAO-18-591).

What GAO Found

GAO's analysis of available data found that the number of American Indian and Alaska Native (Native American) youth in federal and state and local justice systems declined across all phases of the justice process—arrest, adjudication, and confinement—from 2010 through 2016. During this period, state and local arrests of Native American youth declined by almost 40 percent, from 18,295 in 2010 to 11,002 in 2016. The vast majority of Native American youth came into contact with state and local justice systems rather than the federal system. However, Native American youth were involved in the federal system at a rate greater than their percentage of the nationwide population (1.6 percent). For example, of all youth arrested by federal entities during the period, 18 percent were Native American. According to Department of Justice (DOJ) officials, this is due to federal jurisdiction over certain crimes involving Native Americans. Comprehensive data on Native American youth involvement in tribal justice systems were not available for analysis. GAO's analysis showed several differences between Native American and non-Native American youth in the federal justice system. For example, the majority of Native American youths' involvement was for offenses against a person, such as assault and sex offenses. In contrast, the majority of non-Native American youths' involvement was for public order offenses (e.g., immigration violations) or drug or alcohol offenses. On the other hand, in state and local justice systems, the involvement of Native American and non-Native American youth showed many similarities, such as similar offenses for each group. DOJ and the Department of Health and Human Services (HHS) offered at least 122 discretionary grants and cooperative agreements (grant programs) from fiscal years 2015 through 2017 that could be used to address juvenile delinquency among Native American youth. DOJ and HHS made approximately $1.2 billion in first-year awards to grantees during the period, of which the agencies awarded approximately $207.7 million to tribal governments or Native American organizations. Officials from the agencies, tribal governments, and Native American organizations identified factors they believe affect success in applying for grant programs. For example, some tribal governments and Native American organizations found being able to call or meet with federal officials during the application process helpful, but cited short application deadlines as a challenge.
Background Trucking Industry In 2016, commercial trucks transported about 70 percent of all U.S. freight, and over 250,000 heavy trucks were sold in the same year. These trucks operate within a diverse industry that can be distinguished in several ways: Long-haul vs. local-haul. Long-haul trucking operations are so named because the drivers frequently drive hundreds of miles for a single route and can be on the road for days or weeks at a time. For these operations, freight is usually shipped from a single customer and may fill an entire trailer by either space or weight. Long-haul trucking also includes "less-than-truckload" freight shipments, or freight combined from multiple customers. In comparison, local-haul trucking operations may involve delivering packages and shipments between a customer and a freight company's drop-off point, where they are combined with other shipments in preparation to move them over longer distances. This type of operation also includes local cement trucks, as well as moving shipping containers at ports and moving freight a short distance from a train that has transported it long-distance to near its destination. For-hire vs. private (in-house). Different types of companies—or carriers—engage in long-haul and local trucking and are known either as "for-hire" (those that transport goods for others) or "private" (those that transport their own goods in their own trucks). For instance, J.B. Hunt is a for-hire carrier that transports goods for clients, while Walmart is a private carrier that uses its in-house fleet of trucks to transport its own goods between its distribution centers and its stores. Carrier size. In addition, carriers vary in size, with fleets ranging from one truck to tens of thousands of trucks. For example, a person might own and drive one for-hire truck; these are known as "owner-operators." By contrast, the largest for-hire trucking companies in the country can have fleets of over 20,000 tractors and even more trailers. Operating costs. Driver compensation represents either the largest or second-largest cost component for truck carriers, depending on the price of fuel; each typically accounts for about one-third of total operating costs. Other operating costs include purchasing truck tractors and trailers, as well as repair and maintenance of the trucks and trailers, and insurance. Truck Drivers BLS data indicate that in 2017, the United States had nearly 1.9 million truck drivers categorized as "heavy and tractor-trailer truck drivers," who operate trucks over 26,000 pounds. This category includes many different kinds of drivers, including long-haul and local-haul, along with cement or garbage truck drivers and drivers of specialty loads, such as trucks transporting cars, logs, or livestock. The number of heavy and tractor-trailer truck drivers has increased over the last 5 years, from fewer than 1.6 million in 2012, and is projected to increase to about 2 million drivers by 2026. The trucking industry has also had high annual driver turnover, according to industry reports—approaching 100 percent for large truckload carriers, though it can be less for small truckload carriers. This turnover includes drivers who move to other carriers and others who leave the field altogether or retire. Some companies that experience lower turnover rates are able to provide drivers with predictable schedules and coordinate around the various obligations the drivers may have.
Firms must balance the costs of scheduling drivers to return home more frequently with the costs of high turnover rates. Industry reports have noted that companies find it difficult to hire and retain sufficient numbers of long-haul drivers, even with wages reportedly rising for many drivers. Heavy and tractor-trailer truck drivers make more on average—$44,500 in 2017—than other types of drivers, according to BLS data. Many drivers, including most drivers working in long-haul trucking, are compensated on a per-mile basis rather than a per-hour basis. The per-mile rate varies from employer to employer and may depend on the type of cargo and the experience of the driver. Some long-haul truck drivers are paid a share of the revenue from shipping. Truck Driver Training In order to operate certain commercial vehicles, including heavy trucks and tractor-trailers, drivers must obtain a state-issued commercial driver's license (CDL). DOT administers the federal CDL program through the Federal Motor Carrier Safety Administration by setting federal standards for knowledge and driving skills tests, among other requirements. CDL applicants must have a state motor vehicle driver's license and must be at least 21 years old to operate in interstate commerce. Prior to receiving a CDL, applicants must first pass the knowledge test and meet other federal requirements, after which they are eligible to pursue a commercial learner's permit. After receiving the learner's permit, applicants must wait at least 14 days before taking the skills test. During this period, applicants may train on their own with a CDL holder, with a truck driver training school—a private school or public program run through a community college, for example—or with a motor carrier to prepare for the skills test. Applicants must pass all three parts of the skills test—pre-trip inspection, basic control skills, and an on-the-road driving test—in the type of vehicle they intend to operate with their license. Apart from the CDL requirements, some truck driving jobs (such as those that involve handling hazardous materials) require additional endorsements, and some employers require on-the-job training. DOL and other federal agencies administer programs that can be used to provide training for truck drivers. For example, DOL administers federal employment and training programs, such as those funded through the Workforce Innovation and Opportunity Act (WIOA), which provide training dollars that can be used by prospective truck drivers, among others. Likewise, the Department of Education provides federal student aid funds that can be used at eligible accredited trucking schools, and DOT and the Department of Veterans Affairs both operate programs that can assist veterans interested in becoming truck drivers. Federal Regulation of Trucking Federal regulation of trucking is focused primarily on interstate trucking activity; states can have separate regulations related to intrastate motor carriers. DOT is the lead federal agency responsible for overall vehicle safety, including commercial truck safety. The agency also regulates other aspects of commercial trucking, such as the maximum number of hours truck drivers are allowed to drive. For example, under current hours of service regulations, a truck driver may drive a maximum of 11 total hours within a 14-hour window after coming on duty. In addition, DOT regulates CDL standards and the maximum weight of trucks allowed on the Interstate Highway System, among other things.
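To illustrate the arithmetic behind the hours of service rule described above, the following sketch in Python checks one duty day against the 11-hour driving limit and the 14-hour on-duty window. This is a minimal illustration under stated assumptions, not a DOT system or a full implementation of the regulations, which include rest-break and off-duty provisions not modeled here; the function name and simplified event model are our own.

```python
from datetime import datetime, timedelta

# Simplified limits from the hours of service rule described above:
# at most 11 hours of driving, all within a 14-hour window that opens
# when the driver comes on duty. (Illustrative only.)
MAX_DRIVING = timedelta(hours=11)
DUTY_WINDOW = timedelta(hours=14)

def check_duty_day(on_duty_start, driving_periods):
    """Return a list of any rule violations for one duty day.

    driving_periods is a list of (start, end) datetime pairs during
    which the driver was actually driving.
    """
    violations = []
    window_close = on_duty_start + DUTY_WINDOW
    total_driving = timedelta()
    for start, end in driving_periods:
        total_driving += end - start
        if end > window_close:
            violations.append(f"driving past 14-hour window at {end}")
    if total_driving > MAX_DRIVING:
        violations.append(f"total driving {total_driving} exceeds 11 hours")
    return violations

# Example: a driver comes on duty at 6:00 a.m. and drives two stretches
# totaling 10.5 hours, finishing before the 14-hour window closes.
day = datetime(2019, 1, 7, 6, 0)
periods = [
    (day + timedelta(hours=1), day + timedelta(hours=6)),     # 5.0 hours
    (day + timedelta(hours=7), day + timedelta(hours=12.5)),  # 5.5 hours
]
print(check_duty_day(day, periods) or "no violations")
```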
Until recently, DOT's National Highway Traffic Safety Administration led automated vehicles policy with a focus on passenger vehicles. However, DOT's October 2018 federal automated vehicles policy was developed by the Office of the Secretary of Transportation and includes several different modes of transportation, including automated commercial trucks. Automated Trucks Automated vehicles can perform certain driving tasks without human input. They encompass diverse automated technologies ranging from relatively simple driver assistance systems to self-driving vehicles. Certain automated features, like adaptive cruise control, can adjust vehicle speed in relation to other objects on the road and are currently available on various truck models. DOT has adopted a framework for automated driving developed by the Society of Automotive Engineers International, which categorizes driving automation into 6 levels (see fig. 1). Commercial trucks with Level 0 and 1 technologies, as outlined in figure 1, are already available for private ownership and are currently used on public roadways. Level 0 encompasses conventional trucks in which a human driver controls all aspects of driving; technologies at this level, such as lane departure warning, can alert drivers to safety hazards but do not take control away from the driver and are not considered automated. Level 1 technologies incorporate automatic control over one major driving function, such as steering or speed, and examples include adaptive cruise control and automatic emergency braking. The Society of Automotive Engineers International categorizes vehicles with Level 3, 4, and 5 technologies as Automated Driving Systems. At Level 3, the system can take full control of the vehicle in certain conditions. However, a human driver must maintain situational awareness at all times to ensure the vehicle is functioning safely. At Level 4, automation controls all aspects of driving in certain driving conditions and environments, such as on highways in good weather. In these particular driving conditions and environments, a human driver would not be required to take over the driving task from the automated vehicle and the system would ensure the vehicle is functioning safely. At Level 5, the vehicle can operate fully, in any condition or environment, without a human driver or occupant. There are various automated vehicle technologies that could help guide a vehicle capable of driving itself, including cameras and other sensors (see fig. 2). Widespread Deployment of Platooning and Self-Driving Long-Haul Trucks Is Likely Years Away, and Several Factors Will Affect Timeframes Platooning and Self-Driving Trucks Are Being Developed, Generally for Long-Haul Trucking According to stakeholders we spoke with and literature we reviewed, automated trucks, including self-driving trucks, are being developed, generally for long-haul trucking. Specifically, we found there could be various types of automation for long-haul trucks, including platooning, self-driving for part of a route, and self-driving for an entire route. Platooning. Technology developers and researchers told us there is ongoing development and testing of truck platoons, which involve one or more trucks following closely behind a lead truck, linked by wireless—or vehicle-to-vehicle—communication (see fig. 3). In a platoon, the driver in the lead truck controls the braking and acceleration for all of the connected trucks in the platoon, while the driver in each following truck controls its own steering.
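The division of control described above, in which the lead driver commands braking and acceleration for the platoon while each following driver steers, can be sketched as a simple following-distance controller. The sketch below is a hypothetical illustration: the message fields, target gap, and controller gains are assumptions for exposition, not any developer's actual design.

```python
from dataclasses import dataclass

@dataclass
class V2VMessage:
    """Illustrative vehicle-to-vehicle broadcast from the lead truck.
    Real platooning systems use standardized wireless messages; these
    fields are assumptions for the sketch."""
    lead_speed_mps: float   # lead truck speed, meters per second
    lead_accel_cmd: float   # lead braking/acceleration command, m/s^2

def following_accel(msg, own_speed_mps, gap_m,
                    target_gap_m=12.0, k_gap=0.1, k_speed=0.5):
    """Acceleration command for a following truck.

    The follower mirrors the lead truck's braking/acceleration and adds
    corrections to hold the target gap and match speed. Only
    longitudinal control is automated; the following driver steers,
    as in the platoons described above.
    """
    gap_error = gap_m - target_gap_m               # positive: too far back
    speed_error = msg.lead_speed_mps - own_speed_mps
    return msg.lead_accel_cmd + k_gap * gap_error + k_speed * speed_error

# Example: the lead truck brakes at -1.5 m/s^2; a follower that has
# drifted 3 meters too close and is slightly faster brakes harder,
# reflecting the faster braking reactions noted below.
msg = V2VMessage(lead_speed_mps=26.0, lead_accel_cmd=-1.5)
print(f"{following_accel(msg, own_speed_mps=26.5, gap_m=9.0):.2f} m/s^2")
```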
Several stakeholders we interviewed and three studies we reviewed identified potential benefits from platooning, including fuel savings and increased safety, for example, due to the trucks' faster reaction times for braking. Self-driving for part of a route. Most of the technology developers we spoke with said they were developing automated trucks that will be self-driving for part of a long-haul route, such as exit-to-exit on highways (see fig. 4). Representatives from one developer explained that their truck uses self-driving software installed on the truck. The software instructs the truck what to do, such as to steer or brake. In addition, cameras and other sensors on the truck's exterior provide the self-driving software with a view of the truck's surroundings to inform the software's instructions. For example, Light Detection and Ranging (LIDAR) sensors use lasers to map a truck's surroundings (see fig. 5). Such trucks would operate with no driver intervention under favorable conditions, such as on highways in good weather. Two developers said that in their business models a driver would be in the truck for the first and last portions of the route to assist with picking up and dropping off trailers at hubs outside urban areas. Alternatively, one developer said a remote driver—one not in the truck but operating controls from another location—would drive the first and last portions of a route. Stakeholders identified potential benefits of self-driving for part of a route, such as increased safety, labor cost savings, and addressing what they said is a truck driver shortage. Research funded by industry also suggests that an automated truck could improve productivity by, for example, continuing to drive to a destination while a human in the truck conducts other work or rests. In addition, one study noted that the most likely scenario for widespread adoption of automated trucks is the one in which trucks are capable of self-driving from exit-to-exit. Self-driving for an entire route. None of the technology developers we interviewed told us they are planning to develop automated trucks that are self-driving for an entire route (see fig. 6). Such trucks would be able to drive under all weather and environmental conditions. A person would not be expected to operate these trucks at any time. The potential benefits of these kinds of trucks are similar to those of trucks that are self-driving for part of a route, with higher potential labor savings because a person would not need to drive the first and last portions of a route. Widespread Deployment of Automated Trucks May Be Years to Decades Away, Depending on Technological, Operational, and Other Factors Anticipated Timeframes Stakeholders we spoke with generally indicated that it will be years to decades before the widespread deployment of automated commercial trucks (see text box). However, many stakeholders also noted the uncertainty of predicting a specific timeframe for particular technologies. Platooning. Many stakeholders said that platooning will likely deploy within the next 5 years and will be the first automated trucking technology to be widely available. Notably, one company that is developing platooning technology said it could begin deployment in 2019. In addition, DOT officials told us that truck platoons are currently being tested, but that it would be difficult to estimate when there might be widespread adoption of platooning technology. Self-driving for part of a route.
Automated trucks that are self-driving for part of a route may become available for commercial use within the next 5 to 10 years, according to several stakeholders, including technology developers. While such trucks may begin appearing on roads in that timeframe, other stakeholders, including two researchers, said widespread deployment may take more than 10 years. DOT officials noted that multiple variables make it difficult to develop a precise estimate for the deployment and widespread adoption of trucks that are self-driving for part of a route. Self-driving for an entire route. Although none of the technology developers told us they are developing trucks that would be self-driving for an entire route, other stakeholders we spoke with said such trucks could become available in more than a decade. However, most stakeholders either did not provide a timeframe for, or said they did not know, when such trucks might become available. Similarly, at a listening session in August 2018, DOT officials told attendees that it will be decades before large trucking operations replace their fleets of conventional trucks with trucks that self-drive for an entire route. One Stakeholder's Description of Anticipated Timeframes for Overall Automated Truck Adoption One researcher described an anticipated timeframe for automated truck adoption in which there is an initial, long period of development and testing, which would include making technological adjustments. This period would then be followed by a period of automated truck adoption—i.e., when such trucks replace human drivers. At that point, technology developers and truck manufacturers would also encounter scenarios in which it may not be desirable to use an automated truck, such as for the transport of hazardous materials, according to the researcher. Such scenarios would limit the extent to which automated trucks could replace human drivers. Factors That May Affect Timing Stakeholders we interviewed and the literature we examined identified technological, operational, infrastructure, legal, and other factors that may affect automated truck development and deployment. Stakeholders and literature identified several technology-related limitations that may affect the timing of automated truck deployment. Specifically, several stakeholders and a study noted that automated trucks may require simpler operating environments, such as highways, in the near term because they are less complex for the technology to navigate than roads in an urban setting, for example. Even so, a highway presents its own challenges, several stakeholders said. For instance, a developer, a manufacturer, and a researcher we spoke with told us that Light Detection and Ranging (LIDAR)—a costly and complex technology—may not be as useful at higher speeds due to its limited range and its inability to process information about the surrounding environment as quickly as needed at these speeds. Further, one manufacturer told us that LIDAR is not as durable as it needs to be for commercial trucking—for example, able to withstand dirt and debris. Stakeholders also discussed the need to have backup systems built into trucks' automated systems in case of technology failures, including the ability to guide the truck to a safe stop. Stakeholders identified several operational factors that may pose challenges for the deployment of automated trucks.
For example, several stakeholders said that a self-driving truck with no person inside may face challenges in responding to a tire blowout or other mechanical problems. Likewise, several stakeholders said there must be ways for a self-driving truck to respond to required safety inspections and communicate with inspectors. Representatives from a safety organization noted that a truck could potentially communicate a unique identification number through an electronic device. This number would give the inspector information about the truck, such as safety information from the sensors on automated trucks. Additionally, several stakeholders said platooning may not be practical for logistical reasons, for instance, if trucks are not traveling on the same routes or if cargo is not ready to depart at the same time. In addition, according to stakeholders we spoke with and literature we reviewed, the lead truck in a platoon will save less on fuel than the following trucks. If trucking fleets adopt platooning systems that work on commercial trucks across different companies—i.e., systems that are interoperable—distributing fuel savings in a manner agreeable to all parties involved may be challenging, as the sketch following this discussion illustrates. Representatives from two fleet owners and one industry association we spoke with raised concerns about platooning across different companies, including that companies might not partner with other fleets to platoon trucks because they would be primarily concerned with their own fuel savings, not with saving fuel for their competitors. In addition to these operational factors, stakeholders noted that automated trucks may be prohibitively expensive for some smaller fleet owners, including owner-operators, particularly when these trucks are first deployed. Several stakeholders and relevant literature noted that certain infrastructure factors may affect the development, testing, and deployment of automated trucks. For example, a few stakeholders said if one truck picks up or drops off trailers for another truck at a location near highways, land acquisition near these highways may be an issue. Representatives from a developer that planned to acquire land for its business model said the land acquisition could take 5 to 10 years. The representatives explained that they found enabling direct access to freeways is more difficult than simply acquiring vacant land. They planned to partner with states to create hubs on under-utilized land with existing freeway access by, for example, repurposing abandoned rest stops. In addition to land acquisition, two technology developers and a study identified the need for widely available data connectivity and the related ability to use connected vehicle technologies as an infrastructure challenge. Connected technologies allow vehicles to communicate with other vehicles (vehicle-to-vehicle), roadway infrastructure (vehicle-to-infrastructure), and personal communication devices. Connectivity has potential implications for, among other things, the maps self-driving trucks use to navigate routes and obstacles, as well as the ability for trucks in a platoon to communicate with one another effectively. However, because the ability for vehicles to communicate with infrastructure is not ubiquitous, two of the developers we spoke with are not taking into account connected infrastructure as they develop and test their automated trucks.
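To make the fuel-savings allocation challenge described above concrete, the following sketch settles a platoon's total savings evenly across trucks from different fleets. The per-position savings rates are hypothetical placeholders (stakeholders noted only that the lead truck saves less than the following trucks), and the equal-split rule is one possible convention rather than an industry standard.

```python
def platoon_fuel_settlement(fuel_cost_solo, savings_by_position):
    """Split platoon fuel savings evenly across member fleets.

    fuel_cost_solo: what each truck would have spent driving alone, in
    dollars. savings_by_position: hypothetical fraction of fuel saved
    per truck, with the lead truck saving less than the followers. Each
    fleet first keeps its own realized savings; transfers then settle
    the difference so every truck nets the same benefit.
    """
    realized = [fuel_cost_solo * s for s in savings_by_position]
    equal_share = sum(realized) / len(realized)
    # Positive transfer: the fleet is paid; negative: it pays in.
    return [equal_share - r for r in realized]

# Example: a three-truck platoon, $400 of solo fuel per truck, with
# assumed savings of 4 percent for the lead and 10 percent for each
# follower. The lead fleet receives $16; each follower pays $8.
transfers = platoon_fuel_settlement(400.0, [0.04, 0.10, 0.10])
for i, t in enumerate(transfers):
    print(f"truck {i}: transfer {t:+.2f} USD")
```

Under these assumed rates, each fleet nets the same $32 benefit; other conventions, such as proportional splits, would shift the transfers and are equally plausible.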
Two stakeholders also expressed concern about the stress platooning trucks could place on bridges that, for example, were not designed to hold the weight of two or more heavy trucks at once. In addition, stakeholders noted that automated trucks may encounter difficulties with road work or construction zones. This may be because, in addition to sensors, the truck relies on pre-built maps that could be outdated or might not reflect current road conditions, including any recent or temporary changes. Several legal factors may affect the timing of development, testing, and deployment for automated trucks, according to our stakeholder interviews and literature review. Many stakeholders expressed concern about the possibility of a "patchwork" of state laws related to automated trucks that could affect interstate trucking, with some saying they would like to see a shared national framework. For example, one technology developer said that this emerging patchwork can make it difficult for an automated truck to travel across the country without a driver, because some states specifically prohibit self-driving vehicles, including trucks. However, this same developer said that some states are less restrictive regarding the need for a driver in a self-driving truck, and that others have ambiguous regulations. Several stakeholders we spoke with and two studies we reviewed noted that liability issues may arise and become more complex for automated trucks. This may be because, for example, more parties may become involved. One of these stakeholders—a fleet owner—said that these parties could include the software developer, the truck manufacturer, the owner of the truck, and, if applicable, the truck driver. These issues could be addressed under the current liability system, and courts would decide the various liability issues on a case-by-case basis. In addition, several stakeholders have requested that DOT clarify whether existing regulations require that human drivers always be present in automated trucks, particularly those capable of Level 4 and 5 driving automation, in which at least some of the driving is done by the automated truck. Two technology developers have requested that DOT confirm that regulations that apply to human drivers do not apply to automated trucks, and one of these developers also requested confirmation that a truck capable of at least Level 4 automation is allowed to operate without a human on board, which could permit testing without a person in the truck. In Preparing for the Future of Transportation: Automated Vehicles 3.0, DOT's automated vehicles voluntary guidance, the agency laid out its approach to its automated vehicles policy. DOT's guidance stated that, going forward, DOT will interpret and, consistent with all applicable notice and comment requirements, adapt the definitions of "driver" and "operator" to recognize that such terms do not refer exclusively to a human, but may include an automated system. In the same guidance document, DOT also noted that regulations will no longer assume that the driver of a commercial truck is always human or that a human is necessarily present inside of a truck during its operation. A few stakeholders also said that DOT may have to clarify the hours of service rules if a human driver is in an automated truck that is self-driving for part or all of a route.
This is because under current hours of service regulations, a human driver may drive a maximum of 11 total hours within a 14-hour window after coming on duty. However, if a truck self-drives for at least part of a route, it is unclear if a human driver would need to comply with the existing hours of service requirements and, if not, how the driver would account for worked time. For example, if the human driver is not actively engaged in the driving task, whether monitoring the automated driving system or even sleeping, there could be a question about whether that time would be counted toward "driving," according to the requirements. For a list of potential legal factors identified by stakeholders or in literature that may affect timing for the development and deployment of automated commercial trucks, and related DOT information, see appendix II. Stakeholders and relevant literature identified several other factors, such as public perception and cybersecurity, that could affect timing for the development and deployment of automated trucks. Several stakeholders we interviewed and a study we reviewed noted that gaining public acceptance of the safety of platooning and self-driving trucks may pose a challenge to the deployment of these trucks. One researcher we spoke with said interactions between truck platoons and cars may be problematic, because drivers may need to speed in order to change lanes around the platoons of trucks following each other closely. Similarly, other stakeholders told us that it may be difficult for the public to accept large automated commercial trucks. Two of these stakeholders said this is particularly true for a heavy truck without a human driver on board—implying that vehicle size and weight play roles in the public's acceptance of these types of automated vehicles. Several stakeholders also expressed concerns about cybersecurity and automated trucks' reliance on wireless communication and self-driving software. They said connectivity could leave automated trucks vulnerable to cyberattacks. Workforce Changes Due to Automated Trucking Will Depend in Part on the Role of Future Drivers or Operators, and Will Take Time to Develop Workforce Effects of Automated Trucking Could Include Changes to Employment Levels, Wages, Retention, and Skills Predicting workforce changes in light of future automated trucking is inherently challenging, as it is based on uncertainties about how the trucking industry will respond to new technologies that face operational, regulatory, and other factors that could affect deployment. Many of the stakeholders we interviewed declined to predict various possible workforce effects, because they said doing so would be too speculative. However, stakeholders we spoke with and literature we reviewed presented two main scenarios for the future trucking workforce: one in which trucks would be self-driving for part of a route, without a driver or operator, and the other in which trucks would require a driver or operator in the truck for the entire route. An operator would monitor truck operations and may not always function as a traditional driver. Because most stakeholders agreed that the prospect of using fully self-driving trucks for an entire route is either unlikely or at least several decades into the future—and no developer we spoke with was planning to develop a fully self-driving truck—we do not discuss the workforce effects of that scenario in this report.
Potential Effects If Truck Has No Driver or Operator for Part of Route Technology developers we spoke with generally envisioned trucks that are self-driving for part of a route, which they said would potentially lead to significant workforce changes. Several technology developers and researchers, along with two studies, said trucks that are self-driving for part of a route could decrease the number of long-haul drivers, and perhaps decrease wages and affect retention as well. Additionally, any displaced drivers may need new skills if they change jobs, according to several stakeholders we spoke with and studies we reviewed. Employment levels: Technology developers we interviewed generally predicted the number of long-haul jobs would decrease with the adoption of trucks that are self-driving for part of a route. Drivers constitute a significant operational cost, so part of the reported economic rationale for self-driving trucks is to employ fewer drivers, allowing companies to transport the same amount of freight—or more—at lower labor costs. Several studies have analyzed the potential number of driving jobs that might be eliminated in this scenario, but the studies specifically noted the speculative, long-term nature of those estimates and the inability to identify the number of current long-haul truck drivers whose jobs could be lost sometime in the future. Estimates in the studies we reviewed ranged from under 300,000 driver jobs lost to over 900,000 jobs lost—out of a total of nearly 1.9 million heavy and tractor-trailer truck driver jobs, according to BLS data—and in each case over periods of 10 to 20 years or more. Although long-haul jobs would decrease in this scenario, local-haul jobs could increase and offset those losses, according to a study and several stakeholders, including two technology developers. The study, for example, said that automated trucking would drive long-haul trucking costs down, leading more companies to use trucking to ship goods. As a result, demand for trucking could increase, leading to an increased demand for local-haul truck drivers on either end of the long-haul routes, two studies noted. Several stakeholders we spoke with agreed that any decrease in long-haul jobs would likely not affect many current drivers because most will have voluntarily left driving for a different job or retired by the time self-driving trucks are widely deployed. According to the Census Bureau's American Community Survey data, the average age of truck and sales delivery drivers from 2012 through 2016 was 46. Many stakeholders also said that trucking fleets are currently having difficulty hiring and retaining qualified drivers, and two technology developers said automation could help move goods in an environment in which it is difficult to find workers. Technology developers also told us they are focusing the initial development of automated trucking technology in the southwest United States because of its good weather and long highways. As a result, any future job losses could first occur there. Additionally, BLS data show that the estimated concentration of truck driving jobs varies in different areas of the country (see fig. 7). One study noted that trucking job losses in more regionally concentrated occupations are likely to pose more challenges for workers, because more workers with similar skills in the same labor markets will be out of work at the same time, and thus the whole local economy will be more likely to suffer.
Wages: If the truck is self-driving for parts of a route, wages for long-haul drivers could decrease because there would be lower demand for—or greater supply of—such drivers, according to several stakeholders. Moreover, one study noted that average long-haul wages could decrease because the jobs most likely to be automated include those that tend to be unionized and have higher wages and benefits, such as jobs at parcel delivery companies and some private carriers. Similarly, drivers changing occupations might face significant wage reductions in new occupations that do not require retraining, according to a researcher and one study. Wages for local-haul drivers—generally lower than for long-haul drivers—could decrease as well, because transitioning long-haul drivers could increase competition for those jobs, according to two studies. One technology developer presented a different perspective, saying that wages for local-haul drivers could increase from current levels due to increased overall demand for trucking. Retention: Overall, retention of truck drivers could improve if the long-haul portion of the route becomes self-driving, lessening time drivers spend away from home—a key reason long-haul drivers leave the profession, according to many stakeholders. However, retention may depend on several factors, including wages, time at home, and other working conditions, making it more difficult to predict self-driving trucks' effect on retention. Skills: Long-haul drivers have skills that would transfer to local-haul routes, so additional training may not be needed for those who move to local-haul routes. However, displaced long-haul drivers seeking to move to a different occupation or industry may need additional training, according to several stakeholders and two studies. From 2012 through 2016, the highest level of educational attainment for almost 65 percent of truck and sales delivery drivers was high school or its equivalent. Potential Effects If Driver or Operator Remains in Truck Most officials from truck driver training schools, organizations representing truck drivers, and workforce development boards envisioned automated trucks as continuing to need either a driver or some kind of operator in the truck, with several noting that drivers may need to do non-driving tasks. Automated trucking with an operator in the truck would have a more limited effect on the numbers of truck drivers, but would still result in workforce changes, according to several stakeholders. As with the driverless scenario, many stakeholders said future developments were so uncertain that they could not predict how automated trucking would affect various aspects of the workforce, such as wages or retention. Employment levels: Under this scenario, automated trucking would have a more limited effect on employment levels. Several stakeholders noted, for example, that a person would still be needed in the truck to manage emergencies, repair flat tires, and secure cargo, among other duties. (See text box.) For example, one study noted that even for trucking jobs identified as the most likely to be automated, driving may represent only about half of drivers' total work time. Additionally, particular kinds of long-haul trucking may present different non-driving tasks that could make automating those driving jobs more difficult. Wages: If the truck has an operator, several stakeholders said that wages might increase if increased skills are needed to operate more sophisticated equipment.
However, several other stakeholders said wages might not change significantly or could decrease with fewer driving tasks. Two studies noted that wage changes were difficult to predict and could be affected by specific policy interventions. Truck Drivers: Responsible for More than Just Driving Truck drivers have many responsibilities other than driving a truck. Non-driving tasks for heavy and tractor-trailer truck drivers can include: checking vehicles to ensure that mechanical, safety, and emergency equipment is in good working order; loading or unloading trucks, including checking contents for any damage; inspecting loads to ensure that cargo is secure; and performing basic vehicle maintenance tasks, such as adding fuel or radiator fluid, performing minor repairs, or removing debris from loaded trailers. Retention: Many stakeholders said new technology could help the trucking industry bring in and retain more people—such as women and younger workers—if it could, for example, make truck driving safer, less stressful, and less physically demanding. Others cautioned that automated technology may not decrease truck operators' time away from home, because they would still have to be in the truck for the entirety of long-haul routes. One stakeholder, who was also a truck driver, said that many truck drivers enjoy driving, so automating aspects of that task would not necessarily entice those drivers to stay in the job. Two other stakeholders noted that some drivers may not want to learn how the new technology works and could leave the field rather than drive automated trucks. Skills: Future truck operators may need new skills to work with automated technology that assists rather than replaces them, many stakeholders noted. For example, operators may need to adapt to technology that takes over a number of the standard driving functions, such as braking, staying in a designated lane, and keeping a safe distance from other vehicles. Operators may also need to understand how to monitor software and hardware used to automate the driving function and how to make appropriate use of advanced safety systems. Furthermore, officials from many truck driver training schools and workforce development boards said additional certification beyond the standard CDL may be needed in order to demonstrate an understanding of how to operate the technology in automated trucks. In some instances, the skills needed may vary across trucking companies and trucks, requiring further on-the-job training. New Trucking-Related Jobs Regardless of their vision for how automated trucking might materialize, many stakeholders said there could be new trucking-related occupations, such as specialized technicians, mechanics, and engineers, that will accompany the deployment of automated trucks. For example, one study noted that these jobs could include producing the technology used by automated trucks, in addition to jobs created as a result of potential greater spending on other consumer goods and services, in the event that automated trucking decreases overall industry transportation costs. Another study noted that autonomous trucks, e-commerce, and economic growth are together poised to create many new trucking jobs. However, new jobs may be located in different geographical areas than any jobs lost, and as noted above, may require different skills than the prior jobs. One study noted this development could potentially leave lower-skilled workers competing for jobs that pay little and have few opportunities for advancement.
Stakeholders Said the Anticipated Timeframe for Automated Trucking's Effects on the Workforce Provides an Opportunity for a Federal Response While many stakeholders we spoke with and several studies we reviewed stated that the potential workforce effects of automated trucking were difficult to predict, they generally agreed that any effect would not occur for at least 5 to 10 years. Several stakeholders and two studies said this time horizon provides an opportunity for federal agencies and workers to prepare for potential workforce changes. One of these studies noted that trucking policy is complex; any changes could take a long time to fully materialize. That same study suggested that now is the appropriate time for policy research and debate. The other study and several stakeholders stated that potential workforce effects are not set in stone, and that public policy could influence specific workforce outcomes. That study said that with advance planning, the federal government and other stakeholders could realize the possible benefits of automated trucks and other vehicles while mitigating potential workforce effects and other costs. DOT and DOL Could Take Additional Steps to Fully Consider Automated Trucking's Potential Workforce Effects, as Technology Evolves DOT Has Gathered Stakeholder Perspectives to Inform Potential Regulatory Changes, and DOL Has Incorporated Technology Changes into Employment Projections DOT and DOL have both taken some steps to prepare for the potential workforce effects of automated trucking. DOT has held events to obtain stakeholder perspectives on automated vehicles policy, including how it affects commercial long-haul trucks. For example, DOT held public listening sessions in 2017 and 2018 to solicit information on the design, development, testing, and integration of Automated Driving Systems, and issued requests for comment to inform potential rulemaking efforts for the Federal Motor Carrier Safety Regulations. DOT officials said their role during these discussions was to hear stakeholder concerns. They also said that their ongoing goal is to identify barriers in their regulations to safe deployment of automated driving technology. Stakeholders have raised concerns about the potential workforce effects of automated trucks at DOT's listening sessions. For example, after participants questioned potential job losses at a listening session in August 2018, DOT officials said that automation may eventually change the role of a truck driver from driver to technician and that any changes would probably not be immediate. DOL officials said they have participated in some of DOT's listening sessions. For its part, DOL has taken steps to study how automated trucking may affect the near-term demand for truck drivers as part of its standard, biennial employment projections for all occupations. DOL officials said they consulted experts and economic studies prior to publishing their most recent projections, covering 2016 to 2026, and included information on possible effects of automation in projections for heavy and tractor-trailer truck drivers. The projections state that the demand for these drivers is expected to grow by 5.8 percent between 2016 and 2026, with an average of over 200,000 job openings each year, of which 10,000 are projected to be new jobs. DOL's analysis anticipated that automation will not reduce the number of drivers by 2026. DOL officials said that they expect automation to assist drivers rather than displace them in the near term.
Unlike estimates developed by other researchers, these numbers do not include potential job losses after 2026, though DOL officials noted that the agency’s next projections, for 2018 to 2028, will incorporate information on how automated trucking technology has evolved since the 2016-2026 projections. Additionally, officials said the agency is transitioning to annual updates of projections to more quickly incorporate developing information. Congress has directed DOT to consult with DOL to study the workforce impacts of automated trucking technology. Specifically, the Explanatory Statement accompanying the Consolidated Appropriations Act, 2018 instructs the Secretary of Transportation to consult with the Secretary of Labor to conduct a comprehensive analysis of the effect of advanced driver-assistance systems and highly automated vehicle technology on drivers and operators of commercial vehicles, including commercial trucks. Congress directed DOT to include stakeholder outreach in its analysis and provide information on workers who may be displaced as a result of such technology, as well as minimum and recommended training requirements for operating vehicles with these systems. DOL officials told us that they have begun collaborating with DOT on this study by consulting with organized labor and other stakeholders. In October 2018, DOT issued a request for information to solicit comments on the scope of this analysis and detailed several potential research questions, including which commercial drivers are likely to be affected and what skills might be needed to operate new vehicles or transition to new jobs. DOT also announced that it is planning to coordinate with the Departments of Commerce and Health and Human Services, in addition to consulting with DOL to conduct this analysis. The Explanatory Statement directs DOT to conduct this analysis by March 23, 2019, and DOT officials told us they expect to meet this deadline and report on the analysis by that date. DOL and DOT Do Not Have Plans to Gather and Share Information about the Potential Workforce Effects of Automated Trucking as Technology Evolves Convening Key Groups of Stakeholders on an Ongoing Basis to Gather Information DOL and DOT have taken some steps to convene stakeholders to inform DOT’s analysis of automated trucking in advance of March 2019. However, DOL and DOT have not made plans to continue collaborating to convene key groups of stakeholders as the technology evolves to gather information about potential workforce effects of automated trucking. Insofar as automated trucking technology is still evolving, convening stakeholders solely to inform the March 2019 analysis will not provide agency officials with sufficient information about important developments that may occur after the analysis is completed. This analysis will be an important step. However, DOT must complete it before potential workforce effects can be more fully predicted. After its completion, developers will likely continue to test their technologies, and issues related to operational and other factors that will affect the deployment of automated trucks may change or be resolved. For the agencies to more fully understand these developments and clarify the range of associated workforce effects, they would need to collaborate and to continue to gather information in the future, for example by continuing to convene key groups of stakeholders as the technology evolves. 
The majority of stakeholders we spoke with, including representatives from local workforce development boards, truck driver training schools, technology developers, and groups representing truck drivers, told us it would be helpful for federal agencies to play a convening role so that DOL and DOT can better anticipate and understand any potential workforce changes. Several stakeholders also said that convening stakeholders would enable DOL and DOT to surface different parties' concerns. Additionally, our recent report on emerging technologies found that federal agencies can play an important role in convening stakeholders to gather information in areas where technology is still under development, including information on the research plans of industry stakeholders and ways to address national needs. Continuing to convene stakeholders could also help agencies to identify any information or data gaps that may need to be addressed to understand the potential workforce effects of automated trucking. DOL officials said that because the technology is still advancing, the related workforce effects, including the magnitude of any job losses, are uncertain. They also said they do not have information to identify the number of long-haul truck drivers, whose jobs may be the most likely to be affected by automation. Specifically, the occupational code DOL uses to classify heavy and tractor-trailer truck drivers captures drivers who operate any type of heavy truck. Along with long-haul drivers, this code includes other drivers whose jobs may be harder to automate, such as tow truck operators. Experts who participated in the National Science Foundation-sponsored workshop on the potential workforce effects of automated trucking also identified information gaps. They noted that more information is needed in several areas, including a better understanding of current truck drivers' skills beyond driving, how those skills might translate to other occupational areas, and new jobs and skills that will be required with the deployment of automated trucks. DOL officials said that the agency provides information on knowledge, skills, and abilities for various driver occupations, as well as detailed work activities, on its Occupational Information Network (O*NET). However, that information is based on surveys of current workers and therefore does not include what skills future drivers may need as automated technology evolves. DOL officials told us they do not typically convene stakeholders on an industry-specific basis. They also said that state and local workforce development boards are best positioned to identify and respond to changes in their local economy and employment needs, because these boards include members from the local business community who know which industries are growing in their local labor markets. However, there are close to 1.9 million heavy and tractor-trailer truck drivers across the country, making the trucking industry an important segment of the national workforce. In addition, one of DOL's objectives in its fiscal year 2018-2022 strategic plan is to provide timely, accurate, and relevant information on labor market activity, working conditions, and price changes. While DOL officials said they consider the agency's national labor statistics as the primary tool in understanding macroeconomic changes, they acknowledged that gathering information from local boards and other stakeholders may complement those statistics.
DOL officials said they may consider continuing to convene stakeholders to learn more about automated trucking if they find that their current efforts with DOT provide fruitful information, but they currently do not have plans to do so. If DOL waits until the effects of automated trucking on the workforce are widespread enough to affect multiple local economies, the agency will have missed the opportunity to proactively gather information that could help it anticipate large-scale workforce changes in this important industry before they take effect. DOT officials told us they have likewise not made plans to work with DOL to convene stakeholders on an ongoing basis to gather information. Rather, they said they have concentrated on developing the analysis described by the Explanatory Statement accompanying the Consolidated Appropriations Act, 2018, and they do not plan to update that analysis after it is completed. Nonetheless, one of the objectives outlined by DOT in its fiscal year 2018-2022 strategic plan is to promote economic competitiveness by supporting the development of appropriately skilled transportation workers (including truck drivers who transport freight) and strategies to meet emerging workforce challenges. Working with DOL to gather and analyze information from stakeholders as technology continues to develop could assist DOT in meeting this goal. DOT has previously collaborated with DOL on transportation workforce issues. For example, in 2015, DOT and DOL worked with the Department of Education on a blueprint for aligning investments in transportation, including trucking, with career pathways. The report highlighted potential future growth areas in the transportation industry and identified potential jobs that may be in demand through 2022. Unless DOL and DOT continue to gather information from stakeholders as automated trucking technology evolves, they may be unable to fully anticipate the emerging workforce challenges that may result. DOT's prior efforts to convene stakeholders to address automated vehicles could serve as a model for gathering information from stakeholders about automated trucking. For example, DOT held a series of meetings across the country to gather information, identify key issues, and support the transportation community to integrate automated vehicles onto roads for its National Dialogue on Highway Automation. Further, analyzing information from ongoing meetings with stakeholders could help DOT as it considers potential workforce-related regulatory changes that might be affected by automated truck technologies, such as the requirements to obtain a commercial driver's license or the maximum number of hours commercial truck drivers are permitted to work. Sharing Information DOL has not provided information to stakeholders about the potential workforce effects of automated trucking technology, including how the skills needed to operate a truck may change in the future. DOL officials told us they have not done so, in part, because they do not yet know how the skills and training needed to be a truck driver might change, if at all. Representatives from all of the truck driver training schools and training associations we interviewed said they expect drivers to need new skills to operate or maintain automated trucks, and that future truck drivers may need an additional certification or endorsement to their commercial driver's license.
However, in the absence of specific information about future skill changes, they all said they did not know what specific adjustments would be needed to their curriculum. Additionally, nearly all stakeholders we spoke with—including representatives of technology developers, truck driver training schools, and local workforce development boards—told us that federal agencies can help prepare the future workforce by sharing information with stakeholders about impending workforce changes. In particular, some workforce officials we spoke with said they would benefit from information about technology developers' plans that would affect future demand or skills for truck drivers. Furthermore, DOL officials told us that heavy and tractor-trailer truck driving was the most common type of occupational training funded through the WIOA Adult and Dislocated Worker programs between April 2017 and March 2018, the most recent period for which data are available. Specifically, local workforce development boards provided funding from these programs to roughly 17,000 individuals for heavy and tractor-trailer truck driver training during that year, or about 15 percent of all individuals who received training services that began within that timeframe. This was more than twice as many individuals as those who received funding for nursing assistant training, the second most frequently funded type of training through these programs. As previously noted, one of DOL's strategic objectives is to provide timely and accurate labor market information. In addition, according to Standards for Internal Control in the Federal Government, an agency's management should externally communicate the necessary quality information to achieve the entity's objective. This includes communicating quality information so that external parties can help the entity address related risks. Additionally, our work has shown that federal agencies can play an important role in sharing information. We have noted that such information sharing is important to help maintain U.S. competitiveness. DOT's strategic plan highlights the agency's concern that the lack of credentialed workers, combined with projected retirements, threatens to cause significant worker shortages, and that the introduction of innovations and new technologies adds additional complexity for workforce development. Consulting with DOT to provide stakeholders with information about how automated technology could affect the number of trucking jobs and the skills needed to drive or operate commercial trucks would better position local workforce development boards, truck driver training schools, and others to adequately prepare the workforce for future needs. Responding to Potential Job Losses DOL officials said that existing employment and training programs administered by the agency, usually through grants, are generally designed to respond to economic changes that may result in job losses, including any that may result from automated trucking. In addition, DOL officials said that the agency has several resources to support state and local workforce areas to respond to mass layoffs and help workers upgrade their skills. For example, Rapid Response, which is carried out by states and local workforce development agencies, can provide services to employees after a layoff, including career counseling, job search assistance, and information about unemployment insurance and training opportunities.
Additionally, under WIOA, local workforce development boards can use up to 20 percent of their Adult and Dislocated Worker allocations to help fund the cost of providing incumbent worker training designed to help avert potential layoffs or increase the skill levels of employees. While these programs may help mitigate any future job losses due to automated trucking, DOL would be better positioned to help local economies leverage them effectively if the agency continued to convene stakeholders, building on its efforts to gather and share good information on when and how those workforce effects are likely to materialize as technology evolves. Conclusions Automated and self-driving technology for commercial trucks could make the industry safer and more efficient, but it also introduces significant uncertainties for the trucking workforce that DOL and DOT, in consultation with other federal agencies and stakeholders, can help navigate. For example, there is uncertainty about the widespread deployment of self-driving trucks as well as what the resulting effects will be on employment levels, wages, and needed skills. Although technology companies generally envision self-driving trucks being used for long-haul routes—which could result in fewer long-haul trucking jobs—other stakeholders argued that a truck will always need a driver or operator. Stakeholders we interviewed also lacked consensus about what automated trucking might mean for wages and what new skills will be needed to drive or operate automated trucks. Federal agencies have an opportunity to prepare truck drivers for the possible workforce effects of automated trucking. Many stakeholders noted that the effects would be gradual, giving the government time to act, but studies note the effects could eventually be significant, possibly affecting hundreds of thousands of truck driving jobs. DOT is taking an important step toward learning about these workforce effects by consulting with DOL and other stakeholders to inform DOT’s analysis of these developments. However, these agencies have not made plans to continue to convene stakeholders to gather information on an ongoing basis or update their analysis as the technology evolves and the effects become more apparent. Doing so could allow DOL and DOT the foresight to consider whether additional policy changes are needed to prepare for any possible future workforce effects. Similarly, DOL’s publication of routine employment projections and current driver skills and tasks provide useful information. However, DOL has not shared information on what skills drivers might require in the future with other key stakeholders, including technology developers, industry experts, truck driver representatives, training schools, local workforce development boards, and other relevant federal agencies. As a result, those stakeholders may miss an opportunity to better anticipate and plan for changes that may arise from automated trucking technology, including potential labor displacement, wage changes, and the need for new skills. Recommendations for Executive Action We are making the following four recommendations, including two for the Department of Labor and two for the Department of Transportation: 1. 
1. The Secretary of Labor should collaborate with the Secretary of Transportation to continue to convene key groups of stakeholders to gather information on potential workforce changes that may result from automated trucking as the technology evolves, including analyzing needed skills and identifying any information or data gaps, to allow the agencies to fully consider how to respond to any changes. These stakeholders could include, for example, representatives of other relevant federal agencies, technology developers, the trucking industry, organizations that represent truck drivers, truck driver training schools, state workforce agencies, and local workforce development boards. (Recommendation 1)

2. The Secretary of Transportation should collaborate with the Secretary of Labor to continue to convene key groups of stakeholders to gather information on potential workforce changes that may result from automated trucking as the technology evolves, including analyzing needed skills and identifying any information or data gaps, to allow the agencies to fully consider how to respond to any changes. These stakeholders could include, for example, representatives of other relevant federal agencies, technology developers, the trucking industry, organizations that represent truck drivers, truck driver training schools, state workforce agencies, and local workforce development boards. (Recommendation 2)

3. The Secretary of Transportation should consult with the Secretary of Labor to further analyze the potential effects of automated trucking technology on drivers to inform potential workforce-related regulatory changes, such as the requirements to obtain a commercial driver's license or hours of service requirements (e.g., the maximum hours commercial truck drivers are permitted to work). This could include leveraging the analysis described by the Explanatory Statement accompanying the Consolidated Appropriations Act, 2018 once it is complete, as well as information the department obtains from stakeholders as the technology evolves. (Recommendation 3)

4. The Secretary of Labor should consult with the Secretary of Transportation to share information with key stakeholders on the potential effects of automated trucking on the workforce as the technology evolves. These stakeholders could include, for example, representatives of other relevant federal agencies, technology developers, the trucking industry, organizations that represent truck drivers, truck driver training schools, state workforce agencies, and local workforce development boards. (Recommendation 4)

Agency Comments and Our Evaluation
We provided a draft of this report for review and comment to the Departments of Education, Labor (DOL), Transportation (DOT), and Veterans Affairs. We received formal written comments from DOL and DOT, which are reproduced in appendices III and IV, respectively. In addition, DOL and DOT provided technical comments, which we have incorporated as appropriate. The Departments of Education and Veterans Affairs did not have comments on our report. In its written comments, DOL agreed with our recommendations and noted several efforts that it said will help the agency assess and provide information on the potential workforce effects of evolving technologies, such as automated trucking. For example, DOL noted that the agency's employment projections incorporate expert interviews and other information to identify shifts in industry employment.
DOL is also currently consulting with DOT to study these workforce effects, and agreed to consider what other information and stakeholder meetings remain necessary after that study—due in March 2019—is completed. Likewise, DOL agreed to share related information as the technology evolves, and the agency noted it currently publishes employment projections and other occupational information. While useful, these efforts alone will not allow DOL to sufficiently anticipate the future workforce effects of automated trucking. For instance, the broad employment projections do not provide estimates specifically for the long-haul truck drivers who could be affected by automated trucking first. Further, DOL's occupational information is based on surveys of current workers, so it does not include the skills future drivers will need as automated trucking evolves. Therefore, we continue to believe that convening stakeholders and sharing information about potential workforce effects in the future will position DOL to better understand and inform key stakeholders of these changes. In its written comments, DOT agreed with our recommendations. DOT noted two of its current efforts related to automated trucking technology, namely its October 2018 automated vehicles voluntary guidance, Preparing for the Future of Transportation: Automated Vehicles 3.0, and its forthcoming congressionally directed research on the impact of automated vehicle technologies on the workforce. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Education, Labor, Transportation, and Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact us at (202) 512-7215 or brownbarnesc@gao.gov or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology
Our objectives were to examine: (1) what is known about how and when automated vehicle technologies could affect commercial trucks; (2) what is known about how the adoption of automated trucks could affect the commercial trucking workforce; and (3) the extent to which the Department of Transportation (DOT) and Department of Labor (DOL) are preparing to assist drivers whose jobs may be affected by automated trucking. For all the objectives, we reviewed relevant federal laws and regulations as well as documentation from DOT and DOL. To determine the extent to which federal agencies are preparing to assist current and future drivers, we compared DOT and DOL's efforts against their strategic plans as well as Standards for Internal Control in the Federal Government. Additionally, we:

Conducted interviews. We interviewed officials from several federal agencies to obtain relevant information about our objectives, including the Departments of Education, Labor, Transportation, and Veterans Affairs, as well as the National Science Foundation. To obtain information about all of our objectives, we also interviewed other selected stakeholders. We used our initial research and interviews to develop a list of stakeholder categories that would provide informed perspectives, which, when taken as a whole, provided a balanced perspective to answer our objectives.
We selected stakeholders who had a range of perspectives regarding the timing for adoption of automated trucking technology, and how this adoption could affect the truck driving workforce. We used the following criteria to select interviewees:
1. authored a report, article, book, or paper regarding automated trucking technology or its potential workforce effects;
2. participated in panels, hearings, or roundtables regarding automated trucking or its potential workforce effects; or
3. was recommended by at least one of our interviewees.
We interviewed organized labor representatives; researchers; and representatives from three truck manufacturers and three companies operating their own trucking fleets; two national industry organizations; one national safety organization; four truck driver training schools; an association of state and local workforce organizations; and four local workforce development boards. We selected the schools in part based on recommendations from an association of truck driver training schools, and included two accredited and two non-accredited schools in our selection. We selected three of the workforce development boards due to the prevalence of trucking jobs in their areas and the other board because it was in an area that several stakeholders suggested could be early to adopt automated trucking technology. Additionally, we visited California, where we interviewed representatives of four automated truck technology developers and a manufacturer, and viewed demonstrations of automated trucking technology. We selected California because it had the largest number of technology developers that we identified through our research efforts. We asked all of these stakeholders a core set of questions, as well as tailored questions based on their expertise. Some of the questions we asked stakeholders varied, and some stakeholders chose not to answer every question we asked because they either did not think they had sufficient knowledge about the specific question or did not want to make predictions about future industry developments. Therefore, we generally did not report the specific number of stakeholder responses in this report. The views of the stakeholders we interviewed are illustrative examples and may not be generalizable. For a full list of stakeholders we interviewed, see table 1.

Analyzed federal data. To examine how the adoption of automated trucks could affect the current and future trucking workforce, we analyzed relevant data from the Bureau of Labor Statistics (BLS) and the Census Bureau on the current trucking workforce. Specifically, we examined BLS's Occupational Employment Statistics to obtain employment level and wage data for heavy and tractor-trailer truck drivers (Standard Occupational Classification code 53-3032). The Occupational Employment Statistics survey is a federal-state cooperative program between the Bureau of Labor Statistics and State Workforce Agencies. The survey provides estimates regarding occupational employment and wage rates for the nation as a whole, by state, by metropolitan or nonmetropolitan area, and by industry or ownership. Data from self-employed persons are not included in the estimates.
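The geographic concentration analysis described in the next paragraph (a one-sided Poisson test of whether a region's concentration of truck driving jobs exceeds twice the national concentration) can be sketched in a few lines of Python. The sketch below is a minimal illustration rather than the computation GAO performed; the function name and the counts shown are hypothetical, and it assumes SciPy is available.

```python
from scipy.stats import poisson

def high_concentration(observed_drivers, region_employment,
                       national_share, alpha=0.05):
    """One-sided Poisson test: is a region's truck-driver concentration
    greater than twice the national concentration?

    All arguments are survey estimates; areas whose 95 percent margin
    of error exceeds 30 percent of the estimate should be excluded
    before calling this, per the reliability screen described in this
    appendix.
    """
    # Expected driver count if the region sat exactly at the null
    # boundary of twice the national concentration.
    expected = 2 * national_share * region_employment
    # P(X >= observed) under Poisson(expected); sf(k - 1) gives P(X >= k).
    p_value = poisson.sf(observed_drivers - 1, expected)
    return p_value < alpha

# Hypothetical area: 1,800 estimated driver jobs out of 60,000 total
# jobs, against a (hypothetical) national driver share of 1.2 percent.
print(high_concentration(1800, 60000, 0.012))  # True
```

A Poisson model is a natural choice here because, as noted below, job counts in smaller areas behave like event counts, for which normal approximations can be unreliable.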
For our analysis of geographic concentration of heavy and tractor-trailer truck driving jobs, we carried out a one-sided test at the 0.05 level of significance of the null hypothesis that a region's concentration is equal to or less than twice the national concentration versus the alternative hypothesis that the region's concentration is greater than twice the national concentration. We classified the results, excluding any unreliable areas (i.e., areas with a 95 percent confidence level margin of error for the estimated number of truck drivers that was larger than 30 percent of the estimate itself). We used Poisson tests because these are more appropriate for event occurrences in smaller populations or on a small number of cases. In addition, we analyzed data from the Census Bureau's American Community Survey regarding the education level, sex, and age of current truck drivers and other drivers. The American Community Survey is an ongoing survey that collects information about the U.S. population, such as jobs and occupations, educational attainment, income and earnings, and other topics. According to the Census Bureau's description of the American Community Survey, this survey uses a series of monthly samples to produce annually updated estimates for the same small areas (census tracts and block groups) formerly surveyed via the decennial census long-form sample. Based on our review of related documents and interviews with knowledgeable agency officials, we found the data to be reliable for our purposes.

Synthesized literature. To explore how and when automated vehicle technologies could affect the current fleet of commercial trucks and gather information about the possible employment effects of this technology, we conducted a review of key research related to automated vehicle technologies for commercial trucks. We searched bibliographic databases for articles that were published between January 1, 2014 and May 22, 2018 and included key terms such as "autonomous", "automated", "driverless", and "truck platoon" to describe the trucking technology. We also asked the researchers we interviewed to identify any studies that may be relevant to our work. Our search initially resulted in over 250 articles with potential relevance to our objectives. Two analysts reviewed the abstracts of these articles to determine whether the articles in this initial search were germane to our objectives. We excluded any articles that were not relevant to our objectives or did not meet our standards for empirical analysis. We included articles that were published in peer-reviewed journals, by industry, or by government agencies, as well as articles that were recommended by researchers we interviewed. We identified a final list of 12 studies that met our criteria. Although we reviewed each study's methodological approach, we did not independently assess the evidence in the articles or verify the analysis of the evidence that was used to reach the conclusions of these studies.

Appendix II: Potential Legal Factors That May Affect Timing of Automated Trucking

Appendix III: Comments from the Department of Labor

Appendix IV: Comments from the Department of Transportation

Appendix V: GAO Contacts and Staff Acknowledgments
GAO Contacts
Cindy Brown Barnes or Susan Fleming, (202) 512-7215 or brownbarnesc@gao.gov or flemings@gao.gov.
Staff Acknowledgments
GAO staff who made major contributions to this report include Brandon Haller (Assistant Director), Rebecca Woiwode (Assistant Director), Drew Nelson (Analyst-in-Charge), MacKenzie Cooper, Marcia Fernandez, and Hedieh Fusfield. Additional assistance was provided by Susan Aschoff, David Ballard, James Bennett, Melinda Cordero, Patricia Donahue, Philip Farah, Camilo Flores Monckeberg, David Hooper, Angie Jacobs, Michael Kniss, Terence Lam, Ethan Levy, Sheila R. McCoy, Madhav Panwar, James Rebbe, Benjamin Sinoff, Pamela Snedden, Almeta Spencer, John Stambaugh, Walter Vance, Sonya Vartivarian, and Stephen C. Yoder.
Why GAO Did This Study
Automated vehicle technology may eventually make commercial trucking more efficient and safer, but it also has the potential to change the employment landscape for nearly 1.9 million heavy and tractor-trailer truck drivers, among others. GAO was asked to examine the potential workforce effects of automated trucking. This report addresses (1) what is known about how and when automated vehicle technologies could affect commercial trucks; (2) what is known about how the adoption of automated trucks could affect the commercial trucking workforce; and (3) the extent to which DOT and DOL are preparing to assist drivers whose jobs may be affected. GAO reviewed research since 2014 on automated trucking technology, viewed demonstrations of this technology, and analyzed federal data on the truck driver workforce. GAO also interviewed officials from DOT and DOL, as well as a range of stakeholders, including technology developers, companies operating their own trucking fleets, truck driver training schools, truck driver associations, and workforce development boards.

What GAO Found
Automated trucks, including self-driving trucks, are being developed for long-haul trucking operations, but widespread commercial deployment is likely years or decades away, according to stakeholders. Most technology developers said they were developing trucks that can travel without drivers for part of a route, and some stakeholders said such trucks may become available within 5 to 10 years. Various technologies, including sensors and cameras, could help guide a truck capable of driving itself (see figure). However, the adoption of this technology depends on factors such as technological limitations and public acceptance. Stakeholders GAO interviewed predicted two main scenarios for how the adoption of automated trucks could affect the trucking workforce, which varied depending on the future role of drivers or operators. Technology developers, among others, described one scenario in which self-driving trucks are used on highway portions of long-haul trips. Stakeholders noted this scenario would likely reduce the number of long-haul truck drivers needed and could decrease wages because of lower demand for such drivers. In contrast, groups representing truck drivers, among others, predicted a scenario in which a truck would have an operator at all times for complex driving and other non-driving tasks, and the number of drivers or operators would not change as significantly. However, stakeholders lacked consensus on the potential effect this scenario might have on wages and driver retention. Most stakeholders said automated trucking could create new jobs, and that any workforce effects would take time—providing an opportunity for a federal response, such as any needed policy changes. The Department of Transportation (DOT) is consulting with the Department of Labor (DOL) to conduct a congressionally directed analysis of the workforce impacts of automated trucking by March 2019. As part of this analysis, DOT and DOL have coordinated to conduct stakeholder outreach. However, they do not currently plan to convene stakeholders on a regular basis to gather information because they have focused on completing this analysis first. Continuing to convene stakeholders could provide the agencies foresight about policy changes that may be needed to prepare for any workforce effects as this technology evolves.
What GAO Recommends
GAO is making four recommendations, including that both DOT and DOL should continue to convene key stakeholders as automated trucking technology evolves to help the agencies analyze and respond to potential workforce changes that may result. DOT and DOL agreed with the recommendations.
Background
Native American Population and Indian Country
Over 4 million people in the United States identified as Native American based on 2016 United States Census estimates, of which 29 percent were youth. As of June 2018, there were 573 federally recognized Indian tribes. According to BIA, as of June 2018, there were approximately 497 Indian land areas in the United States administered as federal Indian reservations or other tribal lands (e.g., pueblos, villages, and communities). These land areas, which span more than 56 million acres and 37 states and vary in size, can generally be referred to as Indian country. Indian country includes remote, rural locations as well as areas near urban centers. Native Americans live both inside and outside of these land areas, and Indian country may have a mixture of Native American and non-Native American residents. Jurisdiction over crime in Indian country differs according to several factors and affects how Native American youth become involved with justice systems, as discussed further below.

Youth in State and Local, Federal, and Tribal Justice Systems
Youth who commit offenses can enter one or more justice systems at the state and local, federal, and tribal levels. Although state and local, federal, and tribal justice systems have unique characteristics, they all generally proceed through certain phases, including arrest, prosecution and adjudication, and in some instances, placement and confinement in a detention facility.

State and local. State and local justice systems have specific courts (often at the county or city level) with jurisdiction over youth alleged to have committed an act of juvenile delinquency or a crime. This jurisdiction can be conferred by the state's laws and exercised by courts at the city, county, or municipal levels, and each state and local entity's processing of youth is unique. There are more than 2,400 courts across the country with juvenile jurisdiction, and a majority of these are at the city, county, or municipal, i.e., local, level. Generally, a youth is either referred to juvenile court or released. Juvenile courts handle two types of petitions: delinquency or waiver. A delinquency petition is the official charging document filed in juvenile court by the state, while a waiver petition requests that the juvenile court waive its jurisdiction so that a youth can be tried in criminal court. A juvenile's case may be dismissed, handled informally (without filing a petition for adjudication), or handled through adjudication by the court. In some more serious situations, the case can be handled by a criminal court. Juvenile cases that are handled informally or through adjudication can result in various outcomes, including probation, commitment to an institution or other residential facility, another sanction (e.g., community service), or dismissal.

Federal. Unlike state systems, the federal justice system does not have a separate court with jurisdiction over juvenile cases. Youth who are proceeded against in federal court are generally adjudicated in a closed hearing before a U.S. district or magistrate judge, and their cases are either declined or the youth are adjudicated delinquent. Delinquent adjudications can result in outcomes such as probation, commitment to a correctional facility, or the requirement to pay restitution. Youth under the age of 18 who are confined in federal facilities, including Native American youth, are housed in juvenile facilities overseen by the Federal Bureau of Prisons (BOP), which contracts with other entities to manage those facilities.

Tribal. Tribal justice systems vary.
A number of tribes have tribal judicial systems, some with separate juvenile courts, and others rely on state courts or the federal system. As of April 2018, there were approximately 89 adult and juvenile jail facilities and detention centers in Indian country, according to BIA officials. In addition, DOI's BIA directly manages some facilities, called juvenile detention centers, on tribal lands.

Jurisdiction of Federal, State, and Tribal Justice Entities Outside and Inside Indian Country
Outside Indian country. A state generally has jurisdiction to proceed against a youth who has committed a crime or act of juvenile delinquency outside of Indian country. This jurisdiction is generally exercised in each state by local courts (e.g., at the county and city levels). Federal law limits federal jurisdiction over youth if a state has jurisdiction over the youth and has a system of programs and services adequate for their needs. Since the passage of the Juvenile Justice and Delinquency Prevention Act in 1974, federal law has reflected an intent to support state and local community-level programs for the prevention and treatment of juvenile delinquency, and to avoid referral of juvenile cases out of the state and local systems, balanced against the need to protect the public from violent offenders. Consistent with this, the Federal Juvenile Delinquency Code provides that a youth alleged to have committed an act of juvenile delinquency, with certain exceptions, will not fall under federal jurisdiction unless (1) the juvenile court or other appropriate court of a state does not have jurisdiction over the youth, (2) the state does not have available programs and services adequate for the needs of the youth, or (3) the offense charged is a violent felony or an enumerated offense involving controlled substances and there is a substantial federal interest in the case or the offense to warrant the exercise of federal jurisdiction.

Inside Indian country. For both youth and adults, the exercise of criminal jurisdiction in Indian country depends on several factors. These factors include the nature of the crime, the status of the alleged offender and victim—that is, whether they are Indian or not—and whether jurisdiction has been conferred on a particular entity by statute. Additionally, the Federal Juvenile Delinquency Code generally applies to all juveniles alleged to have committed an act of juvenile delinquency, whether inside or outside Indian country. As a general principle, the federal government recognizes Indian tribes as "distinct, independent political communities" that possess powers of self-government to regulate their "internal and social relations," which includes enacting substantive law over internal matters and enforcing that law in their own forums. The federal government, however, has authority to regulate or modify the powers of self-government that tribes otherwise possess, and has exercised this authority to establish jurisdiction over certain crimes in Indian country. For example, the Major Crimes Act, as amended, provides the federal government with criminal jurisdiction over Indians in Indian country charged with serious, felony-level offenses enumerated in the statute, such as murder, manslaughter, kidnapping, burglary, and robbery. The General Crimes Act, the Major Crimes Act, and Public Law 280, which are broadly summarized in table 1, are the three federal laws central to the exercise of criminal jurisdiction in Indian country.
The exercise of criminal jurisdiction by state governments in Indian country is generally limited to two instances: when both the alleged offender and victim are non-Indian, or when a federal statute confers, or authorizes, a state to assume criminal jurisdiction over Indians in Indian country. Otherwise, only the federal and tribal governments have jurisdiction in Indian country. Table 2 summarizes aspects of federal, state, and tribal jurisdiction over crimes committed in Indian country.

Federal Agencies Responsible for Investigation, Prosecution, and Confinement of Youth within the Federal Justice System
Federal agencies that come into contact with youth alleged to have committed an act of juvenile delinquency are to do so in accordance with the Federal Juvenile Delinquency Code. When a youth enters the federal justice system, several components within DOJ and DOI, among others, have responsibility for investigating and prosecuting his or her crimes. DOJ's Federal Bureau of Investigation (FBI) has investigative responsibilities, including in Indian country, where it works with tribes to investigate crime. The FBI refers criminal investigations to a United States Attorney's Office for prosecution. In the course of the federal criminal justice process, a U.S. attorney is involved in investigating, charging, and prosecuting an offender, among other responsibilities. Under the direction of the Attorney General, the United States Attorney's Office may prosecute crimes committed in Indian country where federal jurisdiction exists, as discussed above. DOJ's U.S. Marshals Service (USMS) also has a role in the federal criminal justice process. Its mission areas include fugitive apprehension and federal prisoner security and transportation, among other responsibilities. USMS has arrest jurisdiction for enforcing the federal process anywhere in the United States, including Indian country. DOJ's BOP is responsible for the custody and care of federal inmates and offenders, including youth. BOP works in coordination with the federal courts to assist in locating a detention facility within the youth's jurisdiction, where possible. Figure 1 describes the key DOJ entities and their respective responsibilities related to the federal criminal justice process. Within DOI, BIA is statutorily responsible for enforcing not only federal law in Indian country but also tribal law, with the consent of the tribe. However, in certain situations, a tribe may assume this function from DOI pursuant to a self-determination contract or self-governance compact. BIA supports tribes in their efforts to ensure public safety and administer justice within Indian country by, for example, providing uniformed police and criminal investigative services for a number of tribes. Other agencies and departments with roles in the federal criminal justice process for youth include federal courts, the Administrative Office of the U.S. Courts, and the U.S. Sentencing Commission. Federal courts have the authority to decide cases and sentence offenders, among other things. The Administrative Office of the U.S. Courts provides a broad range of support services to the federal courts, which are responsible for adjudicating the cases of youth in the federal justice system. The U.S. Sentencing Commission is an independent judicial branch agency responsible for, among other things, the collection, preparation, and dissemination of information on sentences imposed across federal courts.
Data on Youth Involvement in Justice Systems
There is no single, centralized data source that contains data for youth involved in all justice systems and across all phases of the justice process. Rather, there are several disparate data sources at each level (federal, state and local, or tribal) and phase (arrest, prosecution, and confinement). Further, while some agencies, such as USMS and BOP, share a unique identifier for an individual within the federal data sources, there is no unique identifier across all federal and state and local data sources. For purposes of this review, and given privacy concerns related to juvenile data, we were unable to track individuals across all phases of the federal justice system or identify the number of unique youth who came into contact with federal, state and local, or tribal justice systems. In addition to the absence of a single database housing all relevant data on youth in the tribal, state and local, and federal justice systems, each database varies in how it defines Native American, as well as how it determines whether youth are Native American for purposes of the data source. For example, some agencies define Native American broadly, as an individual having origins in any of the indigenous peoples of North America, including Alaska Natives. In contrast, DOJ's Executive Office for United States Attorneys (EOUSA), in its prosecution data, defines the term Indian based on statute and case law, which generally considers an Indian to have both a significant degree of Indian blood and a connection to a federally recognized tribe. In addition, BOP determines that a youth is Native American for purposes of its data by reviewing documentation, including charging documents, while USMS relies on individuals self-reporting their race upon being taken into custody. See appendix II for additional information and descriptions of these differences.

Federal Grant Programs That May Address Juvenile Delinquency
Federal departments and agencies, including DOJ and HHS, provide funding through several types of mechanisms for Native American populations and tribal lands, including mandatory grant programs, compacts and contracts, discretionary grants, and cooperative agreements. As discussed above, our analysis focused on discretionary grants and cooperative agreements. Discretionary grants are competitive in nature, whereby the granting agency has discretion to choose one applicant over another. DOJ's Office of Justice Programs (OJP) awards discretionary grants to states, tribal organizations, territories, localities, and organizations to address a variety of issues, including to help prevent and reduce juvenile delinquency and victimization and improve their youth justice systems. DOJ also provides grant funding for training and technical assistance to enhance and support tribal governments' efforts to reduce crime and improve the function of criminal justice in Indian country. Cooperative agreements are similar to discretionary grants in that federal agencies generally award them to grantees based on merit and eligibility. However, in contrast to a discretionary grant, federal agencies generally use cooperative agreements when they anticipate substantial federal programmatic involvement with the recipient during the performance of the financially assisted activities, such as agency collaboration or participation in program activities.
Task Force and Commission Reports Related to Native American Youth and Juvenile Justice
Two reports focused on Native American youth exposure to violence and ways to address and mitigate the negative impact of this exposure when it occurs, as well as ways to develop knowledge and spread awareness about children's exposure to violence. In addition, both reports discussed factors that indicate Native American youth are uniquely positioned in regard to their contact with the justice systems, and included recommendations specific to Native American youth interaction with justice systems at the federal, state, and tribal levels. Appendix III describes actions agencies reported taking related to selected recommendations from these reports.

Available Data Indicate Native American Youth Involvement in Justice Systems Declined from 2010 through 2016 and Differed in Some Ways from That of Non-Native American Youth
From 2010 through 2016, the number of Native American youth involved with state and local and federal justice systems declined, according to our analysis of available data. This decline occurred across all phases of the justice process: arrest, adjudication, and confinement in facilities. The involvement of these Native American youth in the state and local and federal justice systems was also concentrated in certain geographic areas. Further, the vast majority of these Native American youth came into contact with state and local justice systems, not the federal system. Analysis of available data also indicates that the percent of Native American youth involved in the federal justice system during the period reviewed was greater than their representation in the nationwide youth population. In contrast, the percent of Native American youth involved in most state and local justice systems was similar to their representation in youth populations in those states. Moreover, the involvement of Native American and non-Native American youth in the federal justice system showed several differences (in types of offenses, for example), while their involvement in state and local justice systems showed several similarities. DOJ officials and representatives of Native American organizations we interviewed attributed the greater percent of Native American youth involved in the federal justice system and the differences shown by our analysis to federal government jurisdiction over crimes in Indian country, as well as the absence of general federal government jurisdiction over non-Native American youth.

Involvement of Native American Youth in the Justice Systems Declined from 2010 through 2016
The number of Native American youth involved with state and local and federal justice systems declined from 2010 through 2016 across all phases of the justice process—arrest, adjudication, and confinement in facilities—according to our analysis of available data. The majority of Native American youth involved with state and local justice systems were located in 11 of the 50 states, and all Native American youth involved with the federal justice system were located in 5 of the 12 federal circuits. Further, most Native American youth were involved in state and local justice systems rather than in the federal system. Comprehensive data from tribal justice systems on the involvement of Native American youth were not available. However, we identified and reviewed a few data sources that provided certain insights about the arrest, adjudication, and confinement of Native American youth by tribal justice systems.
See appendix IV for a summary of our analysis of data from these sources.

Arrests
State and local and federal. Analysis of available data indicates that from calendar years 2010 through 2016, there were 105,487 arrests of Native American youth by state and local law enforcement agencies (LEAs), and over this period, arrests generally declined by 40 percent. As shown in table 3, arrests declined from 18,295 in 2010 to 11,002 in 2016. During the same period, there were 246 federal custodies of Native American youth due to arrest by federal LEAs; the number of federal custodies also generally declined during the period—from 60 in 2010 to 20 in 2016. According to available data, the majority (about 75 percent) of Native American youth arrested by state and local LEAs from calendar years 2010 through 2016 were located in 10 states: Alaska, Arizona, Minnesota, Montana, New Mexico, North Dakota, Oklahoma, South Dakota, Washington, and Wisconsin. All 10 of these states had a higher than average percentage of Native Americans among the states' overall youth populations, according to 2016 U.S. Census estimates we reviewed. For example, of all the states, Alaska had the largest percentage of Native Americans among its youth population, at 19 percent in 2016. In contrast, the percent of Native American youth in the youth population in many (26) states was less than 1 percent. In 2016, the largest number of arrests by state and local LEAs occurred in Arizona and South Dakota, as shown in figure 2. All Native American youth in federal custody with USMS due to a federal LEA arrest from fiscal years 2010 through 2016 were located in 4 of the 12 federal circuits—the 2nd, 8th, 9th, and 10th circuits (see figure 3), according to our analysis of available data. These four circuits include 25 states.

Adjudication
State and local. Available data show that from calendar year 2010 through calendar year 2014, state and local courts processed fewer cases involving Native American youth. For example, during the period, state and local courts received about 86,400 delinquency cases involving Native American youth, and the number of cases declined by about 19 percent from 19,200 in 2010 to 15,600 in 2014, as shown in table 4. The number of cases petitioned, or requested that a court adjudicate, and the number of cases adjudicated delinquent also declined, by about 20 percent and 26 percent, respectively. Among delinquency cases received during the period, state and local courts petitioned more than half (49,000 cases, or 57 percent). Among all petitioned cases, about two-thirds (32,900 cases, or 67 percent) were adjudicated delinquent. Among youth found delinquent during the period, more than half—65 percent (21,300)—received probation, 24 percent (7,800) were placed in an institution or other residential facility, and 12 percent (3,800) received some other sanction, such as community service.

Federal. Available data show that federal courts received 349 Native American youth suspects from fiscal years 2010 through 2014 (see table 4, above), and the annual number fluctuated over the period but declined slightly overall (59 in 2010 compared to 57 in 2014). Of the suspects received, federal courts declined to adjudicate 138 and adjudicated 167 youth as delinquent or guilty. The number of delinquent or guilty outcomes declined overall from 37 in 2010 to 20 in 2014.
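For reference, the percent declines reported throughout this section follow the ordinary percent-change calculation. The short sketch below is illustrative only, using the state and local arrest counts cited above (18,295 in 2010 and 11,002 in 2016); the function name is ours, not drawn from the underlying data systems.

```python
def percent_decline(start, end):
    """Percent decline from a starting count to an ending count."""
    return 100 * (start - end) / start

# Arrests of Native American youth by state and local LEAs (table 3).
print(round(percent_decline(18295, 11002)))  # 40
```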
According to analysis of available data, all Native American youth referred to a United States Attorney from fiscal years 2010 through 2014 were located in 4 of the 12 federal circuits—the 6th, 8th, 9th, and 10th circuits, as shown in figure 4. These four circuits include 26 states. Annually, the number of referrals to each circuit was similar throughout the period.

Confinement
State and local. The number of Native American youth confined in state and local residential facilities declined by about 37 percent between 2011 and 2015, from at least 861 in 2011 to at least 544 in 2015, according to our analysis of data from the biennial Census of Juveniles in Residential Placement survey. The majority of Native American youth (approximately 65 percent) were confined in 9 states when the biennial survey was taken in 2011, 2013, and 2015. Generally, these states included Alaska, Arizona, Minnesota, Montana, North Dakota, Oklahoma, Oregon, South Dakota, and Washington (see figure 5 for 2015 census results). All of these states had a higher than average percentage of Native Americans among the states' overall youth population in 2015.

Federal. From fiscal years 2010 through 2016, a total of 138 Native American youth who had been sentenced were admitted to juvenile facilities overseen by BOP; this number declined over the period from 37 in 2010 to 6 in 2016, according to our analysis of available data. Court proceedings for these individuals had been finalized, and the individuals were sentenced to a juvenile facility overseen by BOP.

Agency and Organization Perspectives
DOJ officials and representatives from five Native American organizations we interviewed provided various perspectives on the decline and geographic distribution of Native American youth in justice systems that our analysis showed. Specifically, DOJ officials noted that the number of youth involved in state and local, federal, and tribal systems has been declining for several years across all races, not just Native American youth. However, when asked about this decline, representatives from three of the five Native American organizations we interviewed stated that data on the number of Native American youth in justice systems, especially at the state level, are underreported and often inconsistent. Representatives from two of those organizations noted that when a youth comes into contact with state juvenile justice systems, states are not required to ask about Native American status, which results in inconsistent tracking and underreporting of Native American youth involved with state systems. Representatives from one of these organizations, which provides assistance in national policy areas, noted that states are not required to contact a youth's identified tribe to confirm the youth's tribal affiliation. These representatives also noted that some states may inquire about tribal affiliation when youth come into contact with the state's justice system, but the states do not have a reliable process to identify Native American youth. In addition, these same representatives noted that Native American youth are often unlikely to share their ethnicity with state officials, or anyone outside of their community. Representatives from another organization noted that state court judges are not required to ask about Native American status, which could also potentially result in undercounting of Native American youth in state systems.
Representatives from another organization, commenting on the decline, stated that because state and federal data capture only more serious offenses, lesser crimes handled at the tribal level often go unreported. Representatives from two of the organizations we interviewed did not question the decline in the number of Native American youth involved in federal and state and local systems, but noted that there has been a movement away from criminalizing youth in general. Rather, these representatives explained that there is more of a focus on restorative justice, diversion, and alternatives to incarceration, as well as a movement toward more trauma-informed care. Representatives from one of these two organizations noted that a number of states have worked out civil diversion agreements with local tribes, which provide opportunities for the tribe to practice restorative justice with delinquent youth instead of confining them. Regarding the distribution of Native American youth by state, representatives from four of the five organizations we interviewed noted that the number of youth involved with state justice systems is higher in those states with a larger Native American population, and thus were not surprised by the states our analysis showed to have the highest numbers of Native American youth involved in their state and local justice systems. These representatives also provided additional perspectives on why some states might have higher numbers of youth involved with their justice systems. For example, representatives from one organization noted that in certain states, not all tribes have tribal law enforcement, which could potentially lead to higher state involvement in Native American juvenile cases that might otherwise be handled by tribes. Representatives from another organization noted that some states have a reputation for more aggressively adjudicating delinquent Native American youth.

Data Show That Representation of Native American Youth in the Federal Justice System Was Greater Than Their Representation in the Youth Population, but Their Representation in Most State and Local Justice Systems Was Comparable
The percentage of youth who were Native American among those involved with the federal justice system from 2010 through 2016 was greater than the percent of Native American youth in the nationwide youth population, according to analysis of available data. In contrast, state-by-state analysis showed that the percent of youth who were Native American among those involved with state and local justice systems during the period was similar to many states' Native American youth populations.

Federal justice system. The percent of youth arrested, referred for adjudication, and confined at the federal level from 2010 through 2016 who were Native American (13 to 19 percent) was greater than the percent of Native Americans in the nationwide youth population during the same period (1.6 percent). For example, the percent of youth in USMS custody and arrested by federal LEAs during the period who were Native American was 18 percent (246 Native American youth out of 1,358 total youth arrested from fiscal years 2010 through 2016), as shown in table 5.
According to DOJ officials, the federal juvenile population of Native Americans has historically been higher than their representation in the nationwide population due to federal government jurisdiction over certain crimes in Indian country, which requires the federal government to prosecute offenses that would commonly be prosecuted by states if committed outside of Indian country. According to DOJ officials, a small handful of federal criminal statutes apply to all juveniles, such as immigration and drug statutes, but the federal government has been granted greater jurisdiction over Native American youth than non-Native American youth by federal laws that apply to crimes committed in Indian country, such as the Major Crimes Act. For example, one DOJ official noted that the Major Crimes Act gives the federal government exclusive jurisdiction over crimes such as burglary and sex offenses committed in Indian country. This differs from the treatment of non-Native American youth, who are not prosecuted in the federal system for the same types of offenses, because the federal government does not have jurisdiction over those youth for such offenses. Non-Native American youth are instead subject to the general juvenile delinquency jurisdiction of state and local courts. Further, DOJ officials stated that a significant portion of Indian country is in states where Public Law 280 does not apply, and thus the federal government generally has criminal jurisdiction for major crimes in Indian country. Additionally, DOJ officials stated that tribal justice systems are often underfunded and do not have the capacity to handle Native American youths' cases. Therefore, when both federal and tribal justice systems have jurisdiction, they said that the federal system may be the only system in which the youth's case may be adjudicated. For these reasons, the number of Native American youth offenders in the federal justice system is disproportionate to that of non-Native American juveniles relative to population size, according to DOJ officials.

State and local justice systems. State-by-state analysis of arrest data showed some variation in the percentage of Native Americans among youth arrested by state and local LEAs from calendar years 2010 through 2016. For example, as figure 6 illustrates, in most states, the percentage of youth arrested by state and local LEAs in 2016 who were Native American was similar to the percent of Native American youth in the states' population. However, in four states—Alaska, Montana, North Dakota, and South Dakota—the percentage of Native Americans among the youth arrested by state and local LEAs was at least 5 percentage points higher. In two states—New Mexico and Oklahoma—it was at least 4 percentage points lower. State-by-state analysis of state and local confinement data for 2015 showed a similar pattern. As figure 7 illustrates, in most states, the percent of youth confined at state and local facilities in 2015 who were Native American was similar to the percent of Native American youth in the states' population. However, in six states—Alaska, Minnesota, Montana, North Dakota, South Dakota, and Wyoming—the percentage of Native Americans among the youth confined in state and local facilities was at least 5 percentage points higher. In one state—New Mexico—it was 11 percentage points lower.

Agency and organization perspectives.
According to DOJ officials, as noted above, federal jurisdiction over crimes in Indian country results in a higher percentage of Native American youth (compared to non-Native American youth) involved with the federal justice system. In addition, a DOJ official noted that certain states may have a higher percentage of Native Americans among youth confined in that state's facilities if those Native American youth reside more in urban or other areas that are not Indian country, and are thus more likely subject to state and local jurisdiction. Conversely, the official said that in states where the percentage of Native Americans among confined youth is lower than the state's overall Native American youth population, the youth may reside more in Indian country, resulting in their contact with the federal justice system more than with state or local justice systems. Representatives from four of the five Native American organizations we interviewed noted that federal jurisdiction is a key contributor to the higher percentage of Native American youth involved at the federal justice level.

While Involvement Declined, Available Data Indicate Several Differences between Native American and Non-Native American Youth in the Federal Justice System
Although the involvement of youth in the federal justice system declined for both Native Americans and non-Native Americans from 2010 through 2016, analysis of available data indicates that there were several differences between the two groups in characteristics such as types of offenses charged. According to DOJ officials, some of these differences were due to federal jurisdiction over Indians for major crimes (such as person offenses) in Indian country as well as the absence of general federal government jurisdiction over non-Native American youth.

Involvement in the Federal Justice System Declined for Both Groups
Available data indicate that the involvement of youth in the different stages of the federal justice system declined for both Native Americans and non-Native Americans from fiscal years 2010 through 2016. For example, federal custodies due to arrests by federal LEAs declined for both groups, as shown in table 6; the number of suspects referred to federal courts declined for both groups (table 7); and BOP confinements declined for both groups (table 8). Native American and non-Native American youth were involved with the federal justice system for different offenses from fiscal years 2010 through 2016. We analyzed the types of offenses for all youth and grouped them into five broad categories—drug and alcohol, person, property, public order, and other. Analysis of available data indicates that the majority of Native American youth were involved with the federal justice system for offenses against a person. In contrast, the majority of involvement of non-Native American youth was due to public order or drug and alcohol offenses.

Arrests. As figure 8 illustrates, out of the broad offense categories, 49 percent of Native American youth were arrested by a federal LEA and in USMS custody due to an offense against a person. In contrast, 5 percent of non-Native American youth were arrested by a federal LEA for person offenses during the period. Instead, most non-Native American youth were arrested by a federal LEA for public order or drug and alcohol offenses (70 percent total for both).
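To make the five-category grouping used in this and the following paragraphs concrete, the sketch below shows one way such a crosswalk from specific offenses to broad categories can be applied. The mapping is illustrative only, inferred from examples in the text (e.g., assault and sex offenses fall under person offenses, and immigration violations under public order); it is not the actual crosswalk used in our analysis.

```python
# Hypothetical crosswalk from specific offenses to the five broad
# categories; the assignments below are inferred from examples in the
# text, not taken from the underlying data systems.
BROAD_CATEGORY = {
    "assault": "person",
    "sex offense": "person",
    "larceny/theft": "property",
    "burglary": "property",
    "drug violation": "drug and alcohol",
    "alcohol violation": "drug and alcohol",
    "immigration violation": "public order",
}

def categorize(offense):
    # Offenses not in the crosswalk fall into the residual category.
    return BROAD_CATEGORY.get(offense, "other")

print(categorize("assault"))  # person
```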
The top two specific offenses among Native American youth were assault and sex offenses; the top two specific offenses among non-Native Americans were drug-related and immigration violations, according to analysis of available data. Federal data include youth in USMS custody after a federal arrest but may not capture all arrests by federal law enforcement agencies. USMS uses the race category "American Indian or Alaskan Native" and includes persons having origins in any of the indigenous peoples of North America, including Alaskan Natives. According to USMS officials, race is self-reported by the individual at the time of the custody intake. Non-Native American categories in USMS data are Asian, Black, and White.

Referrals for adjudication. As figure 9 illustrates, most Native American youth referred to federal courts were referred for the broad category of offenses against a person (67 percent). However, most non-Native American youth were referred to federal courts for the broad categories of public order offenses or drug and alcohol offenses (44 and 31 percent, respectively). Among Native American youth, the top two specific offenses were sex offenses and assault. Among non-Native Americans, the top two specific offenses were drug-related and immigration violations. EOUSA defines the term Indian based on statute and case law, which generally considers an Indian to have both a significant degree of Indian blood and a connection to a federally recognized tribe. According to EOUSA officials, race is identified by the U.S. Attorney when reviewing documentation associated with the individual, such as tribal enrollment certifications.

Confinement. As figure 10 illustrates, out of the five broad offense categories, 67 percent of Native American youth were sentenced and confined by the federal justice system from fiscal years 2010 through 2016 for an offense against a person; most non-Native American youth were confined by the federal justice system for drug and alcohol offenses (about 39 percent) or public order offenses (about 30 percent). The top two specific offenses among Native American youth were sex offenses and assault. The top two specific offenses among non-Native American youth were drug-related and immigration violations.

Agency and organization perspectives on variations in offenses. According to DOJ officials, most Native American youth were arrested, adjudicated, and confined for person offenses because of federal jurisdiction over Indians for major crimes (such as person offenses like burglary and sex offenses) in Indian country. Specifically, officials noted that Native American youth are arrested and confined in the federal system for more serious offenses because the Major Crimes Act confers jurisdiction on the federal government for person offenses. In contrast, agency officials also noted that the federal government does not have jurisdiction over the same types of offenses committed by non-Indian youth, and therefore those youth cannot be arrested by federal agencies for person offenses. Rather, according to one DOJ official, the federal government has general jurisdiction applying to both Native American and non-Native American youth only in limited instances, such as for certain immigration and drug offenses. The jurisdictional structure present in Indian country requires the federal government to prosecute offenses that would otherwise be handled in state court outside of Indian country, according to DOJ officials.
Representatives from all five of the Native American organizations we interviewed noted, similarly to DOJ officials, that federal jurisdiction over crimes in Indian country typically covers more serious offenses (specifically under the Major Crimes Act), such as person offenses. In contrast, as noted by one organization, youth engaged in property and substance abuse offenses are more typically brought into state custody. Representatives from two of the organizations we met with noted, in addition, that alcohol abuse plays a role in person offenses, often co-occurring with them.

Outcomes Varied among Youth Referred for Federal Adjudication
The distribution of outcomes among youth who were referred to federal prosecutors for adjudication in federal courts between fiscal years 2010 and 2016 differed for Native American and non-Native American youth. For example, as figure 11 shows, a larger percentage of referrals for adjudication involving Native American youth were declined by federal prosecutors compared to non-Native American cases—36 percent among Native American youth compared to 12 percent among non-Native American youth. Further, a smaller percentage of Native American than non-Native American referrals resulted in delinquent or guilty outcomes—42 percent among Native American youth compared to 63 percent among non-Native American youth.

Length of sentence. Native American youth who were sentenced and confined by the federal justice system—in BOP's custody—had longer sentences compared to non-Native American youth from fiscal years 2010 through 2016, according to analysis of available data. About half (52 percent) of the Native American youth confined during the period were sentenced to 13 to 36 months. Most non-Native American youth (62 percent) had shorter sentences of up to 12 months. According to DOJ officials, Native American youth had longer sentences due to federal government jurisdiction over major crimes in Indian country. As a result of its jurisdiction, officials said that the federal government arrests and incarcerates Native American youth for more serious crimes, such as sex offenses, which carry longer sentences. In contrast, non-Native American youth served sentences for crimes that carried shorter sentences, such as immigration and drug offenses, as noted above. The difference in sentence length may also be attributed to a number of additional variables that can affect the length of sentence, such as prior delinquent or criminal history and the nature and circumstances of the offense.

Distance from residence. Among youth admitted and confined in the federal justice system from fiscal years 2010 through 2016, data show that Native American youth were in facilities closer to their residences or homes compared to non-Native American youth (see table 9). For example, on average, Native American youth who were under the supervision of the United States Probation Office were 296 miles closer to their residence or home compared to non-Native Americans. In addition, on average, Native American youth who were in BOP's custody were 175 miles closer to their residence compared to non-Native Americans. Further, among both groups and on average, youth under the supervision of the United States Probation Office were closer to their residence or home compared to youth who were in BOP's custody. Age category and gender of youth involved in the federal justice system from fiscal years 2010 through 2016 were similar among Native American and non-Native American youth.
Specifically:

Most youth arrested by federal LEAs and in USMS custody were male (89 and 91 percent, respectively) and 15 to 17 years old (86 and 92 percent, respectively).

Most youth who came into contact with federal courts were 15 to 17 years old (80 and 88 percent, respectively).

Most youth confined at federal facilities were male (89 and 96 percent, respectively) and 15 to 17 years old (93 and 99 percent, respectively).

Available Data Indicate That There Were Several Similarities between Native American and Non-Native American Youth in State and Local Justice Systems

Analysis of available data indicates that there were several similarities between Native American and non-Native American youth involvement with state and local justice systems over the period analyzed.

Involvement in State and Local Justice Systems Declined for Both Groups, but Extent of Decline Varied

The involvement of both Native American and non-Native American youth in state and local justice systems declined for arrests, referrals for adjudication, and confinements in recent years (see tables 10 through 12). However, the extent of the decline varied between the two groups. For example, as the tables show, the declines in arrests and referrals for adjudication were greater for Native American youth, while the decline in confinements was greater for non-Native American youth.

The distribution of offenses for youth involved in state and local justice systems in recent years was similar among Native American and non-Native American youth. As noted above, we analyzed the types of offenses for all youth and grouped them into five broad categories—drug and alcohol, person, property, public order, and other.

Arrests. Available data show that among youth arrested by state and local LEAs from calendar years 2010 through 2016, a similar percentage of Native American and non-Native American youth were arrested for the five broad offense category types. For example, as figure 12 illustrates, the largest percentage of offenses among both groups during the period was in the broad category of offenses against property—25 percent among Native American youth and 28 percent among non-Native American youth. The next most common broad category of offense for Native Americans arrested by state and local LEAs was drug and alcohol offenses (23 percent); a smaller percentage of non-Native Americans were arrested for drug and alcohol offenses (16 percent). The top four specific offenses among Native American youth arrested by state and local LEAs during the period were larceny/theft, alcohol, assault, and status offenses. Similarly, the top four specific offenses among non-Native American youth during the period were larceny/theft, assault, status offenses, and drugs.

Adjudication. Generally, the offenses associated with delinquency cases received by state and local courts between calendar years 2010 and 2014 were similar for both Native American and non-Native American youth, according to analysis of available data. The largest percentage of offenses among delinquency cases for both groups was for the broad offense category of property offenses (38 and 36 percent, respectively).

Confinement. Generally, Native American and non-Native American youth adjudicated and confined at state and local facilities were admitted for similar offenses, according to our analysis of DOJ biennial census data from 2011, 2013, and 2015.
As figure 13 illustrates, in 2015, a similar percentage of youth in both groups were confined due to three broad categories of offenses—public order, person, and property; for each of these categories, at least 29 percent and at most 32 percent of youth were confined. A much smaller percentage of youth, for both groups, were confined for the broad category of drug and alcohol offenses. Some of the most common specific offenses among both Native American and non-Native American youth in 2015 were assault, probation or parole violation, sex offenses, and burglary.

The majority of Native American and non-Native American youth referred to state and local courts and confined at state and local facilities were male and 15 to 17 years old during the periods for which we obtained data. For example, table 13 illustrates the demographics of youth adjudicated and confined in state and local facilities.

Outcomes of delinquency cases in state and local courts were generally similar for Native American youth and non-Native American youth between 2010 and 2014, according to analysis of available data. For example, more than half of all cases received by the courts for both groups were petitioned—formally processed—as table 14 illustrates.

Facility types. Native American and non-Native American youth confined at state and local facilities were placed in similar types of facilities. As table 15 illustrates, the majority of youth for both groups were in private facilities at the time of DOJ's 2015 biennial census.

Time of confinement. Native American and non-Native American youth at state and local facilities had similar characteristics for the length of time they had been confined at the time of the 2015 biennial census. As table 16 illustrates, the majority of youth, for both groups, had been confined for more than 120 days.

DOJ and HHS Offered at Least 122 Grant Programs; Tribal Governments or Native American Organizations Were Eligible for Almost All but in a Sample of Applications We Reviewed, Applied Primarily for Programs Specifying Native Americans

We identified 122 discretionary grant programs across several issue areas such as violence or trauma, justice system reform, and alcohol and substance abuse that DOJ and HHS offered from fiscal years 2015 through 2017 that grantees could use to help prevent or address delinquency among Native American youth. DOJ and HHS awarded approximately $1.2 billion in first-year awards during this period, about $207.7 million of which they collectively awarded to tribal governments and Native American organizations. Tribal governments and Native American organizations were eligible for almost all of these grant programs, but we found in a sample we reviewed that they primarily applied for those that specified tribes or Native Americans as a primary beneficiary. Additionally, officials from selected tribal governments, Native American organizations, DOJ, and HHS stated that certain factors affect tribal governments' and Native American organizations' ability to apply successfully for grant programs that awardees could use to help prevent or address delinquency among Native American youth.

DOJ and HHS Offered at Least 122 Grant Programs That Could Be Used to Help Prevent or Address Delinquency among Native American Youth

We identified 122 discretionary grants and cooperative agreements (grant programs) for which DOJ and HHS offered funding from fiscal years 2015 through 2017 that grantees could use to help prevent or address delinquency among Native American youth.
See appendix V for a list of these programs. DOJ and HHS awarded approximately $1.2 billion in first-year awards to grantees through the 122 programs over the period, as shown in figure 14. Of the $1.2 billion, HHS and DOJ collectively awarded $207.7 million to tribal governments and Native American organizations. HHS awarded $106.5 million and DOJ awarded $101.2 million. As previously discussed, tribal governments and Native American organizations also received other federal funding that could help prevent or address delinquency among Native American youth.

The DOJ and HHS grant programs we identified included 27 programs that specified tribes or Native Americans as a primary beneficiary and 95 programs that did not specify this but that could include tribes or Native Americans as beneficiaries. For example, the Cooperative Agreements for Tribal Behavioral Health, which HHS's Substance Abuse and Mental Health Services Administration (SAMHSA) offered in fiscal years 2016 and 2017, is a grant program that specified tribes or Native Americans as a primary beneficiary. Its purpose is to prevent and reduce suicidal behavior and substance use, reduce the impact of trauma, and promote mental health among Native American youth. On the other hand, the Sober Truth on Preventing Underage Drinking Act grant program, which SAMHSA offered in fiscal year 2016 to prevent and reduce alcohol use among youth and young adults, is an example of a program that did not specify tribes or Native Americans as a primary beneficiary but could nonetheless benefit them. As previously discussed, available data indicate that alcohol offenses constitute the second-highest specific offense for which Native American youth were arrested by state and local LEAs from calendar years 2010 through 2016.

Within DOJ's OJP, an example of a grant program that specified tribes or Native Americans as a primary beneficiary is the Defending Childhood American Indian/Alaska Native Policy Initiative: Supporting Trauma-Informed Juvenile Justice Systems for Tribes program. This grant program was offered by OJP's Office of Juvenile Justice and Delinquency Prevention (OJJDP) for funding in fiscal year 2016. The goal of the grant program is to increase the capacity of federally recognized tribes' juvenile justice and related systems to improve the life outcomes of youth who are at risk or who are involved in the justice system and to reduce youth exposure to violence. Another grant program, the Youth with Sexual Behavior Problems Program, which OJJDP offered from fiscal years 2015 through 2017, is an example of a grant program that did not specify tribes or Native Americans as a primary beneficiary but that could nonetheless benefit them. As previously discussed, available data indicate that the second-highest specific offense for which Native American youth were arrested by federal LEAs from 2010 through 2016 was sex offenses. This grant program provided services for youth sexual offenders, their victims, and the parents and caregivers of the offending youth and victims.

Through the 27 grant programs that specified tribes or Native Americans as a primary beneficiary, the agencies awarded a total of $250.2 million over the fiscal year 2015 through 2017 period; through the 95 programs that did not, they awarded $944.4 million (see fig. 15). Of the 122 grant programs we identified, tribal governments and Native American organizations received funding primarily from the 27 grant programs that specified tribes or Native Americans as a primary beneficiary.
Of the $250.2 million in awards from these 27 grant programs, tribal governments and Native American organizations received $193.2 million, or about 77 percent of the total. Alternatively, of the $944.4 million in awards from the 95 grant programs that did not specify tribes or Native Americans as a primary beneficiary, tribal governments and Native American organizations received $14.5 million, or 1.5 percent of the total.

The 122 grant programs focused on one or more issue areas in their funding opportunity announcements relevant to helping prevent or address delinquency among Native American youth. The most common issue areas were violence or trauma (34 programs), justice system reform (25 programs), and alcohol and substance abuse (22 programs). Table 17 lists the issue areas and the number of DOJ and HHS grant programs that focus on each issue area.

Violence or trauma. Thirty-four of the 122 grant programs supported activities such as researching, preventing, addressing, or providing services related to youth violence or trauma. For example, the purpose of the Communities Addressing Childhood Trauma grant program, administered by HHS's Office of Minority Health, is to test the effectiveness of activities that seek to promote healthy behaviors among minority or disadvantaged youth who have experienced childhood trauma and are thus at risk for poor health and life outcomes. Another example is DOJ's Coordinated Tribal Assistance Solicitation's (CTAS) Tribal Youth Program. One of the priority areas of this grant program is addressing children's exposure to violence through prevention, intervention, and treatment, including the development and implementation of trauma-informed practices in pertinent programs and services. DOJ's Comprehensive Anti-gang Strategies and Programs grant supports evidence-based strategies in communities trying to reduce and control gang-related crime and violence by coordinating prevention, intervention, enforcement, and reentry programs. As mentioned earlier in the report, available data indicate the top specific offense for which Native American youth were arrested by federal LEAs from 2010 through 2016 was assault.

Justice system reform. Twenty-five of the 122 grant programs supported activities such as researching and analyzing the effectiveness of efforts to reform the youth justice system and enhancing the capacity of justice system institutions with which youth could come into contact. For example, one goal of the Tribal Civil and Criminal Legal Assistance Grants, Training, and Technical Assistance grant program, administered by DOJ's Bureau of Justice Assistance, is to enhance tribal court systems and improve access to them, as well as to provide training and technical assistance related to tribal justice systems. Another example is DOJ's National Girls Initiative grant program. The goal of this program is to support the engagement of stakeholders such as youth justice specialists, law enforcement officers, advocates, and youth defenders to improve the justice system and its responses to girls and young women.

Alcohol and substance abuse. Twenty-two of the 122 grant programs supported activities such as preventing or reducing youth consumption of alcohol and drugs. For example, the stated purpose of DOJ's CTAS Juvenile Healing to Wellness Courts grant program is to support tribes seeking to establish new courts within their existing judicial institutions to respond to alcohol and substance use issues among youth and young adults.
(See text box below for an example of the activities a grantee planned to implement with this grant program.) As previously discussed, one of the top offenses we observed among Native American youth arrested by state and local LEAs was drug and alcohol offenses.

Department of Justice (DOJ) Coordinated Tribal Assistance Solicitation (CTAS) Juvenile Healing to Wellness Court
Grantee: Confederated Tribes of Coos, Lower Umpqua and Siuslaw Indians
In fiscal year 2015, the Confederated Tribes of Coos, Lower Umpqua and Siuslaw Indians, a federally recognized tribe located within the state of Oregon, received funding from the DOJ CTAS Juvenile Healing to Wellness Court grant program. Tribal officials told GAO that they are in the process of growing their healing to wellness court and aim to use this grant program to reduce the criminal penalties for substance abuse in their community. Moreover, they said that the "peace-giving court" would look at solutions such as treatment and restorative justice rather than focus on criminal fines and incarceration. As of October 2017, tribal officials said they had three court employees and were planning to use some of the program funding to hire a liaison with other court systems to refer tribal members to their tribal court.

Mental and emotional health. Sixteen of the 122 grant programs supported activities such as improving the mental health and wellness of youth. For example, HHS's Planning and Developing Infrastructure to Improve the Mental Health and Wellness of Children, Youth and Families in American Indian/Alaska Natives Communities grant program focuses on increasing the capacity and effectiveness of mental health systems serving tribal and urban Indian communities by designing a coordinated network of community-based services and supports that address the needs of Native American youth and their families. (See text box below for an example of the activities a grantee planned to implement with this grant program.)

Department of Health and Human Services (HHS) Planning and Developing Infrastructure to Improve the Mental Health and Wellness of Children, Youth and Families in American Indian/Alaska Natives Communities
Grantee: Native Health of Phoenix
In fiscal year 2017, Native Health of Phoenix—an urban Indian community health center with a mission to increase the health and well-being of Native American and other residents in the Phoenix, Arizona metropolitan area—received funding from the HHS Planning and Developing Infrastructure to Improve the Mental Health and Wellness of Children, Youth and Families in American Indian/Alaska Natives Communities grant program. Native Health of Phoenix explained that the grant program would allow the organization to work on trauma-informed care, provide counseling services through role models (with a particular interest in using Native American veterans as mentors), and possibly expand the age group served by an existing program, Wellness Warriors, which currently focuses on promoting healthy living for 7- to 12-year-old youth and their families.

Reentry and recidivism. Twelve of the 122 grant programs supported activities such as facilitating youths' successful reintegration into their communities and reducing the likelihood of subsequent contact with the criminal justice system.
For example, the objective of the Second Chance Act Technology-Based Career Training Program for Incarcerated Adults and Juveniles, administered by DOJ's Bureau of Justice Assistance, is to provide career training programs for incarcerated adults and youth in the 6 to 36 months before their release and to connect them with follow-up services after their release. Another example is DOJ's Second Chance Act Strengthening Relationships Between Young Fathers, Young Mothers, and Their Children grant program, which offered funding in fiscal year 2016. The goal of this grant program is to reduce recidivism and support responsible parenting practices of young fathers and mothers who were transitioning from detention, out-of-home placement, or incarceration back to their families and communities.

Mentoring. Eleven of the 122 grant programs supported activities such as providing mentoring services to at-risk or high-risk youth and researching or evaluating the impact of various mentoring programs and practices on youth outcomes. For example, DOJ's Mentoring for Youth: Underserved Populations grant program supports the implementation and delivery of various mentoring services for youth with disabilities, youth in foster care, and lesbian, gay, bisexual, transgender, and questioning youth. Another example is HHS's Native Youth Initiative for Leadership, Empowerment, and Development grant program. One area of interest in the program includes peer role model development, where young Native American adults (18 to 24 years old) serve as role models for mid-adolescents (15 to 17 years old), who in turn serve as role models for even younger members (younger than 15 years old) in their communities.

Suicide prevention. Seven of the 122 grant programs supported activities such as preventing or reducing the risk of suicidal thoughts or behavior and self-harm among youth. For example, one purpose of the Substance Abuse and Suicide Prevention Program, formerly known as the Methamphetamine and Suicide Prevention Initiative grant program, administered by HHS's Indian Health Service, is to support early intervention strategies and positive youth development to reduce the risk for suicidal behavior and substance abuse among Native American youth. (See text box below for an example of the activities a grantee planned to implement with this grant program.)

Department of Health and Human Services (HHS) Substance Abuse and Suicide Prevention Program
Grantee: Fairbanks Native Association
In fiscal year 2016, the Fairbanks Native Association, whose officials describe it as a Native American non-profit organization that provides social services, education, and behavioral health services to residents of the Fairbanks and North Pole communities as well as other residents of Alaska, received funding from HHS's Indian Health Service's Substance Abuse and Suicide Prevention Program (formerly known as the Methamphetamine and Suicide Prevention Initiative grant program). According to Fairbanks Native Association officials, one of the evidence-based practices they implemented for the Substance Abuse and Suicide Prevention Program was Coping and Support Training (CAST). CAST is a 12-lesson skills training program used by schools, community centers, and other organizations for middle and high school-aged youth; its features include building self-esteem and creating a crisis response plan for responding to a range of suicide-risk behaviors, among other activities.

Justice system data and analysis.
Seven of the 122 grant programs supported activities such as collecting, improving the collection of, or analyzing data related to the youth or tribal justice systems. For example, DOJ's Annual Survey of Jails in Indian Country, 2016-2019 grant program funded the collection of information from all known correctional facilities operated by tribal governments or the Bureau of Indian Affairs. Some of the information the program sought to collect included the number of adults and youth held, the gender of the inmates, and average daily population, among other data.

Runaway and homeless youth. Six of the 122 grant programs supported activities such as providing services to youth who have run away from home or who are experiencing homelessness. For example, the primary goal of HHS's Transitional Living Program and Maternity Group Homes grant program is to help runaway and homeless youth establish sustainable living and well-being for themselves and, if applicable, their dependent children through the provision of shelter and other services.

Cultural identity. Four of the 122 grant programs supported activities such as promoting and preserving Native American cultural traditions for tribal youth. For example, the purpose of HHS's Native American Language Preservation and Maintenance grant program is to ensure the survival and vitality of Native American languages.

Other. Six of the 122 grant programs supported activities in issue areas other than those above, such as school safety, tribal justice infrastructure, and social and economic development.

Tribal Governments and Native American Organizations Were Eligible for Almost All Grant Programs We Identified, But in a Sample We Reviewed, Applied Primarily for Those Specifying Native Americans

Tribal Governments or Native American Organizations Were Eligible for Almost All Grant Programs We Identified

Tribal governments or Native American organizations were eligible for almost all of the 122 DOJ and HHS grant programs we identified from fiscal years 2015 through 2017 that grantees could use to prevent or address delinquency among Native American youth: they were eligible for 70 of 73 DOJ programs and 48 of 49 HHS programs. Regarding the 3 DOJ grant programs for which these entities were not eligible to apply, DOJ officials explained that tribal governments and Native American organizations were not eligible for the Smart on Juvenile Justice: Reducing Out-of-Home Placement grant program because the funding stream that supports the program—unallocated funds from Title II of the Juvenile Justice and Delinquency Prevention Act—can only be awarded to states that are in compliance with the four core requirements of the act. For the other 2 grant programs, DOJ OJP officials explained that because the focus of these programs is statewide or countywide, eligibility was limited to states and local units of government that have developed a statewide or countywide plan to reduce recidivism and improve outcomes for youth in contact with the juvenile justice system. These officials added that tribal governments would not have the capacity to respond to the requirements of these programs as designed, since tribal juvenile justice systems operate differently than state and county systems.
The one HHS program for which neither tribal governments nor Native American organizations were eligible to apply was the Preventing Teen Dating and Youth Violence by Addressing Shared Risk and Protective Factors program, administered by the Centers for Disease Control and Prevention (CDC). CDC officials explained that this grant program was limited to local, city, and county public health departments with a demonstrated high burden of violence and the highest capacity to prevent teen dating violence and youth violence, based on research findings on teen dating violence and youth violence prevention as well as lessons learned from CDC's previous investments in these areas. These officials also said that CDC encourages local, city, and county public health departments to work with tribal populations in the area.

Tribal Governments and Native American Organizations Generally Applied for Grant Programs that Specified Tribes or Native Americans as a Primary Beneficiary in Sample We Reviewed

Although tribal governments and Native American organizations were eligible for almost all of the DOJ and HHS grant programs we identified, we found in a non-generalizable sample of applications we reviewed that these organizations applied primarily for grant programs that specified tribes or Native Americans as a primary beneficiary. Specifically, for the applications we reviewed for 18 DOJ grant programs, tribal governments and Native American organizations accounted for over 99 percent of the applications for the 5 grant programs within the sample that specified tribes or Native Americans as a primary beneficiary and approximately 1 percent of the applications for the 13 DOJ grant programs that did not specify them as a primary beneficiary. See figure 16. In our review of applications for 19 HHS grant programs, tribal governments and Native American organizations accounted for 90 percent of the applications for the 6 grant programs in the sample that specified tribes or Native Americans as a primary beneficiary. However, they accounted for only 2 percent of the applications for the 13 HHS grant programs in our sample that did not specify tribes or Native Americans as a primary beneficiary. See figure 17.

DOJ and HHS officials identified various reasons why tribal governments and Native American organizations might not apply for grant programs that do not specify tribes or Native Americans as a primary beneficiary:

Tribal governments and Native American organizations might not be aware that they are eligible to apply for certain grant programs.

Tribal governments and Native American organizations might believe that their applications to grant programs that do not specify tribes or Native Americans as a primary beneficiary will not be competitive with other applications. For example, DOJ OJP officials told us that tribes may have concerns about devoting resources to preparing applications for such grant programs because they may not end up being successful.

Tribal governments and Native American organizations might prefer to apply for those grant programs that specify tribes or Native Americans as a primary beneficiary. For example, DOJ OJP officials stated that tribes might be familiar and comfortable with applying for the CTAS, a single application for the majority of DOJ's tribal grant programs.
In addition, HHS CDC officials stated that more tribes apply and successfully compete for grant programs that specify tribes or Native Americans as a primary beneficiary because such programs are designed specifically for tribal populations, thus allowing for "culturally-appropriate activities," which may include healing and religious practices that promote wellness, language integration that promotes cultural sustainability and identity, and traditional storytelling that promotes life lessons and teachings.

Officials from 10 tribal governments and Native American organizations also provided perspectives on whether or not a grant program's focus on tribes or Native Americans as a primary beneficiary affected their decision to apply for the program. Officials from 6 of the 10 tribal governments and Native American organizations indicated that they would consider any grant program that met the needs of their communities, although officials from 3 of these 6 indicated a preference in some instances for grant programs that focused on tribes or Native Americans. Officials from the remaining 4 of the 10 tribal governments and Native American organizations indicated that a grant program's focus, or lack thereof, on tribes or Native Americans could affect their ability to apply for it. For example, officials from one federally recognized Oregon tribe explained that their tribe does not apply for grant programs that do not specify tribes or Native Americans as a primary beneficiary because their applications are not typically competitive in a state or nationwide applicant pool. Instead, they said that their tribe applies for funding specific to their community because they are more likely to succeed with those applications. These officials also said that a benefit of applying for grant programs that specify tribes or Native Americans as a primary beneficiary is that technical assistance provided to grant recipients is tailored to tribes. Officials from another federally recognized tribe in Oklahoma noted that their tribe prefers to apply for grant programs that specify tribes or Native Americans as primary beneficiaries due to the limited resources they have available to prepare grant applications, as well as the high level of competition for nationwide federal grant programs. Finally, officials from a tribal nonprofit corporation in Alaska that represents several federally recognized tribes explained that although their decision to apply for any federal grant program depends on the needs of their community, the agencies offering grant programs that specify tribes or Native Americans as a primary beneficiary understand the challenges of tribal communities, particularly living in rural environments and having to travel vast distances to implement grant program funding.

Officials from Tribal Governments, Native American Organizations, and Agencies Noted Factors that Affect Successful Application for Grant Programs

Officials from tribal governments and Native American organizations that applied for federal grant programs that could help prevent or address delinquency among Native American youth, as well as DOJ and HHS officials, identified various factors they believe affect the ability of tribal governments and Native American organizations to apply successfully for federal grant programs. For example, some tribal governments and Native American organizations found it helpful to be able to call or meet with federal officials during the application process but said that short application deadlines are a challenge.
Additionally, a non-generalizable sample of DOJ and HHS summary statements that provide peer review comments for unsuccessful applications that tribal governments and Native American organizations submitted for these grant programs noted various weaknesses within these unsuccessful applications.

Perspectives from tribal governments and Native American organizations. We collected perspectives from a non-generalizable sample of 10 tribal governments and Native American organizations on what federal practices they find helpful or challenging when applying for grant programs related to preventing or addressing delinquency among Native American youth. Regarding helpful federal practices during the application process, the tribal governments and Native American organizations most frequently responded that they found it particularly helpful to be able to call or meet with federal officials if they had questions about, or needed help with, their applications. For example, representatives from one federally recognized tribe in Nevada explained that some agencies have help desks that provide a systematic walkthrough of technical issues applicants might encounter when applying for grant programs. In addition, officials from a tribal nonprofit corporation in Alaska that represents several federally recognized tribes stated that attending grantee meetings and having face-to-face contact with agency officials to ask questions was very useful when applying for a particular HHS award.

Officials from 9 of the 10 tribal governments and Native American organizations provided the following perspectives on the biggest challenges they have faced when applying to receive federal grant program funding.

The window available for applying for federal grant programs is too short. Six of 9 tribal governments and Native American organizations noted this as a challenge. For example, officials from a federally recognized tribe based in the Southwest said that the tribe's biggest challenge is a short turnaround, usually 4 to 8 weeks, from a grant program's funding opportunity announcement to its deadline. Similarly, officials from a federally recognized tribe in Oklahoma suggested that federal agencies provide longer application periods for grant programs. These officials added that more time would allow the tribes to coordinate amongst themselves better, prepare stronger applications, and obtain the necessary tribal approvals for a grant program.

Collecting data for grant program applications is difficult. Four of 9 tribal governments and Native American organizations we spoke with noted this as a challenge. For example, a representative from a federally recognized tribe in Nevada stated that the tribe needs accurate data for its grant applications to describe the tribe and its needs, yet the tribe does not currently have quality data on issues such as substance abuse or youth employment. In addition, officials from a tribal nonprofit corporation in Alaska that represents several federally recognized tribal governments told us that the biggest challenge in preparing a CTAS application is collecting data specific to their tribes' region. These officials explained that for reports on juvenile justice, their tribes' region is sometimes grouped with another area, which makes it difficult to extract data specific to their tribes. According to these officials, due to the challenges in obtaining these data, it is difficult to prepare grant applications that address gaps and needed services.
Scarcity of grant writers and other personnel makes it difficult to complete a quality application. Four of 9 tribal governments and Native American organizations noted this as a challenge. For example, officials from a federally recognized tribe in Oklahoma said that not having a grant writer is a significant challenge for the tribe when applying for federal grant programs. These officials mentioned that additional training sessions on grant writing and feedback from grant reviewers would help the tribe prepare stronger applications. In addition, representatives from a federally recognized tribe in Oregon stated that they encounter challenges with the research and evaluation requirements of some grant programs because hiring someone to fulfill this role can take 2 to 3 months and the number of qualified individuals in their service area is limited.

Perspectives from DOJ and HHS officials. We also obtained perspectives from officials from DOJ OJP and seven HHS operating divisions on reasons why some tribal governments and Native American organizations might be more successful than others in applying for federal funding, as well as the challenges these entities face when applying for federal funding. According to DOJ and HHS officials, some of the reasons why some tribal governments and Native American organizations might be more successful than others in applying for federal funding include the following:

Larger and better-resourced tribal governments and Native American organizations are more successful at applying for federal funding. For example, DOJ OJP officials explained that larger tribes with more resources are more successful at applying for grant programs because they are able to hire grant writers to assist with applications. In addition, officials from HHS's SAMHSA noted that successful applicants are usually larger tribes that have ample resources and experienced staff to write proposals for federal funding. HHS CDC officials stated that larger and better-resourced tribes with sufficient public health infrastructure and capacity tend to apply more and to be more competitive when they do.

Tribal governments and Native American organizations that have received federal funding before are more likely to be successful again. Specifically regarding the CTAS program, DOJ OJP officials explained that once tribes succeed with one CTAS application, they are typically successful on subsequent CTAS submissions because they use the successful application as a template. In addition, officials from HHS's Indian Health Service explained that tribes that are repeat grantees might be more likely to submit applications to even more grants because they are well-versed in the process. Moreover, officials from SAMHSA explained that tribes that have previously received federal funding might be better equipped to document their experience in a specific area in subsequent grant applications.

According to agency officials, one of the biggest organizational challenges that tribal governments and Native American organizations encounter when applying to receive federal grant program funding is obtaining and retaining staff. For example, officials from HHS's National Institutes of Health stated that limited scientific and grant-writing staff, as well as high staff turnover within tribes, pose the biggest challenges tribes face when applying for federal funding.
Officials from HHS’s CDC and Administration for Children and Families operating divisions also identified limited grant writing staff as one of the biggest challenges that tribal governments and Native American organizations face when applying to receive federal funding from grant programs. Moreover, officials from HHS’s SAMHSA explained that tribes have difficulty finding qualified staff to live and work in the remote areas where many tribes are located. Finally, DOJ OJP officials explained that some tribes might not have sufficient resources more generally to put together a competitive application due to specific tribal government structures and justice systems being relatively new compared to state and local governments. Review of summary statements on unsuccessful applications. We reviewed a sample of 29 DOJ summary statements from fiscal years 2015 through 2017 that provided peer review comments for unsuccessful applications that tribal governments and Native American organizations submitted for the grant programs we identified. These summary statements most frequently cited the following overall weaknesses within the unsuccessful applications from tribal governments and Native American organizations: Application contained unclear or insufficient details on how the applicant would implement or achieve outcomes of the proposed program (19 of 29 peer review summary statements); Application contained unclear or insufficient details on how the applicant would measure the success or ensure the sustainability of the proposed program (15 of 29 peer review summary statements); Application contained unclear or insufficient details on the budget of the proposed program (14 of 29 peer review summary statements); Applicant submitted a poorly written or organized application (12 of 29 peer review summary statements); Application contained unclear or insufficient data/statistical information to support the proposed program (12 of 29 peer review summary statements); and Application contained unclear or insufficient details on the goals and objectives of the proposed program (11 of 29 peer review summary statements). We also reviewed a sample of 30 HHS peer review summary statements from fiscal years 2015 through 2017 provided to tribal governments and Native American organizations that unsuccessfully submitted applications for the grant programs we identified. Specifically, all of these statements contained a section that evaluated the strengths and weaknesses of the applicant’s proposed approach or plan for implementing the grant program funding. These 30 statements most frequently cited the following weaknesses in that section: Insufficient details regarding activities or strategies of proposed approach or plan (24 of 30 peer review summary statements); Insufficient details on the goals or objectives of the proposed approach or plan (12 of 30 peer review summary statements); Insufficient details on the potential partners or stakeholders involved in the proposed approach or plan (12 of 30 peer review summary statements); Insufficient linkages between various elements in proposal or plan (11 of 30 peer review summary statements); Insufficient details on the project timeline presented within the proposed approach or plan (9 of 30 peer review summary statements); and Insufficient details on how the applicant organization would staff the proposed approach or plan (8 of 30 peer review summary statements). 
We asked officials from the tribal governments and Native American organizations from which we collected perspectives how useful they found the feedback federal agencies provided through peer review comments or other means on unsuccessful grant program applications since fiscal year 2015. Some tribal governments and Native American organizations found the feedback useful, while others noted that feedback was sometimes not particularly helpful. For example, officials from a tribal university affiliated with a federally recognized tribe based in the Southwest noted that they have received helpful feedback on unsuccessful applications through e-mail correspondence. However, officials from a tribal nonprofit corporation in Alaska that represents several federally recognized tribes noted that the peer review feedback they received was inconsistent from year to year. Meanwhile, officials from a federally recognized tribe in Oklahoma noted that they have found the peer review feedback to be helpful overall and that they use the feedback to address weaknesses and reinforce strengths in future applications.

Agency Comments

We provided a draft of this report to DOJ, HHS, DOI, the Administrative Office of the United States Courts, the U.S. Sentencing Commission, and the Department of Education for review and comment. DOJ, DOI, and the Administrative Office of the United States Courts provided technical comments that we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Attorney General, Secretary of Health and Human Services, Secretary of the Interior, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gretta L. Goodwin at (202) 512-8777 or GoodwinG@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

This report addresses (1) what available data show about the number and characteristics of Native American youth in the federal, state and local, and tribal justice systems; and (2) what discretionary grant programs federal agencies fund that could help prevent or address delinquency among Native American youth, and the extent to which tribal governments and Native American organizations have access to them. To address the first objective, we obtained and analyzed record-level and summary data from federal, state and local, and tribal justice systems from 2010 through 2016. Figure 18 illustrates the data sources we included in our report for each phase of the justice process (arrest, adjudication, and confinement) in each justice system (federal, state and local, and tribal). Generally, state and local entities include those managed by states, counties, or municipalities. As figure 18 illustrates, we utilized a number of data sources. When analyzing the data, certain characteristics and a number of methodological decisions were applicable to multiple data sources:

Generally, state and local data we obtained were maintained by calendar year. In contrast, federal data were maintained by fiscal year. For purposes of this report, we refer where appropriate to calendar years or fiscal years in presenting the results of our analysis.
Generally, the record-level and summary data we analyzed included information about youth who had come into contact with the justice systems, such as their age, race, gender, type of offense, and the year they came into contact with the justice system. For purposes of our analysis, we defined youth to include persons who were under 18 years of age at the time of arrest, adjudication, or confinement, unless otherwise noted. In many instances, the agencies calculated the youth's age for us and placed the record in one of the following age categories: under 13, 13-14, and 15-17.

For purposes of our analysis, we identified Native American youth as defined by each data source and identified by the agencies providing the data. For example, the Department of Justice (DOJ) Federal Bureau of Investigation's (FBI) Uniform Crime Reporting (UCR) Summary Reporting System (SRS) data use the race category "American Indian or Alaska Native" and include persons having origins in any of the original peoples of North and South America (including Central America) who maintain tribal affiliation or community attachment. In comparison, the Executive Office for United States Attorneys (EOUSA), in its prosecution data, defines the term Indian based on statute and case law, which generally considers an Indian to have both a significant degree of Indian blood and a connection to a federally recognized tribe. If a record did not contain race information, we did not include the record in any of our analyses.

In regard to type of offense, unless otherwise noted, we obtained and analyzed information about the lead or most serious offense associated with the youth who came into contact with the federal or state and local justice systems. The data sources contained hundreds of specific offenses, such as simple assault, illegal entry, and rape. To assist our analysis of the data, we took the following steps:

1. We categorized specific offenses for all data sources into 1 of 22 offense categories, such as assault, immigration, and sex offense. To determine the 22 categories, we considered categories used in our prior work and consulted FBI's UCR offense codes. The placement of specific offenses into offense categories was carried out by an analyst, reviewed by additional analysts, and confirmed by an attorney.

2. We grouped the offense categories into five broad categories—drug and alcohol, person, property, public order, and other. To determine the five broad categories, we considered categories presented in the National Center for Juvenile Justice's (NCJJ) annual Juvenile Court Statistics reports. The placement of offense categories into a broad category was carried out by an analyst and confirmed by an attorney.

Table 18 describes the five broad categories and 22 offense categories. A simplified sketch of this two-step grouping follows.
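The short Python sketch below illustrates the two-step grouping described above: a record's specific offense is mapped first to an offense category and then to a broad category. The sketch is illustrative only; the offense labels and category assignments shown are hypothetical examples rather than the actual mappings we applied, which are described in table 18.

# Illustrative sketch of the two-step offense grouping described above.
# The labels and assignments below are hypothetical examples, not the
# actual mappings used in this analysis (see table 18).

# Step 1: map each specific offense to 1 of the 22 offense categories.
OFFENSE_TO_CATEGORY = {
    "simple assault": "assault",
    "aggravated assault": "assault",
    "illegal entry": "immigration",
    "rape": "sex offense",
    "shoplifting": "larceny/theft",
}

# Step 2: map each offense category to 1 of the 5 broad categories.
CATEGORY_TO_BROAD = {
    "assault": "person",
    "sex offense": "person",
    "immigration": "public order",
    "larceny/theft": "property",
}

def broad_category(specific_offense: str) -> str:
    """Return the broad category for a record's lead or most serious offense."""
    category = OFFENSE_TO_CATEGORY.get(specific_offense.lower())
    if category is None:
        return "other"  # offenses not otherwise classified
    return CATEGORY_TO_BROAD.get(category, "other")

print(broad_category("Simple Assault"))  # person
print(broad_category("Illegal Entry"))   # public order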
Some data sources contained additional information about youth, such as the youths' geographic location (i.e., state or U.S. Circuit), the outcome of the youths' involvement with the justice system (e.g., adjudicated delinquent; placed in a facility or on probation), the type of facility where the youth was placed (e.g., private, state, tribal), length of sentence, distance between the youth's residence and facility, and time in confinement. Generally, record-level information contained in these data systems is collected when the youth comes into contact with the justice system. In some instances, youth provide certain information (e.g., gender and race) to justice system officials. In other cases, justice officials obtain information from documentation associated with the youth, such as identification documents (e.g., tribal enrollment certifications) or pre-sentence investigation reports. Several of the record-level data sets we obtained were administrative data maintained by agencies. These data generally included information generated as cases are handled and are used to help the agency manage its operations. In particular, we obtained and analyzed record-level and summary data from the following federal, state and local, and tribal data sources:

Record-level data from four DOJ agencies:

1. The United States Marshals Service's (USMS) Justice Detainee Information System. This data system is USMS's case management system for prisoners in custody, among other things. USMS provided us a data set with 1,589 records for youth admitted into USMS custody after being arrested by a law enforcement agency (LEA). Our analysis focused on the following key variables: fiscal year of custody start date, race, gender, age category, original offense description, arresting agency, and circuit. USMS collects information about individuals admitted into custody. USMS receives youth from various LEAs and collects information on the LEA that arrests the individual. We limited our analysis to youth arrested by federal agencies (e.g., FBI) and did not include youth who had been arrested by non-federal LEAs (e.g., municipalities). USMS custody data may not represent all individuals arrested by federal agencies, but they identify a minimum number of arrests for a given period. We used USMS custody data because we did not identify a data source for all federal arrests. The data USMS provided us were limited to individuals who were under 18 when they were admitted to USMS custody, and USMS determined the age category for each record.

2. EOUSA's Legal Information Office Network System. This data system was EOUSA's case management system for tracking declinations and litigation in criminal matters and cases, among other things. EOUSA provided us a data set with 2,361 records for suspects received. Our analysis of EOUSA data focused on the following key variables: fiscal year suspect was received, Native American status, age category, lead charge, circuit, and disposition. EOUSA used multiple variables from its Legal Information Office Network System to confirm that the individual was under 18. However, for 25 percent of the records (583 of 2,361), EOUSA could not provide an age category for the juvenile because the age was either unknown or EOUSA officials questioned the age information. When we analyzed the data by age categories, we excluded records with unknown or unreliable age categories. However, we included all EOUSA records when we analyzed other variables contained in the EOUSA data (e.g., offense). To analyze the offense associated with the individual, we used EOUSA's "lead charge" variable, which consists of statutory citations. To identify the offense, we researched each statutory citation.

3. The Office of Justice Programs' (OJP) Census of Juveniles in Residential Placement (CJRP). This data source contains data collected through a biennial census of state and local (not federal) residential facilities housing youth in 2011, 2013, and 2015 that was administered by the United States Census Bureau on behalf of OJP. OJP provided us a data set with 165,141 records.
Our analysis of CJRP data focused on the following key variables: calendar year, age group, race, facility state, gender, most serious offense, agency type (who placed the individual), facility type, and time in placement. State and local facilities include those managed by states, counties, municipalities, and tribal governments, as well as private facilities, among others. CJRP has historically achieved response rates near or above 90 percent. However, participation in the CJRP is voluntary, and response rates from tribal facilities have been lower. The source for the information collected by the census, such as age, was administrative records maintained by individual residential facilities. These data include youth who were in custody on the day of the census. We limited our analysis of the CJRP data to (1) individuals who were under the age of 18 on the date of the census and (2) youth who had been adjudicated—we did not include youth who were awaiting trial or whose adjudication was pending. Finally, we excluded youth who were located in the Virgin Islands and Puerto Rico because no other data set appeared to include data for these geographic areas. The data set we analyzed contained 98,830 records.

4. Federal Bureau of Prisons' (BOP) SENTRY data system. This system is BOP's case management system for tracking information (e.g., admission type and sentencing) about prisoners in BOP's custody. For this review, BOP provided two data sets. One data set was limited to youth who were adjudicated and sentenced to a facility overseen by BOP and contained 1,324 records. Our analysis of these BOP data focused on the following key variables: fiscal year sentenced, age category, race, offense, and sentence length. BOP determined the age category for each record, and the data were limited to individuals who were under 18. The second data set included youth who were admitted into a facility overseen by BOP but were not necessarily adjudicated and contained 925 records. Our analysis of these BOP data focused on the following key variables: fiscal year admitted, race, distance, and admission assignment. BOP ensured the data were limited to individuals who were under 18. BOP provided the distance information by calculating the distance between a juvenile's residence and the facility where the juvenile was placed. To analyze the distance information, we created two categories of admission types: juveniles under the supervision of the United States Probation Office and juveniles in the custody of BOP. Our analysis of these four DOJ data sources extended through 2016 because those were the most current data available when we initiated our analysis in April 2017.

Record-level data from the Department of the Interior's (DOI) Bureau of Indian Affairs (BIA) for youth arrested and admitted to three BIA-operated juvenile detention centers. We reviewed juvenile detention documents maintained by the three centers: Northern Cheyenne, Standing Rock, and Ute Mountain Ute. The types of documents included admission sheets and arrestee custody receipts, among others. We created a data set of admissions to the three centers using information contained in the documents provided. Our data set contained 956 records and included the following variables: unique ID, admission date, and charges (offense). Documents contained information about multiple offenses for individual admissions and did not identify the most serious or lead offense. As such, we included all offenses in our analysis of the centers' information.
Our analysis of this information was limited to 2012 through 2016 because records prior to 2012 were not available for any center when we initiated our analysis in April 2017. However, our data set does not contain records for 2012 for the Ute Mountain Ute center because that center did not have any of the source documents before 2013. Also, our data set does not contain records for 2012 through 2015 for the Standing Rock Youth Services Center because that center opened in May 2016.

Summary data from DOJ's FBI UCR SRS. The FBI's primary objective is to generate a reliable set of crime statistics for use in law enforcement administration, operation, and management. FBI provided us with 7 years of data in separate annual files, which initially contained 1,529,736 gender-specific records. To analyze race, we summarized the data across gender. In addition, the records included summary records for drug and gambling offenses as well as records for specific drug offenses (e.g., sale, possession) and gambling offenses (e.g., bookmaking, lottery). To prevent over-counting, we excluded the records for specific drug and gambling offenses from our analysis. These steps reduced our data set to 582,089 records. Our analysis of UCR SRS data focused on the following key variables: calendar year, race, state, and offense. The majority of law enforcement agencies submit arrest data to the FBI through the UCR program. In 2016, about 90 percent of city, county, university and college, state, tribal, and federal agencies eligible to participate in the UCR Program submitted data (16,782 of 18,481). Although UCR SRS predominantly contains data from state and local LEAs, some federal and tribal LEAs report data into SRS. Agencies submit data monthly that must meet UCR's data quality guidelines, such as using uniform definitions. No data are available for the state of Florida because, according to DOJ officials, Florida does not follow UCR guidelines and reports only arrest totals, which cannot be housed in the UCR SRS database. Further, a few states reported limited arrest data during the scope of our review (e.g., Illinois). Our analysis of these data extended through 2016 because those were the most current data available when we initiated our analysis in April 2017.

Summary data from the NCJJ Easy Access to Juvenile Court Statistics (NCJJ's juvenile court data), which is supported through funding from DOJ's OJP. NCJJ obtains case-level and court-level data from state and local juvenile courts. This online juvenile court data source is an interactive web-based application that allows users to analyze the actual databases that NCJJ uses to produce its annual Juvenile Court Statistics reports. The summary data available represent national estimates of delinquency cases handled by U.S. courts with juvenile jurisdiction. Our analysis of these data was limited to 2010 through 2014 because those were the most current data available when we conducted our analysis. The summary data we downloaded contained 86,400 cases spanning calendar years 2010 through 2014. Our analysis of NCJJ's juvenile court data online focused on the following key variables: calendar year, race, offense, gender, and age.

Summary data included in DOJ's Bureau of Justice Statistics reports, such as the Jails in Indian Country report from 2016.
This report provides information gathered from Bureau of Justice Statistics' annual survey of Indian country jails, and includes all Indian country correctional facilities operated by tribal authorities or BIA. Our analysis of these data was limited to the survey reports covering years 2014, 2015, and 2016, and contained the number of Native American youth confined in tribally operated jails in Indian country as of June each year.

We assessed the reliability of the record-level and some of the summary data by conducting electronic testing of the data and interviewing knowledgeable agency officials about the data systems. We assessed the reliability of the remaining summary data by interviewing knowledgeable agency officials about the summary data. We determined that the record-level and summary data sources included in this report were sufficiently reliable for the purposes of our reporting objectives. We determined that some record-level and summary data sources, such as certain data related to arrests and sentencing information, contained information already provided by other data sources or contained too few Native American youth observations to provide reliable, reportable information. We did not include these data sources in our report. We also determined that some data variables in certain data sources were not reliable for our purposes. For example, two data sources did not contain reliable information for the race of individuals. We did not include these data sources in our report.

For each data source, we calculated the number and percent of Native American and non-Native American youth involved at each phase of the justice process (arrest, referral for adjudication, and confinement) for all three justice systems (federal, state and local, and tribal), where data were available. Generally, non-Native American records included Asian, Black, and White. Some data sources included other race categories—such as Pacific Islander and Hispanic. We then analyzed the characteristics of youth involvement with the justice system, such as the youths' race, age category, gender, type of offense, geographic location, outcome of the youths' involvement with the justice system, type of facility where the youth was placed (e.g., private, state, tribal), length of sentence, distance between youth's residence and facility, and time in confinement, where data were available. If a record was missing a value for the characteristic we were analyzing (e.g., race, offense, gender, or age)—for example, the value was either blank or was "unknown"—we did not include the record in the analysis of that characteristic.

We also analyzed the representation of Native American youth involved with the federal and state and local justice systems by comparing justice system data to 2010 U.S. Decennial Census information and U.S. Census estimates from 2011 to 2016. Specifically, for the federal system, we identified the representation of Native American youth in USMS's custody data, EOUSA's adjudication data, and BOP's confinement data for fiscal years 2010 through 2016. We then identified the representation of Native American youth among the total youth population for the United States from the 2010 U.S. Decennial Census (as of April 1, 2010) and its updated estimates from 2011 through 2016 (as of July 1 for each year).
Using these data, we compared the representation of Native American youth among each component of the federal justice system (i.e., USMS custody, EOUSA adjudication, and BOP confinement) to the representation of Native American youth among the total youth population for the United States. Similarly, we made this comparison for individual states. To do this, we identified the representation of Native American youth in FBI's UCR SRS arrest data as well as CJRP's confinement data for individual states. We then identified the representation of Native American youth among the youth population for individual states from the U.S. Census's 2010 decennial data and its updated estimates from 2011 through 2016. Using these data, we compared the representation of Native American youth among state and local justice systems (i.e., FBI's UCR SRS arrest and CJRP's confinement data) to the representation of Native Americans among the youth population for individual states.
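The following is a minimal sketch of this type of representation comparison. The counts are illustrative stand-ins, not the actual GAO, Census, or justice system figures; the calculation simply contrasts a group's share of a justice data set with its share of the underlying youth population.

```python
def share(group, total):
    return group / total

# Illustrative counts, not actual GAO, Census, or agency figures.
justice = {"native": 1_800, "total": 10_000}          # e.g., arrests
population = {"native": 16_000, "total": 1_000_000}   # youth population

j = share(justice["native"], justice["total"])         # 0.18
p = share(population["native"], population["total"])   # 0.016
print(f"justice share {j:.1%} vs. population share {p:.1%} "
      f"({j / p:.1f}x the population share)")
```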
Because there is no single, centralized data source that contains data for youth involved in all justice systems (federal, state and local, tribal) and across all phases of the justice process (arrest, adjudication, confinement), it is not possible to track individuals through all phases of the justice system or identify the number of unique youth who have come into contact with the justice systems. Further, data are not comparable across data sources because data sources vary in how they define Native American and how they determine whether youth are Native American. Some federal agencies, such as USMS and BOP, share a unique identifier for an individual within the federal data sources. However, for purposes of this review and given privacy concerns related to juvenile data, we were unable to track individuals across phases of the federal justice system.

We also collected perspectives from agency officials and five Native American organizations regarding factors that might contribute to the data characteristics we observed. We selected the five Native American organizations to include organizations whose mission and scope focus in whole or in part on Native American juvenile justice issues and that have a national or geographically specific perspective. The views of these organizations are not generalizable to all Native American organizations but provide valuable insights.

To address our second objective on discretionary grant programs that federal agencies fund that could help prevent or address delinquency among Native American youth, we analyzed discretionary grants and cooperative agreements available for funding from fiscal years 2015 through 2017. To identify the discretionary grants and cooperative agreements, we conducted a keyword search of "youth or juvenile" in Grants.gov—an online repository that houses information on over 1,000 different grants across federal grant-making agencies. For the purposes of this review, we define "discretionary grant programs" to include both discretionary grants and cooperative agreements. We focused on discretionary grants and cooperative agreements because federal agencies generally award them to an array of entities based on a competitive review process, whereas federal agencies are generally required by statute to limit awards from the other types of grants to specific entities, typically U.S. state, local, and territorial governments.

We then reviewed the search results of the three agencies with the highest number of grant program matches—DOI, DOJ, and the Department of Health and Human Services (HHS). Two analysts independently read the Grants.gov summary descriptions of the programs included in these search results and selected programs for which the description related to:

risk or protective factors discussed in the DOJ Office of Juvenile Justice and Delinquency Prevention (OJJDP) Tribal Youth in the Juvenile Justice System literature review;

risk or protective factors identified in the July 2015 Technical Assistance Network for Children's Behavioral Health brief on American Indian and Alaska Native Youth in the Juvenile Justice System;

juvenile justice system reform principles, findings, or recommendations identified in Chapter 4 of the November 2014 Attorney General's Advisory Committee on American Indian/Alaska Native Children Exposed to Violence report, Ending Violence so Children Can Thrive; or

proposals to reform the juvenile justice system identified in Chapter 6 of the November 2013 Indian Law and Order Commission Report to the President and Congress of the United States, A Roadmap for Making Native America Safer.

We also used the following principles to identify and select relevant grant programs:

We excluded grant programs that focused specifically on victims as opposed to at-risk youth or offenders.

We included grant programs that specify tribes or Native Americans if the program's funding opportunity announcement mentioned youth explicitly.

We included grant programs that do not specify tribes or Native Americans as a primary beneficiary if the program's funding opportunity announcement mentioned youth explicitly and if the program focused primarily on serving youth populations.

After two analysts independently completed their initial determinations of which grant programs they considered relevant, they either confirmed their agreement or discussed any differences of opinion until they reached a consensus. If they could not reach agreement on whether a given program was relevant, a third, supervisory analyst made the final determination. We also reviewed the grant program funding opportunity announcements on HHS and DOJ's websites and worked with officials from these agencies to identify any additional grant programs that could be relevant for the purposes of our review. We provided a list of the grant programs that we identified to DOJ and HHS for confirmation both during and after fiscal year 2017. Our final list of grant programs includes 122 programs. Despite these steps, it is possible that our analysis did not identify all relevant grant programs.

We next determined which of the 122 grant programs we identified specified tribes or Native Americans as a primary beneficiary and which did not by reviewing funding opportunity announcements for the programs to determine if the funding opportunity announcement's title, executive summary, overview, or purpose specifically referenced tribes or Native Americans as the main or one of few beneficiaries of the proposed grant program funding. After a first analyst completed initial determinations of which of the grant programs specified tribes or Native Americans as a primary beneficiary, a second analyst reviewed those determinations and either confirmed agreement or discussed any differences of opinion until both analysts reached a consensus. We categorized each program into one or more issue areas (e.g., violence or trauma, substance abuse, mentoring).
We used the risk and protective factors discussed in the OJJDP Tribal Youth in the Juvenile Justice System literature review as initial issue areas and added additional areas, as needed, for programs that did not fit within the initial areas. To determine the extent to which tribal governments or Native American organizations had access to the 122 grant programs, we reviewed both the eligibility of those organizations to apply for the grant programs and their level of success in applying for the grant programs. We defined "tribal governments" as the governing bodies of federally recognized tribes. We defined "Native American organizations" as organizations affiliated with federally recognized tribes, such as tribal colleges and universities, as well as non-tribal organizations that focus on serving Native American populations, such as urban Indian organizations. To determine whether tribal governments or Native American organizations were eligible to apply for the grant programs we identified, an analyst first reviewed the eligibility information within each grant program's funding opportunity announcement. In instances where the analyst could not definitively determine whether tribal governments or Native American organizations were eligible to apply for a given grant program, the analyst reviewed the program's Grants.gov synopsis or followed up with agency officials. After the analyst made an initial determination of eligibility, a second analyst reviewed those determinations and either confirmed agreement or discussed any differences of opinion until both analysts reached a consensus. We also consulted with DOJ and HHS officials regarding those grant programs for which tribal governments or Native American organizations were ineligible to apply to determine the reasons why.

To determine tribal governments and Native American organizations' level of success in applying for the grant programs, we analyzed fiscal year 2015 through 2017 award data for the programs to determine the extent to which tribal governments and Native American organizations received funding from them. We also reviewed a non-generalizable sample of applications from 37 grant programs to determine the extent to which tribal governments and Native American organizations applied for these grant programs. Specifically, we requested the sample of applications from each of the five DOJ OJP offices and bureaus and seven HHS operating divisions from which we identified the 122 grant programs that either had a relatively larger estimated total program funding amount on Grants.gov for fiscal years 2015, 2016, or 2017 than other grant programs within the same OJP offices or HHS operating divisions, or had specified tribes or Native Americans as a primary beneficiary. We assessed the reliability of the data we used by questioning knowledgeable officials. We determined that the data were sufficiently reliable for the purposes of this report.
To determine some of the factors that affected the ability of tribal governments and Native American organizations to apply successfully for grant programs that could help prevent or address delinquency among Native American youth, we:

interviewed or received written responses from DOJ and HHS officials to obtain their perspectives;

interviewed or received written responses from representatives from a non-generalizable sample of 10 tribal governments and Native American organizations that applied for or received funding from one or more of the 122 grant programs; and

reviewed a non-generalizable sample of 29 DOJ and 30 HHS peer review summary statements from unsuccessful applications that tribal governments and Native American organizations submitted for selected grant programs that we identified as relevant for the purposes of this review.

We selected our non-generalizable sample of tribal governments and Native American organizations to include those that received multiple awards from relevant grant programs; tribal governments and Native American organizations that applied unsuccessfully for more than one relevant grant program; tribal governments with juvenile detention centers with the highest average daily populations in 2016; and tribal governments located in the states with the largest number of juvenile offenders in residential placement per 100,000 juveniles for American Indians, according to the 2015 Easy Access to the Census of Juvenile Residential Placement. We analyzed the results of our interviews with representatives of the tribal governments and Native American organizations as well as with agency officials to discern possible themes regarding factors that affect tribal governments and Native American organizations' ability to apply successfully for the relevant grant programs we identified. We selected the non-generalizable sample of peer review summary statements from grant programs that had a larger estimated total program funding amount on Grants.gov for fiscal years 2015, 2016, or 2017 than other grant programs within the same OJP offices or HHS operating divisions or had specified tribes or Native Americans as a primary beneficiary. However, if we could not identify an application from a tribal government or Native American organization from a given grant program from which we requested applications, we did not request peer review summary statements from that program. We then conducted a content analysis of the weaknesses that peer reviewers noted in the summary statements for applications submitted by tribal governments or Native American organizations in order to discern common themes. The information we obtained from the agency officials as well as representatives of the tribal governments and Native American organizations cannot be generalized more broadly to all tribal governments and Native American organizations or the applications they submitted for federal funding from fiscal year 2015 through 2017. However, the information provides important context and insights into the challenges tribal governments and Native American organizations face in applying for federal funding that could help prevent or address delinquency among Native American youth, as well as some of the common weaknesses that DOJ and HHS peer reviewers identified in unsuccessful applications from tribal governments and Native American organizations. We conducted this performance audit from November 2016 through September 2018 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Definitions and Agency Determinations of Native American Status in Data Sources

Appendix III: Actions Agencies Reported Taking Related to Selected Task Force and Commission Recommendations

The 2014 Attorney General Task Force report, Ending Violence so Children Can Thrive, and the 2013 Indian Law and Order Commission report, A Roadmap for Making Native America Safer, both recommended actions related to Native American youth and youth justice issues. These recommendations included actions federal agencies could take to address some of the challenges noted in the reports, such as exposure to violence, abuse and neglect, and poverty. Table 20 provides examples of actions relevant federal agencies reported taking related to these recommendations.

Appendix IV: Native American Youth Involvement with Tribal Justice Systems

Comprehensive data from tribal justice systems on the involvement of Native American youth were not available. However, we identified and reviewed a few data sources that provided certain insights about the arrest, adjudication, and confinement of Native American youth by tribal justice systems. Following is a summary of our analysis of data from these sources.

Arrests. Although comprehensive data on the number of tribal law enforcement agency (LEA) arrests were not available, we obtained and reviewed admission records from three juvenile detention centers in Indian country managed by the Department of the Interior's Bureau of Indian Affairs (BIA). Based on those records, at least 388 Native American tribal youth were admitted to these three facilities in 2016, as shown in table 21. At the Northern Cheyenne facility, for which we obtained records for 5 years, the number of youth admitted increased yearly between 2012 and 2016, from 14 to 204. According to BIA officials, this growth in the number of youth admitted to the Northern Cheyenne facility likely reflects an increase in admissions of Native American youth from surrounding tribes. Specifically, because the Northern Cheyenne facility is centrally located, they said it admits youth from other tribes, which have grown accustomed to sending their youth to the facility. BIA officials also noted that the Northern Cheyenne facility services an area where there is a high rate of delinquency among youth, and because the facility works well with Native American youth struggling with delinquency issues, many tribes elect to send their delinquent youth to the facility. Further, since 2012, the Northern Cheyenne facility increased its bed space and staff, thus increasing its capacity to admit more youth, according to BIA officials. Even though comprehensive tribal arrest data are not available, DOJ's Bureau of Justice Statistics (BJS) is currently undertaking an effort to increase collection of arrest data from tribal LEAs. Specifically, this data collection activity is the Census of Tribal Law Enforcement Agencies. This collection activity, which BJS plans to conduct in 2019, is to capture information including tribal LEA workloads and arrests, tribal LEA access to and participation in regional and national justice database systems, and tribal LEA reporting of crime data into FBI databases.

Adjudication.
Comprehensive data were not available to describe the extent to which tribal courts processed Native American youth, or adjudicated them delinquent or found them guilty. However, BJS concluded a tribal court data collection effort—the National Survey of Tribal Court Systems—in 2015. Through this survey, BJS gathered information from more than 300 tribal courts and other tribal judicial entities on their criminal, civil, domestic violence, and juvenile caseloads, and pretrial and probation programs, among other things. DOJ officials told us that BJS has analyzed the data and plans to release results in the future.

Confinement. According to data published by DOJ's Bureau of Justice Statistics, the number of youth in Indian country jails declined from 190 in 2014 to 170 in 2016 (about an 11 percent decrease).

Appendix V: Selected Grant Programs That Could Help Prevent or Address Delinquency among Native American Youth

[Appendix V consists of a table, not reproduced here, listing the selected grant programs by agency and office: Department of Justice (73 grant programs), including the Office of Juvenile Justice and Delinquency Prevention (OJJDP), the Bureau of Justice Statistics (BJS), and the Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking; and Department of Health and Human Services (49 grant programs), including the Substance Abuse and Mental Health Services Administration (SAMHSA), the Administration for Children and Families, the Office of Minority Health, and the Centers for Disease Control and Prevention (CDC). For each grant program, the table indicates whether the program specified tribes or Native Americans as a primary beneficiary (Yes/No) and whether tribal governments or Native American organizations were eligible to apply (Yes/No).]

For the purposes of the review, we define "tribal governments" as the governing bodies of federally recognized tribes and "Native American organizations" as organizations affiliated with federally recognized tribes, such as tribal colleges and universities, as well as non-tribal organizations that focus on serving Native American populations, such as urban Indian organizations. According to DOJ officials, the National Intertribal Youth Leadership Development Initiative grant program had no successful applicants in fiscal year 2017.
Tribal governments and Native American organizations are eligible to apply for the Drug-Free Communities Support Program, thus making them potentially eligible for the Sober Truth on Preventing Underage Drinking Act Grants program. The Health Resources and Services Administration issued a fiscal year 2017 Behavioral Health Workforce Education and Training for Paraprofessionals and Professionals funding opportunity announcement, but according to an agency official, the fiscal year 2017 funding opportunity announcement does not focus on professionals who provide services to youth, whereas the fiscal year 2016 funding opportunity does.

Appendix VI: GAO Contacts and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Taylor Matheson, Assistant Director; Tonnye' Conner-White, Analyst-in-Charge; Anne Akin; Steven Rocker; and Emily Flores made key contributions to this report. Also contributing were Jessica Ard; Melinda Cordero; Elizabeth Dretsch; Eric Hauswirth; Kristy Love; Grant Mallie; Amanda Miller; Heidi Nielson; and Claire Peachey.
Why GAO Did This Study

Native American youth face unique challenges when it comes to their contact with justice systems. Research shows that risk factors such as high rates of poverty and substance abuse make them susceptible to being involved with justice systems at the federal, state and local, and tribal levels. GAO was asked to examine the extent of Native American youth involvement in justice systems, and federal grant programs that may help address Native American youth delinquency. This report examines (1) what available data show about the number and characteristics of Native American youth in federal, state and local, and tribal justice systems; and (2) federal discretionary grant programs that could help prevent or address delinquency among Native American youth, and tribal government and Native American organizations' access to those grants. GAO analyzed federal, state and local, and tribal arrest, adjudication, and confinement data from 2010 through 2016 (the most recent available) from DOJ and the Department of the Interior. GAO also analyzed DOJ and HHS grant program award documentation from fiscal years 2015 through 2017, and application information for a sample of the grant programs chosen based on the amount of funding awarded and other factors. GAO also interviewed officials from DOJ, HHS, and 10 tribal governments or Native American organizations chosen to include successful and unsuccessful applicants to the grant programs, among other things.

What GAO Found

GAO's analysis of available data found that the number of American Indian and Alaska Native (Native American) youth in federal and state and local justice systems declined across all phases of the justice process—arrest, adjudication, and confinement—from 2010 through 2016. During this period, state and local arrests of Native American youth declined by almost 40 percent, from 18,295 in 2010 to 11,002 in 2016. The vast majority of Native American youth came into contact with state and local justice systems rather than the federal system. However, Native American youth were involved in the federal system at a higher rate than their share of the nationwide youth population (1.6 percent). For example, of all youth arrested by federal entities during the period, 18 percent were Native American. According to Department of Justice (DOJ) officials, this is due to federal jurisdiction over certain crimes involving Native Americans. Comprehensive data on Native American youth involvement in tribal justice systems were not available for analysis. GAO's analysis showed several differences between Native American and non-Native American youth in the federal justice system. For example, the majority of Native American youths' involvement was for offenses against a person, such as assault and sex offenses. In contrast, the majority of non-Native American youths' involvement was for public order offenses (e.g., immigration violations) or drug or alcohol offenses. On the other hand, in state and local justice systems, the involvement of Native American and non-Native American youth showed many similarities, such as similar offenses for each group. DOJ and the Department of Health and Human Services (HHS) offered at least 122 discretionary grants and cooperative agreements (grant programs) from fiscal years 2015 through 2017 that could be used to address juvenile delinquency among Native American youth.
DOJ and HHS made approximately $1.2 billion in first-year awards to grantees during the period, of which the agencies awarded approximately $207.7 million to tribal governments or Native American organizations. Officials from the agencies, tribal governments, and Native American organizations identified factors they believe affect success in applying for grant programs. For example, some tribal governments and Native American organizations found it helpful to be able to call or meet with federal officials during the application process, but said that short application deadlines are a challenge.
Background

Noncompliance, including fraud, does not have a single source, but occurs across different types of taxes and taxpayers. It includes unintentional errors as well as intentional evasion, such as underreporting income, over-reporting expenses, and engaging in abusive tax shelters or frivolous tax schemes. IRS uses many approaches to address noncompliance, from sending notices to taxpayers to conducting complex audits. Many of these approaches can be burdensome to IRS and to taxpayers since they may occur years after taxpayers file their return. We have long highlighted the importance of strong preventive controls for detecting fraud because preventing payment of invalid refunds is easier and more cost-effective than trying to recover revenue through the pay-and-chase model of audits. IRS uses pre-refund compliance checks to confirm taxpayers' identities, quickly and efficiently correct some clerical and mathematical errors, and detect possible fraud and noncompliance. As shown in figure 1, RRP analyzes individual tax returns claiming refunds and identifies characteristics predictive of IDT and other refund fraud before IRS issues refunds for those returns. IRS reported that between January 2015 and November 2017, RRP prevented the issuance of more than $6.51 billion in invalid refunds. As of March 30, 2018, IRS reports spending about $419 million developing and operating RRP. For fiscal year 2019, IRS requested $106 million to operate and further develop RRP.

IRS Management of RRP

According to IRS, RRP supports data, analytical, and case processing activities conducted by employees working in revenue protection, accounts management, taxpayer communications, and criminal prosecution. IRS employees from across these areas coordinate to oversee the development and operation of the system (see fig. 2). Four IRS divisions work with IRS's Information Technology (IT) organization and Office of Research, Applied Analytics, and Statistics (RAAS) to develop, maintain, and operate RRP. The Wage and Investment (W&I) division leads the management of RRP with IRS's IT offices. W&I's audit programs cover mainly refundable credits claimed on individual income tax returns, and the division develops policy and guidance for RRP and other pre-refund programs that detect suspicious returns. Coordinating with other IRS divisions, W&I and IT update RRP as needed to reflect any new business rules or changes to existing business rules, for example. The Large Business and International division provides RRP with business requirements specific to large corporations. The Criminal Investigation division reviews and analyzes tax returns throughout the filing season to identify fraudulent patterns and trends to incorporate into RRP. The Small Business and Self-Employed division audits individual and business tax returns to detect misreporting. RAAS leads development of some of RRP's predictive models and IDT filters.

The Return Review Program Aims to Detect, Select, and Prevent Invalid Refunds More Accurately and Efficiently

As IRS's primary pre-refund system for detecting IDT and other refund fraud, RRP performs three major activities (see fig. 3).

Detection: RRP Uses Multiple Data Sources and Predictive Models, Among Other Techniques, to Detect Suspicious Returns

According to IRS, RRP uses advanced analytic techniques and evaluates data from various sources to assign multiple scores to individual returns claiming refunds.
The scores are based on characteristics of IDT and other refund fraud known to IRS. Higher fraud scores indicate a greater potential for refund fraud. IRS officials told us that RRP's design helps IRS identify increasingly sophisticated tax fraud. RRP's analytic techniques include the following:

Predictive models. IRS develops many different models that help detect emerging fraud, outliers, and taxpayer behavior inconsistencies in returns claiming refunds. These models also mine data and help IRS seek out patterns predictive of IDT and other refund fraud. For example, a model may use a combination of existing variables from the 1040 individual tax return, such as tax credits claimed and income.

Business rules. RRP contains over 1,000 rules (each producing a "yes" or "no" outcome) developed by IRS to flag returns for evidence of anomalous behavior. For example, RRP uses a business rule to distinguish returns for which it has received an associated Form W-2, Wage and Tax Statement (W-2), from those for which it has not.

Clustering. RRP uses a tool that reveals patterns and relationships in masses of data, allowing RRP to identify clusters of returns that share traits predictive of schemes and refund fraud. For example, IRS could use clustering to identify groups of returns that share the same geographic location, among other traits. According to IRS, this technique was developed to automate certain aspects of Criminal Investigation's identification of fraud schemes.

A number of systems connect to RRP and provide additional taxpayer data or third-party information for RRP to analyze. RRP contains taxpayers' prior three years' filing history, and third parties—employers, banks, and others—file information returns to report wages, interest, and other payment information to taxpayers and IRS. For example, the Social Security Administration sends W-2s to IRS. The W-2 information is loaded regularly into RRP, along with other information returns, to validate wage and income information reported on individual returns claiming refunds—a process IRS calls systemic verification.
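The report does not describe RRP's internal algorithms, but the following minimal sketch illustrates how outputs from a predictive model and a set of yes/no business rules might be combined into a composite fraud score for a return. The features, weights, rules, and combination method are all hypothetical assumptions for illustration.

```python
import math

def model_score(ret):
    # Toy logistic model over two invented features.
    z = 2.0 * (ret["refund"] / 10_000) + 1.5 * ret["filed_early"] - 3.0
    return 1 / (1 + math.exp(-z))  # score in (0, 1)

# Each business rule yields a "yes" or "no" outcome for a return.
BUSINESS_RULES = [
    ("no_w2_on_file", lambda r: not r["w2_received"]),
    ("new_bank_account", lambda r: r["bank_account_age_days"] < 30),
]

def score_return(ret):
    rule_hits = [name for name, rule in BUSINESS_RULES if rule(ret)]
    # Hypothetical combination: model output plus a fixed bump per rule hit.
    return model_score(ret) + 0.1 * len(rule_hits), rule_hits

ret = {"refund": 9_500, "filed_early": 1, "w2_received": False,
       "bank_account_age_days": 12}
score, hits = score_return(ret)
print(f"fraud score {score:.2f}; rules fired: {hits}")
```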
Selection: RRP Filters Select Suspicious Returns for Further Action or Review

RRP has filters that combine results from the analytic techniques to automatically make a selection decision and then a treatment decision before the return can move to the next processing step and a refund can be issued. Returns not selected by RRP continue through the pipeline process.

Selection decision. Returns with fraud scores above thresholds—and meeting other criteria set by IRS management—will automatically be selected by RRP filters for further action or review. According to IRS, the agency's capacity to review selected returns is part of the automated selection decision, as are other criteria that weigh the cost and risk to IRS. IRS reports that for the 2017 filing season, RRP selected 857,438 returns as potential IDT refund fraud and 219,210 returns as potential other refund fraud. This is less than 1 percent of almost 158 million individual returns filed that year.

Treatment decision. RRP automatically assigns selected returns to the appropriate treatment based on the characteristics of IDT or other refund fraud RRP detected. Examples of treatments include the following:

Identity theft refund fraud. Returns selected by an IDT filter are automatically assigned for treatment in the Taxpayer Protection Program. IRS notifies taxpayers that they must authenticate their identity before IRS will process the return or issue a refund. Taxpayers can verify their identity by calling an IRS telephone center, visiting a Taxpayer Assistance Center, or, in some cases, authenticating online or via mail. If the taxpayer does not respond to the letter or fails to authenticate, the return is confirmed to be IDT refund fraud.

Other refund fraud. If a return is selected by one of RRP's non-identity theft filters, RRP automatically assigns the return, based on the characteristics of fraud identified, to the Integrity and Verification Operations (IVO) function within W&I's Return Integrity and Compliance Services office for further action or review. For example, RRP may select a return as potential refund fraud because it is missing verification of income for a refundable tax credit, such as the Earned Income Tax Credit. IVO tax examiners may then, for example, contact employers to confirm the income and withholding amounts reported on the return.

Frivolous returns. RRP selects returns that contain certain unsupportable arguments to avoid paying taxes or reduce tax liability. If IRS determines these returns to be frivolous, the taxpayer may be subject to penalty. RRP assigns potentially frivolous returns to IVO for review and to notify the taxpayer.

Non-workload returns. RRP's non-workload filters select returns that, according to IRS, score just below the thresholds for RRP's other filters described above. IRS officials told us that RRP loops these returns for additional scoring and detection.

Prevention: RRP Freezes Selected Returns and Improves Detection and Enforcement Efforts Across IRS

RRP supports IRS's efforts to prevent issuing invalid refunds in the following ways:

Freezing refunds. RRP connects directly to IRS's systems for processing individual tax returns and issues transaction codes directly to the Individual Master File depending on the type of refund fraud RRP detected. IRS reports that for the 2017 filing season, RRP prevented IRS from issuing about $4.4 billion in invalid refunds. Of that amount, $3.3 billion was attributed to IDT refund fraud and $1.1 billion to other refund fraud. When RRP selects a return as potential IDT refund fraud, RRP will simultaneously assign the return for treatment and issue a transaction code telling IRS's processing systems to freeze the refund until the case is resolved. As a result, IRS can protect the refund until the review is complete or a legitimate taxpayer has authenticated his or her identity, at which point IRS will release the return. If RRP's non-identity theft filters select the return because of characteristics predictive of other refund fraud, RRP issues a transaction code to freeze the return for 14 days while IVO examiners have the opportunity to screen the return. After 14 days, the return automatically resumes processing and the refund may be released. Accordingly, IRS officials told us that RRP prioritizes IDT treatment, and if a return is selected by both IDT and other refund fraud filters, RRP will automatically assign the return to the Taxpayer Protection Program and freeze the refund. IRS officials told us that when RRP's non-workload filters select a return, RRP will issue a transaction code that delays payment of the refund associated with the return for 1 week. According to IRS officials, this delay provides IRS an opportunity to manually review returns that contain suspicious characteristics.
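The following is a minimal sketch of the selection-to-treatment logic described above, with IDT selections taking priority. The treatment names and hold periods mirror the report's description; the function, data structure, and return values are illustrative stand-ins, not IRS's actual transaction codes or systems.

```python
from dataclasses import dataclass

@dataclass
class Selection:
    idt: bool            # selected by an identity theft (IDT) filter
    other_fraud: bool    # selected by a non-identity theft filter
    non_workload: bool   # scored just below the other filters' thresholds

def assign_treatment(sel: Selection):
    if sel.idt:
        # IDT takes priority: freeze the refund until the case is resolved.
        return ("Taxpayer Protection Program", "freeze until resolved")
    if sel.other_fraud:
        return ("IVO review", "freeze 14 days, then resume processing")
    if sel.non_workload:
        return ("rescore", "delay refund payment 1 week")
    return ("continue processing", "no hold")

print(assign_treatment(Selection(idt=True, other_fraud=True, non_workload=False)))
```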
Incorporating treatment results. IRS integrates the results from each return review into its analytic techniques to improve RRP's detection ability and accuracy on an ongoing basis. For the 2018 filing season, IRS officials told us they were able to add functionality that uses real-time case feedback data to automatically improve the accuracy of some of RRP's IDT fraud filters. IRS officials can also change RRP's selection criteria or filters during the filing season based on emerging fraud or workload concerns.

Detailed data and analysis. With RRP, all available taxpayer information is linked together and available for analysis and queries by IRS employees for post-refund enforcement activities, such as criminal investigations. RRP creates and distributes a report with the results of RRP's clustering analysis to analysts in Criminal Investigation. IRS employees are also able to search RRP and analyze data relevant to their specific enforcement activities. Criminal Investigation officials told us they use RRP reports to identify suspicious returns that were not selected by RRP and flag them for further post-refund review.

IRS Routinely Monitors RRP's Performance and Adapts RRP to Improve Detection and Address Evolving Fraud Threats

As the primary system for detecting IDT and other refund fraud and preventing IRS from paying invalid refunds, RRP is an integral part of IRS's ability to process returns during the filing season. Therefore, monitoring and evaluation activities that rely on quality information to identify, analyze, and respond to changes—such as emerging fraud trends—are critical to ensure that RRP is operating effectively. Federal standards for internal control and the Fraud Risk Framework highlight the importance of monitoring and incorporating feedback on an ongoing basis so the system remains aligned with changing objectives, environments, laws, resources, and risks. Consistent with these practices, IRS follows an industry-standard process to conduct a range of monitoring and evaluation activities for RRP throughout the year (see fig. 4).

IRS Evaluates and Updates RRP Each Year

According to IRS officials, each year beginning in February, IRS evaluates and updates RRP to improve detection and accuracy for the next filing season. A leading practice in the Fraud Risk Framework is for managers to use the results of monitoring, evaluations, and investigations to improve fraud prevention, detection, and response. A more accurate RRP helps IRS use its resources more effectively. For example, if RRP automatically detects fraudulent returns previously identified by manual processes or post-refund enforcement activities, IRS can redirect those enforcement resources to identifying new and emerging fraud schemes. Further, as RRP selects fewer legitimate returns as suspicious, IRS employees are able to devote more of their time to identifying fraudulent returns. IRS officials stated that to improve RRP's accuracy, IRS incorporates information about all refund fraud and noncompliance detected by other enforcement activities into RRP's detection tools. IRS also uses information from its research efforts and external entities, as described below.

Other enforcement activities. These activities include the Fraud Referral and Evaluation program, where, according to IRS, analysts manually review select tax returns that scored just below RRP's selection thresholds.
Another enforcement activity is the Dependent Database, a pre-refund screening system that identifies potential noncompliance related to the dependency and residency of children. IRS staff told us they evaluate refund fraud that RRP missed but the Dependent Database and Fraud Referral and Evaluation program detected, and then update RRP's analytic techniques for the next year. Third, investigators in Criminal Investigation told us that they work with other IRS offices to incorporate new and emerging refund fraud patterns, such as those identified as a result of external data breaches, into RRP's detection tools. To ensure that the updates are operating effectively, IRS staff track the percentage of invalid returns that RRP automatically selected that were previously detected by other IRS processes.

IRS research. IRS officials stated that the agency uses information from a number of research efforts to inform updates or adaptations to RRP. For example, for the 2018 filing season, IRS changed RRP's filters and selection criteria to automatically select returns that IRS held manually in 2017. IRS officials told us they made these changes after researching taxpayer behavior in noncompliant claims of the Earned Income Tax Credit and Additional Child Tax Credit during the 2017 filing season.

Third-party information. IRS collaborates with external entities to strengthen IRS's defenses against paying invalid refunds. IRS officials told us they use information from their collaborative efforts to update RRP's detection tools for the upcoming filing season. These efforts include the External Leads Program, where participating financial institutions provide leads to IRS regarding deposits of suspicious refunds, and the Opt-In Program, a voluntary program where participating financial institutions flag and reject refunds issued by IRS via direct deposit if they find that certain characteristics do not match. IRS reported that in 2017, banks recovered 144,000 refunds with a value of $204 million. IRS has also used information from the Security Summit to improve RRP's detection of IDT refund fraud. The Security Summit is a partnership between IRS, the tax preparation industry, and state departments of revenue to improve information sharing around IDT refund fraud. For the 2017 filing season, IRS incorporated a number of data elements into RRP's detection tools that were identified by the Security Summit. IRS also incorporates legislative changes into RRP for the upcoming filing season. IRS officials told us in March 2018 that they are working to determine all the updates and changes they need to make to RRP's analytic techniques for the 2019 filing season to ensure that RRP will make appropriate selections in accordance with Pub. L. No. 115-97, "An act to provide for reconciliation pursuant to titles II and V of the concurrent resolution on the budget for fiscal year 2018."

Prior to the Filing Season, IRS Tests RRP and Establishes Selection Criteria

Between September and December each year, IRS tests RRP to ensure that the system's updated detection tools meet objectives to increase detection and accuracy for the upcoming filing season. A key factor is setting the score thresholds that determine whether RRP selects a return. For a given set of rules and criteria, as a threshold is lowered, the number of returns that RRP selects as suspicious will increase, including both fraudulent and legitimate returns.
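This trade-off can be illustrated with a minimal simulation: as the threshold drops, selection volume grows, and legitimate returns make up a growing share of selections (the false detection rate discussed below). The score distributions here are simulated assumptions, not IRS data.

```python
import random

random.seed(1)
# Simulated prior-year returns as (fraud_score, is_fraud) pairs; fraudulent
# returns tend to score higher, but the two distributions overlap.
returns = ([(random.gauss(0.7, 0.15), True) for _ in range(200)] +
           [(random.gauss(0.4, 0.15), False) for _ in range(9_800)])

for threshold in (0.9, 0.8, 0.7):
    selected = [(s, f) for s, f in returns if s >= threshold]
    frauds = sum(1 for _, f in selected if f)
    legit = len(selected) - frauds
    fdr = legit / len(selected) if selected else 0.0
    print(f"threshold {threshold}: volume={len(selected):4d}, "
          f"fraud caught={frauds:3d}, false detection rate={fdr:.0%}")
```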
During this testing period, IRS officials determine appropriate threshold settings for RRP given IRS's fraud detection objectives and the IRS resources available to review selected returns. IRS management uses this process to inform its risk tolerance and fraud risk profile. According to the Fraud Risk Framework, effective managers of fraud risks use the program's fraud risk profile to help decide how to allocate resources. IRS officials told us they test RRP's analytic techniques and filters by running a random sample of prior-year returns through numerous iterations using different settings. This testing produces various outcomes. According to documents we reviewed and IRS officials, IRS management evaluates the outcomes using the following measures:

Selection volume: the number of returns that RRP selects as potentially fraudulent and requiring further action or review by IRS analysts and examiners to confirm the return as fraudulent or legitimate. IRS uses this measure to gauge the workload resulting from certain combinations of settings in RRP.

Accuracy: the percent of selected returns confirmed to be legitimate (the false detection rate). IRS uses this measure to evaluate the effect RRP's settings may have on legitimate taxpayers whose refunds may be delayed because their returns were inaccurately selected.

Revenue protected: the value of refunds associated with returns selected by RRP that IRS confirmed to be fraudulent. This measure can provide an estimate of RRP's return on investment based on different combinations of settings in RRP.

During the Filing Season IRS Monitors and Adapts RRP

After IRS updates RRP and establishes selection criteria, RRP is ready to operate during the filing season. To ensure that RRP is performing as expected, IRS managers collect and analyze performance reports, meet weekly during the filing season, and adapt RRP to address emerging fraud or make other adjustments. We reviewed the various reports produced by RRP and IRS staff and determined that the information is reliable, relevant, and timely, as required by federal standards for internal control. IRS officials told us that daily reports highlighting RRP's selections are helpful, especially during the first weeks of the filing season, to ensure that systems are operating effectively. Consistent with federal standards for internal control and the Fraud Risk Framework, we found that RRP is designed to be flexible and adaptive, and IRS can adjust RRP during the filing season to respond to emerging threats or other concerns. IRS officials told us they made several adjustments to RRP during the 2017 filing season:

IRS adjusted the selection thresholds for one of RRP's IDT filters after observing that the number of selections was exceeding projections, resulting in more selections than IRS officials expected and possibly a higher rate of legitimate returns being incorrectly selected. According to IRS officials, adjusting selection thresholds takes approximately 24 hours. To respond to an external data breach, for example, IRS officials told us they might lower RRP's selection thresholds so that RRP selects more returns for review.

IRS reported that it disabled a rule that it determined was incorrectly selecting legitimate tax returns.

IRS officials told us they could address selection errors or respond to new or emerging fraud patterns by modifying RRP's analytic techniques, such as its business rules and models.
According to IRS officials, these types of changes require approval of the business rules governance board and take, on average, 10 business days. According to IRS documents we reviewed, early in the 2017 filing season, IRS discovered that RRP did not issue appropriate transaction codes to the Individual Master File to freeze about 11,000 returns selected as potential IDT refund fraud. As a result, some of these returns posted and refunds may have been issued incorrectly. IRS told us they fixed this error within 3 days of identifying it.

As IRS Continues to Develop the Return Review Program, Additional Opportunities Exist to Improve Enforcement

IRS Plans to Expand RRP Capabilities to Further Prevent Invalid Refunds

IRS plans to continue developing RRP to expand its capabilities to detect refund fraud on business and partnership returns, as well as on individual returns that improperly claim nonrefundable tax credits. According to IRS, continued development of RRP will automate previously manual processes, eliminate duplicative efforts, and achieve greater efficiency.

Business returns and partnership returns. IRS officials told us in January 2018 that they are currently working to develop rules, models, and filters in RRP to detect noncompliance and fraud in business and partnership returns. According to IRS, identity thieves have long used stolen business information to create and file fake W-2s along with fraudulent individual tax returns. However, identity thieves are now using this information to file fraudulent business returns. In May 2018, IRS reported a sharp increase in the number of fraudulent business and partnership returns in recent years.

Nonrefundable tax credits. IRS plans to develop models and rules in RRP to detect refund fraud on individual returns that improperly claim nonrefundable tax credits. A nonrefundable tax credit is limited to the taxpayer's tax liability, which means the credit can be used to offset tax liability, but any excess of the credit over the tax liability is not refunded to the taxpayer. Examples of nonrefundable credits include the Child Tax Credit, Foreign Tax Credit, and Mortgage Interest Credit. According to IRS officials, IRS currently relies on a number of systems, including the Dependent Database, to screen returns for noncompliance associated with tax credits.

IRS's management of other major investments will affect the agency's ability to realize the full potential of RRP's current and planned capabilities because RRP interfaces with numerous legacy systems. For example, RRP obtains taxpayer information from the Individual Master File, which IRS has been working to replace with a modern database, the Customer Account Data Engine 2 (CADE 2). According to IRS, CADE 2 will provide RRP with additional taxpayer history data and more frequent data updates, improving RRP's detection capabilities. However, as we reported in June 2018, IRS delivered only 46 percent of planned scope for CADE 2 during the time period we reviewed and paused a number of CADE 2 projects. As of June 2018, a completion date is uncertain. RRP's effectiveness is also limited by the system's dependence on a legacy case management system. In 2015, IRS approved plans to implement an enterprise-wide case management system to consolidate and replace over 60 legacy systems IRS currently uses. IRS reports a number of limitations with the current systems, including redundancies between systems and limited visibility between programs.
However, IRS encountered challenges with the investment, and in 2017 IRS paused development activities. As of June 2018, IRS is working to acquire another product to serve as the platform for IRS's enterprise-wide case management system. Our prior work has identified actions that Congress could take that would improve IRS's ability to administer the tax system and enforce tax laws. These actions could also improve IRS's ability to further leverage RRP's capabilities. For example, in August 2014 we suggested that Congress provide the Secretary of the Treasury with the regulatory authority to lower the threshold for requiring employers to electronically file W-2s from 250 returns annually to between 5 and 10 returns, as appropriate. Under current law, employers who file 250 or more W-2s annually are required to file W-2s electronically, while those who file fewer may opt to file on paper. Without this change, some employers' paper W-2s are unavailable to RRP for matching before IRS issues refunds due to the additional time the Social Security Administration needs to process paper forms. Lowering the threshold would help IRS use RRP to verify returns before issuing refunds. This proposed change has been included in H.R. 5444. As of June 2018, H.R. 5444 passed the House and was being considered by the Senate Finance Committee. We have also suggested that Congress grant IRS broader math error authority, with appropriate safeguards against misuse of that authority, to correct taxpayer errors during tax return processing. IRS officials told us that this type of corrective authority would allow IRS to develop more efficient treatments for returns selected by RRP with obvious errors. Although the Consolidated Appropriations Act, 2016 gave IRS additional math error authority, it is limited to certain circumstances. Giving IRS broader math error authority or correctable error authority with appropriate controls would enable IRS to correct obvious noncompliance, would be less intrusive and burdensome to taxpayers than audits, and would potentially help taxpayers who underclaim tax benefits to which they are entitled. As of June 2018, Congress had not provided Treasury with such authority.

IRS Has Not Fully Considered Opportunities to Improve Data Available to RRP

IRS has additional opportunities to improve data available to RRP to enhance RRP's detection and accuracy. As described above, RRP's analytic techniques depend on taxpayer data and information from numerous IRS systems and external entities. RRP's access to useful and timely information enables IRS to more fully utilize RRP's analytic techniques to detect suspicious returns, leading to more accurate selection and treatment decisions. Given RRP's importance to IRS's mission, it is critical that IRS consider and address risks that could affect the accuracy and effectiveness of RRP's detection and selection activities. According to the Office of Management and Budget, risks include not only threats but also opportunities that could affect an agency's ability to achieve its mission. IRS and Congress have previously considered opportunities and taken steps to enhance some data made available to RRP. For example: IRS expanded RRP's use of relevant data from electronically filed returns and information returns. For example, as mentioned previously, IRS incorporated a number of data elements identified through the Security Summit into RRP.
In 2016 and 2017, IRS used these data elements to develop additional business rules and models specific to electronically filed returns. IRS also expanded RRP analytic techniques to incorporate data from Forms 1099-MISC, which taxpayers may use to report non-employee compensation. Consistent with our prior reporting, in 2015 Congress enacted legislation to help IRS prevent invalid refunds associated with IDT and other refund fraud. This change allows IRS more time to use RRP to match wage information to tax returns and to identify any inconsistencies before issuing refunds. Since 2017, employers have been required to submit W-2s to the Social Security Administration by January 31, about 1 to 2 months earlier than in prior years. The act also required IRS to hold refunds for all taxpayers claiming the Earned Income Tax Credit or the Additional Child Tax Credit. In 2018 we made recommendations that IRS fully assess the benefits and costs of using existing authority to hold additional taxpayer refunds as well as extending the date for releasing those refunds until it can verify wage information. IRS outlined a number of actions it plans to take to address these recommendations. Taking these actions could prevent IRS from issuing millions of dollars in invalid refunds annually. IRS officials told us that they are taking steps to enhance RRP's ability to detect fraudulent returns filed using prisoners' Social Security numbers. To do this, IRS is working to load updated prisoner data into RRP more frequently and developing additional business rules. The Treasury Inspector General for Tax Administration (TIGTA) has reported that refund fraud associated with prisoner Social Security numbers is a significant problem for tax administration, accounting for IRS's issuance of potentially fraudulent refunds worth tens of millions of dollars in 2015.

Based on our prior work, we found that there may be additional opportunities for IRS to enhance RRP by improving data made available to it:

Making W-2 information available more frequently. In January 2018, we reported that IRS's ability to verify information on tax returns early in the filing season was affected by limitations with its IT systems. IRS receives and maintains information return data, including W-2 and 1099-MISC forms, through the Information Return Master File (IRMF) system. IRMF then makes the data available to RRP for systemic verification, the automated process that uses W-2s to verify that taxpayers accurately reported their income and other information on their tax returns. IRS receives the W-2 data from the Social Security Administration daily—up to 25 million W-2s per day—but only loads the data into IRMF and RRP weekly. According to IRS, to add new information returns to IRMF, IRS staff need to reload all existing information at the same time. As employers and financial institutions send more documents to IRS during the filing season, reloading IRMF can take 3 days or more because updates take more time as IRMF's file increases in size, ultimately containing billions of information returns. IRS officials told us that having W-2s available for analysis sooner would benefit RRP detection and selection of fraudulent returns. Matching W-2 information can also provide sufficient assurance of a valid return, even if characteristics of the return might otherwise raise suspicion.
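The following is a minimal sketch of such a systemic verification check, comparing wages reported on a return with the W-2s on file for the same taxpayer. The identifiers, field names, and tolerance are illustrative assumptions; RRP's actual matching logic is not described in this report.

```python
# W-2s on file, keyed by a hypothetical taxpayer identifier.
W2S_ON_FILE = {
    "123-45-6789": [{"employer_ein": "11-1111111", "wages": 52_000}],
}

def verify_wages(ret, tolerance=1.0):
    w2s = W2S_ON_FILE.get(ret["ssn"])
    if w2s is None:
        return "no W-2 on file"  # cannot verify; the W-2 may still arrive
    on_file = sum(w2["wages"] for w2 in w2s)
    return "match" if abs(ret["wages"] - on_file) <= tolerance else "mismatch"

print(verify_wages({"ssn": "123-45-6789", "wages": 52_000}))  # match
print(verify_wages({"ssn": "123-45-6789", "wages": 98_000}))  # mismatch
```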
According to our analysis of RRP data for the 2017 filing season, matching available W-2s resulted in RRP excluding 367,027 electronically filed returns that RRP otherwise would have selected as suspicious. Having W-2 information loaded more frequently and available for RRP's systemic verification would help IRS improve its use of limited enforcement resources by more accurately identifying fraudulent returns and excluding legitimate returns. As of April 2018, IRS officials had drafted but not yet approved a work request to send IRMF data to RRP daily between January and March during the 2019 filing season. In preparing the draft request, IRS officials told us they are assessing how frequently the agency can efficiently load data into IRMF as the filing season progresses.

Federal standards for internal control require federal managers to analyze and address risks to agency objectives. As noted previously, risks include not only threats but also opportunities. Leading practices in fraud risk management further state that managers should take into account external risks that can affect the effectiveness of fraud prevention efforts. Until IRS makes incoming employer W-2s available to RRP more frequently, IRS will not address an opportunity to expand the use of RRP's systemic verification process to more accurately detect and select invalid refund returns for additional action.

Making more information available electronically from returns filed on paper.

RRP's analytic techniques could be strengthened if the program had electronic access to additional information from filers of paper returns. While about 90 percent of individual taxpayers file their returns electronically, over 19 million taxpayers filed on paper in 2017. To control costs, IRS transcribes a limited amount of information provided by paper filers into its computer databases. This practice limits the amount of information readily available for enforcement and other tax administration activities that rely on digitized information. We also reported that, according to IRS officials, digitizing and posting more comprehensive information provided by paper filers could facilitate enforcement efforts, expedite contacts for faster resolution, reduce handling costs, and increase compliance revenue.

In October 2011 we found that IRS considered a number of options to make more information from paper returns available electronically, including increasing manual transcription, using optical character recognition technology, and adopting barcoding technology. An optical character recognition system would read text directly from all paper returns using optical scanners and recognition software and convert the text to digital data. A 2-D bar code is a black and white grid that encodes tax return data, allowing IRS to scan the bar code to digitize and import the data into IRS's systems, such as RRP. We recommended in 2011 that IRS determine whether and to what extent the benefits of barcoding would outweigh the costs. In response to our recommendations, in 2012 IRS updated an earlier evaluation of implementing barcoding technology for paper returns. The agency estimated that implementing and using barcoding technology over a 10-year period from fiscal years 2015 to 2025 would yield about $109 million in benefits, compared to about $13 million in costs—a substantial return on investment. IRS estimated benefits based on anticipated reductions in staff hours dedicated to the coding, editing, transcription, and error resolution functions of paper return processing.
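To illustrate why a scannable code eliminates transcription, consider the sketch below: once a 2-D bar code is decoded into a text payload, parsing it into structured return data is trivial. The pipe-delimited payload format here is invented for illustration; an actual bar-code specification for tax forms would define its own layout and fields.

```python
# Illustrative only: decoding a 2-D bar code yields a text payload that can
# be parsed directly into structured return data, skipping manual
# transcription. The payload format below is invented for this sketch.

def parse_barcode_payload(payload: str) -> dict:
    """Parse a scanned payload of 'FIELD=VALUE' pairs into a return record."""
    record = {}
    for pair in payload.split("|"):
        field, _, value = pair.partition("=")
        record[field.strip()] = value.strip()
    return record

# What a scanner might hand off after reading the printed code:
scanned = "TAXYEAR=2017|FORM=1040|WAGES=52000|WITHHOLDING=6400|REFUND=1200"
print(parse_barcode_payload(scanned))
# {'TAXYEAR': '2017', 'FORM': '1040', 'WAGES': '52000',
#  'WITHHOLDING': '6400', 'REFUND': '1200'}
```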
However, because of statutory limitations, a legislative change is necessary to require individuals, estates, and trusts to print their federal income tax returns with a scannable bar code. In each of its congressional justifications for fiscal years 2012 to 2016, IRS requested that Congress require returns prepared electronically but filed on paper to include a scannable code printed on the return. The National Taxpayer Advocate made a similar legislative proposal in 2017. As of June 2018, Congress had not taken action on the proposal.

In addition to barcoding, there are other technologies IRS could use to digitize more information from paper returns to further improve tax administration and enforcement activities. However, as of June 2018, IRS had not taken any additional steps to further evaluate the costs and benefits of digitizing individual return information, taking into consideration new technology or additional benefits associated with RRP's enhanced enforcement capabilities. IRS's strategic plan identifies expanding the agency's use of digitized information as a key activity toward its goal to increase the efficiency and effectiveness of IRS operations. Updating and expanding its 2012 analysis of the costs and benefits of digitizing returns to consider any new technology or additional benefit to RRP would provide IRS managers and Congress with valuable information to implement the most cost-effective options for making additional, digitized information available for enforcing and administering taxes. This information could help IRS make progress toward its mission by improving RRP's detection and selection of suspicious returns. In addition, greater efficiency in the paper return transcription process could free additional resources for enforcement and administration activities.

IRS Has Not Fully Considered Opportunities to Use RRP to Improve Other Tax Enforcement Activities

IRS has not yet evaluated the costs and benefits of expanding RRP to improve other tax enforcement activities, such as compliance checks or audits, for returns not claiming refunds. All individual returns (Forms 1040) are loaded into RRP as part of return processing. However, RRP is used to prevent IRS from paying invalid refunds as part of IRS's pre-refund enforcement activities and, therefore, according to IRS officials, RRP has been limited to detecting and selecting individual returns claiming refunds. Currently, IRS does not use RRP to support other enforcement activities that detect misreporting or noncompliance on individual tax returns not claiming refunds, which also contribute to the tax gap—the difference between taxes owed and taxes paid on time. Underreporting of income represents the majority of the tax gap, with the average annual underreporting of individual income tax on both refund and non-refund returns for tax years 2008 to 2010 estimated by IRS to be about $264 billion, or 57 percent of the total gross tax gap of $458 billion.

Given the large amount of revenue lost each year due to underreporting, it is important that IRS consider opportunities to improve its enforcement efforts and promote compliance. IRS's enforcement of tax laws helps fund the U.S. government by collecting revenue from noncompliant taxpayers and, perhaps more importantly, promoting voluntary compliance by giving taxpayers confidence that others are paying their fair share.
According to IRS officials, RRP has benefited IRS's pre-refund enforcement activities by enhancing detection of IDT and other refund fraud, providing more cost-effective treatment, and enhancing data analytics for improved enforcement. Based on this review of RRP's capabilities and our prior work on tax enforcement and administration, we identified a number of activities and processes that could be improved and enhanced if IRS expanded RRP to analyze returns not claiming refunds, in addition to returns with refunds. For example:

Enhanced detection and selection of potential noncompliance.

IRS reported that RRP significantly enhanced its detection of IDT and other refund fraud over prior systems. In January 2018 we recommended—and IRS outlined planned actions—that IRS assess the benefits and costs of additional uses and applications of W-2 data for pre-refund compliance checks, such as underreporting, employment fraud, and other noncompliance. Underreporting occurs when a taxpayer underreports income or claims unwarranted deductions or tax credits. As previously noted, underreporting accounts for the largest portion of the tax gap. To detect underreporting by individuals, after the filing season and after refunds have been issued, IRS uses its Automated Underreporter (AUR) program to electronically match income information reported to IRS by third parties, such as banks and employers, against information that taxpayers report on their tax returns. During our review, we found that this process of matching income information is similar to RRP's pre-refund systemic verification process, which occurs during return processing but applies only to returns claiming refunds. IRS could expand RRP to serve as a platform for performing AUR matching on all individual returns during return processing and post-processing, as more information returns become available for matching. In May 2018, IRS officials told us that, in response to our January 2018 recommendation, IRS is assessing the possibility of using RRP to perform some AUR checks. However, until IRS expands RRP to analyze returns not claiming refunds, these compliance checks will not cover all potential underreporting.

During this review of RRP, we also found that IRS could implement predictive models of noncompliance in RRP to select returns for audits. Audits are an important enforcement tool for IRS to identify noncompliance in reporting tax obligations and to enhance voluntary reporting compliance. IRS's Small Business and Self-Employed (SB/SE) division conducts audits of individual taxpayers after the return has been processed. SB/SE staff review the returns identified for potential audit by various processes. One of these audit selection processes is a computer algorithm—discriminant function (DIF)—that uses models to score all individual returns (with and without refunds) for their likelihood of noncompliance, an indicator of their audit potential. The DIF models are developed from a unique data set and include variables IRS has found to be effective in predicting the likelihood that a return would have a significant tax change if audited. The additional information available in RRP, such as taxpayer history, has the potential to improve the DIF models and therefore the DIF scoring. IRS officials told us that they plan to examine opportunities to use RRP for some SB/SE audit selection processes, such as incorporating DIF scoring into RRP. However, as of April 2018, IRS had not taken any action.
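The sketch below illustrates the general idea behind DIF-style predictive scoring: fit a model on past audit outcomes and use it to score new returns, refund and non-refund alike, for audit potential. The features, training data, and choice of logistic regression are assumptions made for this example; IRS's actual DIF models, variables, and training data are not public.

```python
# Toy illustration of DIF-style scoring: train on past audit outcomes, then
# score new returns for their likelihood of a significant tax change.
# Features, data, and model choice are assumptions for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [income in $10,000s, deductions-to-income ratio, prior adjustments]
X_train = np.array([[4.0, 0.10, 0], [9.5, 0.45, 1], [6.0, 0.15, 0],
                    [12.0, 0.50, 2], [5.5, 0.12, 0], [8.0, 0.40, 1]])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = past audit found a significant change

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score new returns; a higher score indicates higher audit potential.
X_new = np.array([[7.0, 0.42, 1], [5.0, 0.11, 0]])
scores = model.predict_proba(X_new)[:, 1]
for row, score in zip(X_new.tolist(), scores):
    print(row, round(float(score), 2))
```

The point of the passage above is that richer inputs, such as the taxpayer history RRP already holds, could improve the predictive power of models like this one.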
More efficient and effective treatment of potentially noncompliant returns.

IRS reported that RRP automated and streamlined many of IRS's selection and treatment processes for preventing the issuance of invalid refunds. Using RRP to improve IRS's detection and selection of potentially noncompliant returns during return processing could lead IRS to consider treatment options, such as soft notices, that engage taxpayers earlier and help IRS and taxpayers resolve issues more quickly. A soft notice does not always require a response from the taxpayer; instead, it provides information about a potential error and asks taxpayers to review their records. Consequently, soft notices can be more efficient than other treatments, such as telephone calls or in-person interactions. This treatment option is consistent with IRS's strategic objective to reduce the time between filing and resolution of compliance issues. One strategy IRS highlights to achieve this objective is to review and refine IRS's risk-based systems, like RRP, to detect potential issues early.

Currently, IRS's enforcement activities, including SB/SE audits and AUR, occur after the return has been processed and the filing season ends. For example, AUR begins matching information returns to individual tax returns in July, after the filing season has ended, and according to TIGTA, routinely identifies more than 20 million individual tax returns with discrepancies each year. In 2013 we reported that IRS took, on average, over 1 year—2 years in some cases—to notify taxpayers about discrepancies. These delays are a challenge for IRS and the taxpayer. For example, when additional tax is owed, as time passes taxpayers may be less likely, or less able, to pay the original debt owed and any associated penalties that may have accrued since the time of filing. Taxpayers may also be less likely to have the relevant tax records needed to respond to IRS questions. Notifying taxpayers earlier of a potential error could help bring them into compliance more effectively than other enforcement options.

We found that IRS could also use RRP to identify taxpayers who do not pay taxes owed at the time of return processing and to generate soft notices for them (a minimal sketch of this early-notice idea appears at the end of this section). IRS does not contact electronic filers with an unpaid tax balance until mid-May, weeks after the April payment deadline. This treatment option could help IRS collect taxes owed and also help taxpayers by making them aware of payment options earlier and allowing them to avoid interest and penalties. IRS officials agreed that IRS is more likely to recover any debt owed if the taxpayer is notified earlier.

Enhanced data analytics for improved enforcement.

Just as IRS is using RRP data and reporting capabilities to better target resources for enforcement activities associated with refund returns, we found that IRS could increase its access to useful data if it expanded RRP to analyze returns not claiming refunds. For example, using RRP's enhanced data analytics, including access to multiple data sources, IRS could better identify characteristics of other types of noncompliance to improve detection and enforcement. This approach is consistent with IRS's strategic goal to advance data analytics to inform decision making and improve operational outcomes. Officials from IRS's Office of Research, Applied Analytics, and Statistics told us that RRP is a valuable data source for research on IDT and other refund fraud.
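Following the forward reference above, here is a minimal sketch of the early-notice idea: flag, during return processing, returns with a balance due and queue a soft notice rather than waiting until mid-May. The record fields and notice text are illustrative assumptions, not IRS's actual notice content or data model.

```python
# Minimal sketch of the early-notice idea discussed above: at processing
# time, flag returns with an unpaid balance and queue a soft notice.
# Record fields and notice wording are illustrative assumptions.
from datetime import date

def queue_soft_notices(returns, today):
    notices = []
    for ret in returns:
        balance = ret["tax_owed"] - ret["payments"]
        if balance > 0:
            notices.append({
                "tin": ret["tin"],
                "balance_due": round(balance, 2),
                "notice_date": today.isoformat(),
                "message": ("Our records show a balance due. Please review "
                            "your return and available payment options."),
            })
    return notices

returns = [{"tin": "A1", "tax_owed": 3200.0, "payments": 3200.0},
           {"tin": "B2", "tax_owed": 4100.0, "payments": 1500.0}]
print(queue_soft_notices(returns, date(2018, 4, 20)))
# Only B2, with a $2,600 balance, is queued for a notice.
```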
However, until IRS expands RRP to analyze and score individual returns not claiming refunds, IRS will be limited in its ability to use RRP's data analytics to help IRS address other types of noncompliance and fraud.

Evaluating the costs and benefits of expanding RRP to analyze individual returns not claiming refunds to support other tax enforcement activities is consistent with the goals and objectives outlined in IRS's Strategic Plan to encourage compliance through tax administration and enforcement and to increase operational efficiency and effectiveness. IRS has identified and implemented opportunities to expand RRP to better detect IDT and other refund fraud in individual and business returns. However, until IRS evaluates the costs and benefits of expanding RRP to support other enforcement activities, IRS may be missing opportunities to realize operational efficiencies by streamlining the detection and treatment of other types of noncompliance and fraud. Additionally, IRS may be missing an opportunity to promote voluntary compliance with tax laws and make progress toward closing the estimated $458 billion average annual gross tax gap.

Conclusions

Noncompliance, including tax fraud, has been a long-standing challenge for IRS. More recently, IDT refund fraud has emerged as a costly and evolving threat to taxpayers and the tax system. As part of IRS's effort to strategically address these challenges, RRP provides opportunities for IRS to operate more efficiently, increase taxpayer compliance, and combat refund fraud. IRS has plans to continue developing and enhancing RRP, including analyzing business returns for fraud. However, IRS has not fully examined opportunities to improve the availability of information that RRP's analytic tools rely on. These opportunities include examining the costs and benefits of making more information from paper returns available electronically and making W-2 information available to RRP for income verification more frequently. Until IRS conducts such analyses, the agency will be missing opportunities to improve RRP's detection and accuracy and prevent paying invalid refunds.

These evaluations can also inform Congress's decisions on requiring scannable codes on some printed tax returns, as well as issues we highlighted in our previous work, including lowering the e-file threshold for employers filing W-2s and expanding IRS's correctible error authority. Congressional action on these issues would help IRS better leverage RRP's capabilities. Further, RRP has the potential to improve tax enforcement in other areas, such as underreporting and audit selection, if IRS can successfully expand RRP's detection and selection capabilities to analyze individual tax returns, including those not claiming refunds, for fraud and noncompliance. Earlier detection of anomalies and earlier notification of taxpayers can increase compliance and collection rates.

Matter for Congressional Consideration

Congress should consider legislation to require that returns prepared electronically but filed on paper include a scannable code printed on the return. (Matter for Consideration 1)

Recommendations for Executive Action

We are making the following five recommendations to IRS.

The Commissioner of Internal Revenue should increase the frequency at which incoming W-2 information is made available to RRP.
(Recommendation 1)

The Commissioner of Internal Revenue should update and expand a 2012 analysis of the costs and benefits of digitizing returns filed on paper to consider any new technology or additional benefits associated with RRP's enhanced enforcement capabilities. (Recommendation 2)

Based on the assessment in recommendation 2, the Commissioner of Internal Revenue should implement the most cost-effective method to digitize information provided by taxpayers who file returns on paper. (Recommendation 3)

The Commissioner of Internal Revenue should evaluate the costs and benefits of expanding RRP to analyze individual returns not claiming refunds to support other enforcement activities. (Recommendation 4)

Based on the assessment in recommendation 4, the Commissioner of Internal Revenue should expand RRP to support identified activities. (Recommendation 5)

Agency Comments and Our Evaluation

We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. In its written comments, which are summarized below and reprinted in appendix II, IRS agreed with our five recommendations, stating that it is taking action to address them and will provide a more detailed corrective action plan.

IRS agreed with our recommendations aimed at improving information available to RRP to enhance detection of fraudulent returns. IRS stated that it is evaluating the frequency at which W-2 data is made available to RRP and options for digitizing returns filed on paper. IRS further noted that it is evaluating other associated information provided to RRP for detection. As stated earlier, efforts to improve RRP's detection and accuracy will protect additional federal revenue.

IRS agreed with our recommendations to evaluate options for expanding RRP to improve tax enforcement and compliance. IRS stated that its objective is to make RRP the primary detection system for pre- and post-refund processing across the agency. IRS stated that to expand RRP to analyze returns not claiming refunds, a legislative change requiring all information returns to be filed electronically will be necessary to achieve maximum benefit from RRP. In this report, we highlight legislative issues from our prior work, including lowering the e-file threshold for employers filing W-2s and expanding IRS's correctible error authority, to help IRS better leverage RRP's capabilities. However, we are confident that even under current conditions, IRS could use RRP to further improve compliance and its enforcement efforts. For example, with the current electronic filing requirements, RRP could help IRS detect and resolve individual underreporting earlier in the process. IRS stated its intention to collaborate with GAO and other organizations to determine appropriate actions after assessing the results of its analyses.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Return Review Program Investment Summary

The Return Review Program (RRP) is one of the Internal Revenue Service's (IRS) major information technology investments. IRS began developing RRP in 2009 to improve its ability to detect fraudulent returns. In October 2016, RRP replaced IRS's legacy system, the Electronic Fraud Detection System (EFDS), as IRS's primary fraud detection system. IRS originally planned for RRP to be operating by 2014 because IRS had determined that by 2015 EFDS would not be reliable. However, in 2014, IRS paused RRP's development to reconsider RRP's capabilities within IRS's strategic fraud detection goals. The year-long pause delayed EFDS replacement and retirement until 2016. RRP operated as IRS's primary system for detecting identity theft and other refund fraud beginning with the 2017 filing season. Figure 5 is a timeline of IRS's development of RRP.

Appendix II: Comments from the Internal Revenue Service

Appendix III: GAO Contact and Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Neil Pinney (Assistant Director), Margaret M. Adams (Analyst-in-Charge), Michael Bechetti, Mark Canter, Pamela Davidson, Robert Gebhart, James A. Howard, Jesse T. Jordan, Paul Middleton, Sabine Paul, J. Daniel Paulk, and Bradley Roach made significant contributions to this report.
Why GAO Did This Study

Tax noncompliance, including refund fraud, threatens the integrity of the tax system and costs the federal government hundreds of billions of dollars annually. RRP is IRS's primary pre-refund system for detecting and preventing the issuance of invalid refunds. IRS reported that between January 2015 and November 2017 RRP prevented the issuance of more than $6.51 billion in invalid refunds.

GAO was asked to examine RRP's capabilities. This report (1) describes how RRP detects and selects suspicious returns and prevents invalid refunds; (2) assesses how IRS monitors and adapts RRP; and (3) examines what else, if anything, IRS can do to strengthen RRP and use it to address other enforcement issues. GAO reviewed IRS plans for RRP and documents on its performance. GAO compared IRS's efforts to federal internal control standards, GAO's Fraud Risk Framework, and IRS's strategic plan. GAO interviewed IRS officials who work on and use RRP.

What GAO Found

The Internal Revenue Service's (IRS) Return Review Program (RRP) detects and selects potentially fraudulent returns to prevent the issuance of invalid refunds. According to IRS, RRP uses advanced analytic techniques and various data sources, including prior-year tax returns, to assign multiple scores to individual returns based on characteristics of identity theft and other refund fraud. GAO found that IRS routinely monitors RRP's performance and adapts RRP to improve detection and address evolving fraud threats. Each year IRS updates RRP's detection tools to improve accuracy for the next filing season. IRS has plans to continue developing RRP to further prevent invalid refunds, including using RRP to analyze and detect fraudulent business returns. However, GAO identified other opportunities for IRS to improve RRP's fraud detection and to use RRP for other enforcement activities:

RRP's ability to accurately detect and select suspicious returns could benefit from having information on Forms W-2, Wage and Tax Statements (W-2), available for analysis more frequently. As of April 2018, IRS officials said they were drafting but had not yet approved a work request to load W-2s into RRP daily instead of weekly for the 2019 filing season.

IRS could collect more information electronically from paper filers. One approach IRS evaluated in 2012 is to digitize some paper returns using barcoding technology, but it has not updated that analysis or expanded it to consider other digitizing technologies. IRS requested that Congress require that returns prepared electronically but filed on paper include a scannable code printed on the return, but Congress had not done so as of May 2018.

IRS could apply RRP's capabilities to improve other tax enforcement activities, such as audit selection or underreporting detection. Individuals' underreporting of tax liabilities accounts for hundreds of billions in lost tax revenue. Until IRS evaluates the costs and benefits of expanding RRP to analyze returns not claiming refunds, IRS will not have the information needed to make decisions that could help streamline processes for detecting and treating additional types of noncompliance and fraud.

What GAO Recommends

GAO suggests Congress consider legislation to require that returns prepared electronically but filed on paper include a scannable code.
GAO is also making five recommendations to IRS, including that IRS take action to make incoming W-2s available to RRP more frequently, update and expand a 2012 analysis of the costs and benefits of digitizing returns filed on paper, evaluate the costs and benefits of expanding RRP to analyze returns not claiming refunds, and take any appropriate action based on those evaluations. IRS agreed with GAO's recommendations.
Background

Many of our reports and testimonies include recommendations that, if acted upon, may result in tangible benefits for the U.S. taxpayer by improving the federal government's efficiency, effectiveness, and accountability. Implemented recommendations can result in financial or nonfinancial benefits for the federal government. An estimated financial benefit is based on agency actions taken in response to our recommendations; such benefits can result in reduced government expenditures, increased revenues, or a reallocation of funds to other areas. For example, in fiscal year 2016, our work across the federal government resulted in $63.4 billion in financial benefits. Other benefits that result from our work cannot be measured in dollar terms, and we refer to them as nonfinancial or other benefits. During fiscal year 2016, we recorded a total of 1,234 other benefits from our work that cannot be measured in dollars, but that led to program and operational improvements to the federal government. These benefits are linked to specific recommendations or other work that we completed over several years and could include improvements to agency programs, processes, and policies.

In some cases, benefits are realized based on the actions of Congress. For example, since 1994, we have found that EPA faces challenges in its ability to assess and control toxic chemicals under the Toxic Substances Control Act of 1976—largely due to issues of statutory choice, regulatory control, data, confidentiality, workload, and resources. In response to our work and the work of others, Congress passed the Lautenberg Act in 2016, giving EPA greater authority to implement several of our outstanding recommendations related to these six areas and positioning the agency to better protect public health and the environment from the risks posed by toxic chemicals.

As part of our responsibilities under generally accepted government auditing standards, we periodically follow up on recommendations we have made to agencies and report their status to Congress. Agencies also have a responsibility to monitor and maintain accurate records on their progress made toward addressing our recommendations. After issuing a report, we follow up with audited agencies at least once a year to determine the extent to which they have implemented our recommendations and the benefits that they have realized. During these follow-up contacts, we identify for agencies what additional actions, if any, they would need to take to address our recommendations. A recommendation is considered implemented when agencies have taken actions that, consistent with our recommendation, address the issue or deficiency we identified and upon which the recommendation is based.

Experience has shown that it takes time for agencies to implement some recommendations. For this reason, we actively track unaddressed (i.e., open) recommendations for 4 years and review them to determine whether implementation can be reasonably expected. The review includes consideration of alternative strategies an agency may have for implementing recommendations. Our experience has shown that recommendations remaining open after 4 years are generally not implemented in subsequent years. We will close a recommendation as not implemented if an agency has indicated that it was not planning to take action or if we have determined that it is unlikely that the agency will take action to address the recommendation. Figure 1 shows our process for monitoring and reporting on recommendations.
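As a rough illustration of the tracking logic just described, the sketch below models recommendations as records, with open recommendations flagged for closure review once they age past the 4-year active-tracking window. The data structures are assumptions made for this example, not GAO's actual tracking system.

```python
# Simplified sketch of the recommendation-tracking logic described above:
# follow up at least annually, and flag recommendations still open after
# 4 years for closure review. Data structures are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Recommendation:
    report_id: str
    issued: date
    status: str = "open"  # "open", "implemented", or "closed-not-implemented"

def due_for_closure_review(recs, today, years=4):
    """Return open recommendations older than the active-tracking window."""
    return [r for r in recs
            if r.status == "open" and (today - r.issued).days > years * 365]

recs = [Recommendation("GAO-10-297", date(2010, 3, 1)),
        Recommendation("GAO-17-113", date(2017, 1, 15))]
print([r.report_id for r in due_for_closure_review(recs, date(2017, 8, 23))])
# ['GAO-10-297']
```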
We maintain a publicly available database with information on the current status of most open recommendations. The database allows searches by agency, congressional committee, or key words and is available at http://www.gao.gov/openrecs.html. In addition to our process for monitoring and reporting on recommendations, we use other mechanisms to encourage agencies to implement our recommendations in a timely manner. For example, we initiated an effort in fiscal year 2015 to call attention to unimplemented recommendations that we believe warrant priority attention by the Secretary or agency heads at key departments and agencies. We sent letters to the heads of key executive branch agencies, including EPA, in fiscal years 2015, 2016, and 2017 identifying these high-priority recommendations and urging the agency head to continue to provide attention to these issues.

EPA Has Implemented 191 of 318 GAO Recommendations, Which Relate to a Variety of EPA Operations and Programs

As of August 23, 2017, EPA had implemented 191 of the 318 recommendations we made since fiscal year 2007, and the recommendations fall into six broad categories that relate to EPA operations and programs. EPA had not yet fully implemented the remaining 127 recommendations. Figure 2 shows the status of the 318 recommendations. For recommendations that we made over 4 years ago (i.e., fiscal years 2007 to 2012), EPA had implemented 77 percent. For recommendations made since fiscal year 2013, EPA had implemented 34 percent.

The 318 recommendations we made to EPA since fiscal year 2007 fall into six broad categories that relate to EPA operations and programs and generally align with many of the goals and strategies identified in EPA's Fiscal Year 2014-2018 Strategic Plan. These six broad categories are: (1) management and operations; (2) water issues, which includes water infrastructure, drinking water, water quality, and ecosystem restoration; (3) environmental contamination and cleanup, which includes environmental cleanup, pollution prevention, hazardous and other waste programs, and emergency management; (4) toxics, chemical safety, and pesticides; (5) public health and environmental justice; and (6) air quality, climate change, and energy efficiency. The percentage of recommendations implemented within each category ranged from 80 percent for the environmental contamination and cleanup category to 48 percent in the management and operations category. Figure 3 shows the number of recommendations we identified in each of these categories and the percentage of recommendations within each category that had been implemented and not implemented.

Almost three-fourths of the recommendations we made since fiscal year 2007 fall into three categories: management and operations, water issues, and environmental contamination and cleanup. The recommendations to EPA relating to management and operations included actions for better managing its grants, better coordinating management of its laboratories, and improving the agency's information security. Recommendations on water issues included actions targeted at improving the regulation of contaminants in drinking water, improving water quality and ecosystem health in regions such as the Great Lakes and Chesapeake Bay, and better managing water pollution from both point and nonpoint sources.
Recommendations related to environmental contamination and cleanup included: taking actions for better managing cleanup at hazardous waste sites; enhancing responses to disasters, such as the collapse of the World Trade Center on September 11, 2001, and Hurricane Katrina in August 2005; and promoting proper disposal and recycling of electronic waste. The remaining quarter of the recommendations fell into the other three categories of toxics, chemical safety, and pesticides; air quality, climate change, and energy efficiency; and public health and environmental justice. Appendix I lists, by category, our reports with recommendations to EPA since fiscal year 2007, and for each report lists the numbers of implemented, not implemented, and total recommendations, as of August 23, 2017.

Of the 127 recommendations that EPA has not implemented, we made 82, or 65 percent, since fiscal year 2013 and 45, or 35 percent, earlier (i.e., fiscal years 2007 to 2012). Most of these recommendations concern EPA management and operations and water issues. Some examples of recommendations that have not yet been implemented in these categories are described below.

Management and Operations

In January 2017, we made recommendations to EPA related to its management of grants. In 2015, EPA awarded roughly $3.9 billion, about 49 percent of its budget, in grants to states, local governments, tribes, and other recipients. These grants supported activities such as repairing aging water infrastructure, cleaning up hazardous waste sites, improving air quality, and preventing pollution. In our January 2017 report, we concluded that EPA's ability to manage this portfolio depended primarily on grant specialists and project officers, but the agency did not have the information it needed to allocate grants management resources in an effective and efficient manner. In addition, EPA had not identified project officer critical skills and competencies or monitored its recruitment and retention efforts for grant specialists. We recommended that EPA, among other things, develop documented processes that could be consistently applied by EPA offices to collect and analyze data about grants management workloads and use these data to inform staff allocation. We also recommended that EPA review project officer critical skills and competencies and determine training needs to address gaps, and develop recruitment and retention performance measures and collect performance data for these measures. According to a May 2017 letter, EPA agreed with the five recommendations we made in the report and identified steps it was initiating to address them. We will continue to monitor EPA's actions to implement these recommendations.

In August 2014, we made recommendations to EPA related to information security. Federal agencies rely on contractors to operate computer systems and process information on their behalf. Federal law and policy require that agencies ensure that contractors adequately protect these systems and information. In our August 2014 report, we evaluated how six agencies, including EPA, oversaw contractor-operated systems. With regard to EPA, we found that the agency generally established security and privacy requirements for contractors to follow and prepared for assessments to determine the effectiveness of contractors' implementation of controls but was inconsistent in overseeing the execution and review of those assessments.
We recommended that EPA develop, document, and implement oversight procedures for ensuring that, for each contractor-operated system, (1) a system test is fully executed and (2) plans of action and milestones with estimated completion dates and resources assigned for resolution are maintained. In comments on the report, EPA generally agreed with our recommendations and has recently told us that it has taken steps to implement these recommendations. We will evaluate whether these steps meet the intent of the recommendations.

In March 2010, we made recommendations to EPA related to workforce planning. The ability of federal agencies to achieve their mission and carry out their responsibilities depends in large part on whether they can sustain a workforce that possesses the necessary education, knowledge, skills, and other competencies. We and others have shown that successful organizations use strategic workforce planning to help meet present and future mission requirements. In our March 2010 report on workforce planning at EPA and other agencies, we found that EPA's workforce plan was not clearly aligned with its strategic plan or budget formulation, as called for by leading workforce planning principles. For example, EPA's workforce plan did not show how full-time equivalent employees, skills, and locations would be aligned with the strategic plan or budget. Without alignment to the strategic plan, we concluded that EPA was at risk of not having the appropriately skilled workforce it needs to effectively achieve its mission. We recommended, among other things, that EPA incorporate into its workforce plan clear and explicit links between the workforce plan and the strategic plan, and describe how the workforce plan will help the agency achieve its strategic goals. In comments on our report, EPA generally agreed with our recommendation. According to EPA, the agency has taken some positive steps toward better workforce planning, such as developing workforce planning gap analyses. However, EPA has not fully implemented this recommendation.

Water Issues

In May 2012, we made recommendations to EPA related to a key program under section 319 of the Clean Water Act to address water pollution from nonpoint sources. Under this program, EPA provides grants to states to implement programs and fund projects that address nonpoint source pollution. We found that EPA's regional offices had varied widely in the extent of their oversight and the amount of influence they had exerted over states' nonpoint source pollution management programs. In addition, EPA's primary measures of effectiveness of states' management programs did not always demonstrate the achievement of program goals, which are to eliminate remaining water quality problems and prevent new threats from creating future water quality problems in water bodies currently of high quality. To help protect water quality, we recommended that EPA (1) provide guidance to its regional offices on overseeing state programs and (2) in its revised reporting guidelines to states, emphasize measures that more accurately reflect the overall health of targeted water bodies and demonstrate states' focus on protecting high-quality water bodies, where appropriate. EPA agreed with these recommendations in its comments on the report. In 2013, EPA issued final guidelines laying out expectations for EPA's regional oversight and issued a memorandum to its regional managers highlighting their oversight responsibilities.
However, in a subsequent report issued in July 2016, we found that EPA's 2013 guidance did not completely address our recommendation to provide sufficient guidance to states to fulfill their oversight responsibilities. We also found that, according to EPA officials, the agency planned to make changes to some of the program's measures of effectiveness. Although EPA has taken some action, these recommendations remain open pending EPA's (1) ensuring that the guidelines to states incorporate specific instructions on how to review states' plans and criteria for ensuring funded projects reflect characteristics of effective implementation and tangible results, and (2) improving its measures of program effectiveness.

EPA's Implementation of GAO Recommendations and Related Work Has Resulted in Process and Programmatic Improvements and Financial Benefits

We have identified many benefits—process and programmatic improvements and financial benefits—based on EPA taking actions on our recommendations and related work. Since fiscal year 2007, we have identified improvements to EPA's operations and programs in categories such as management and operations, water issues, and public health and environmental justice. In addition, we have identified financial benefits resulting from the implementation of our recommendations and our related work.

Process Improvements

The following are examples of process improvements we have identified based on actions EPA has taken in response to our recommendations.

Management and Operations

In August 2015, we reviewed EPA's grant management program, including the extent to which its grants management plan followed leading practices for federal strategic planning. We found that EPA could better ensure the effectiveness of its planning framework for meeting grants management goals. We recommended that EPA incorporate all leading practices in federal strategic planning relevant to grants management as it finalized its draft 2016-2020 grants management plan, such as defining strategies that address management challenges and identifying the resources, actions, and time frames needed to meet EPA's goals. In response to our recommendation, EPA fully incorporated each of the relevant leading practices for federal strategic planning in its final 2016-2020 grants management plan, issued in February 2016. Specifically, EPA included an annual priority-setting process to identify strategies to address management challenges and the resources needed to achieve its goals. EPA also incorporated mechanisms to ensure leadership accountability for achieving results, including numeric targets and time frames for each action identified in performance measures. Consequently, EPA has better assurance that its 2016-2020 grants management plan is an effective framework to guide and assess its efforts to meet its grants management goals.

In August 2011, we found that EPA operated 37 laboratories across the nation to provide the scientific research, technical support, and analytical services to support its mission. In that report, we also found that EPA did not use a comprehensive process for managing its laboratories' workforce and lacked basic information on its laboratory workload and workforce. Without such information, we found that EPA could not undertake succession planning and management to help the organization adapt to meet emerging and future needs.
We recommended that EPA, for all of its laboratories, develop a comprehensive workforce planning process that is based on reliable workforce data and reflects the agency's current and future needs in the overall number of federal and contract employees, skills, and deployment across all laboratory facilities. EPA generally agreed with our recommendation and, in 2015, developed a comprehensive workforce planning process for all of its laboratories and, according to the agency, collected, verified, and analyzed workforce data from all of its laboratories that included personnel's organization, location, grade levels, and areas of expertise.

Water Issues

In October 2012, we found that funding for rural water and wastewater infrastructure was fragmented across the three largest federal programs—EPA's Drinking Water and Clean Water State Revolving Fund programs and the U.S. Department of Agriculture's (USDA) Rural Utilities Service Water and Waste Disposal program—leading to program overlap and possible duplication of effort when communities applied for these programs. For example, we found that some communities had to prepare separate environmental analyses for each program, resulting in delays and increased costs to communities applying to the programs. We recommended that EPA and USDA work together and with state and community officials to develop guidelines to assist states in developing uniform environmental analyses that could be used, to the extent appropriate, to meet state and federal requirements for water and wastewater infrastructure projects. In February 2017, EPA and USDA issued a joint memorandum to address concerns identified in our report and highlighted best practices currently employed in some states to eliminate duplicative environmental reviews. In particular, the memorandum highlighted a uniform environmental review document developed by the state of Pennsylvania. To eliminate potential duplication of effort during the environmental review process, the memorandum encouraged state programs to evaluate the best practices and incorporate the practices into their own operations where applicable.

Programmatic Improvements

The following are examples of programmatic improvements we have identified based on actions EPA has taken in response to our recommendations.

Water Issues

Under the Clean Water Act, EPA currently regulates 58 industrial categories of wastewater pollution—such as petroleum refining, fertilizer manufacturing, and coal mining—with technology-based regulations called "effluent guidelines." Such guidelines are applied in permits to limit the pollutants that facilities may discharge. The Clean Water Act also calls for EPA to revise the guidelines when appropriate. EPA has done so, for example, to reflect advances in treatment technology or changes in industries. EPA uses a two-phase process to identify industrial categories needing new or revised effluent guidelines, including an initial "screening" phase in which EPA ranks industrial categories according to the total toxicity of their wastewater. In September 2012, we concluded that limitations in EPA's screening phase may have led the agency to overlook some industrial categories that warrant further review for new or revised effluent guidelines. For example, during the screening phase, EPA had not considered the availability of advanced treatment technologies for most industrial categories.
We recommended that EPA modify the screening phase of its review process to include a thorough consideration of information on the treatment technologies available to industrial categories as it considered revisions to its screening and review process. In comments on the report, EPA agreed that factoring treatment technology information into its reviews would be valuable. In September 2014, EPA published a combined Final 2012 and Preliminary 2014 Effluent Guidelines Program report that discussed revisions to its screening process in response to our report. Specifically, EPA stated that it recognized the need to consider the availability of treatment technologies, process changes, or pollution-prevention practices in the screening phase of its process and said that it was targeting new data sources to provide such information. In July 2015, EPA published its "Final 2014 Effluent Guidelines Program" with a diagram showing the change to EPA's screening process to include screening of treatment technologies.

Public Health and Environmental Justice

EPA established a 1995 Policy on Evaluating Health Risks to Children to ensure that the agency consistently considers children in its actions, since children can be more vulnerable than adults to certain environmental hazards. In August 2013, we found that EPA did not have a specific process for program offices that led regulatory workgroups to document how the agency considers children's health risks in rulemakings and other actions or how the agency's analyses comply with the 1995 policy. We recommended that EPA require lead program offices to document their decisions in rulemakings and other actions regarding how health risks to children were considered and that their decisions be consistent with EPA's children's health policy. In comments on our report, EPA generally agreed with the recommendation and stated that the Office of Children's Health Protection worked with the Office of Policy and the program offices to assure a consistent approach for documenting these decisions as part of EPA's process to develop rules, regulations, and other agency actions. Subsequently, in October 2014, EPA finalized a template for all EPA employees to use that outlined how to address EPA's 1995 policy and other requirements under various situations. The template instructs lead program offices to document their decisions in rulemaking and other actions regarding how they considered health risks to children (e.g., conducting a children's health risk assessment), or to provide a rationale for why such an evaluation was not necessary.

Financial Benefits

The following are examples of financial benefits we have identified based on actions EPA has taken in response to our prior reviews.

Environmental Contamination and Cleanup

During the course of work related to a July 2008 report on the funding and reported costs of Superfund enforcement and administrative activities, we reviewed EPA's methodology for calculating the indirect costs—or administrative costs for managing the Superfund program—that EPA charged responsible parties in fiscal year 2006. In conducting this work, we identified two spending codes for which associated administrative costs had not been carried over into EPA's calculations of the indirect cost rate applicable to each region for fiscal year 2006. As a result of this error, we determined that the percentage that EPA was charging responsible parties for indirect costs associated with fiscal year 2006 spending was lower than it should have been.
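A simplified illustration of the arithmetic involved: recoverable indirect costs are computed by applying a regional indirect cost rate to direct costs, so an understated rate directly reduces recoveries. All numbers below are invented for the example and are not EPA's actual rates or costs.

```python
# Simplified illustration (with invented numbers) of why an understated
# indirect cost rate reduces Superfund cost recovery: recoverable indirect
# costs are a regional rate applied to direct site costs.
direct_site_costs = 1_000_000.00   # hypothetical direct cleanup costs
erroneous_rate    = 0.35           # hypothetical rate missing two spending codes
corrected_rate    = 0.42           # hypothetical rate after the error is fixed

under_recovery = direct_site_costs * (corrected_rate - erroneous_rate)
print(f"additional recoverable indirect costs: ${under_recovery:,.2f}")
# additional recoverable indirect costs: $70,000.00
```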
In response to our finding, EPA published revised indirect cost rates for fiscal years 2005 and 2006 in May 2008 to correct the error. EPA acknowledged that correcting this error would result in more money being potentially recoverable from responsible parties. In 2010, we estimated that the additional amount EPA has recovered (or would recover) had a present value of about $42.2 million.

Management and Operations

Since fiscal year 2000, we have issued a body of work aimed at raising the level of attention given to improper payments across government. Our work demonstrated that improper payments have been a long-standing, widespread, and significant problem in the federal government and, as a result, contributed to Congress passing the Improper Payments Information Act of 2002 (IPIA). This act, as amended, requires, among other things, that all agencies annually identify and review programs and activities that may be susceptible to significant improper payments, provisions that coincide with recommendations we have made that agencies estimate, reduce, and publicly report improper payments. Subsequently, in 2005, EPA began reporting on the improper payment rate for the Clean Water and Drinking Water State Revolving Funds. By 2009, the most recent year for which we identified financial benefits from the agency addressing improper payments, EPA reported that its total improper payment error rates for the State Revolving Funds had declined by 0.16 percent since it first reported on this issue. This resulted in about a $4.5 million decrease in improper payments from the Clean Water and Drinking Water State Revolving Funds for fiscal years 2008 and 2009.

In conclusion, as the fiscal pressures facing the government continue, so too does the need for executive branch agencies to improve the efficiency and effectiveness of government programs and activities. Our recommendations provide a significant opportunity to improve the government's fiscal position, better serve the public, and make government programs more efficient and effective. We believe that EPA's implementation of our outstanding recommendations will enable the agency to continue to improve its performance and the efficiency and effectiveness of its operations. We will continue to work with Congress to monitor and draw attention to these important issues.

Chairman Murphy, Ranking Member DeGette, and Members of the Committee, this completes my prepared statement. I would be pleased to answer questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staff members have any future questions about this testimony, please contact Alfredo Gómez at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Barb Patterson, Assistant Director; Cindy Gilbert; Anne Hobson; Richard Johnson; Dan C. Royer; and Kiki Theodoropoulos.

Appendix I: GAO Reports since Fiscal Year 2007 with Recommendations to EPA, by Category

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

EPA's mission is to protect human health and the environment. To accomplish this mission, EPA develops and enforces environmental regulations; awards grants; and studies environmental issues, among other things. GAO has conducted reviews focused on various aspects of EPA's operations and programs. Through this work, GAO has made numerous recommendations to improve EPA's performance and the efficiency and effectiveness of its operations. GAO follows up with executive branch agencies to determine the extent to which they have implemented its recommendations. In fiscal year 2015, GAO began sending letters annually to the heads of key executive branch agencies, including EPA, identifying unimplemented recommendations that warrant priority attention.

This statement discusses (1) the status of EPA's implementation of GAO recommendations made since fiscal year 2007 and how these recommendations relate to EPA's operations and programs and (2) examples of benefits realized by EPA and others based on GAO's work, including through the agency's implementation of these recommendations. This statement is based on GAO's work since fiscal year 2007 and on an analysis of recommendations GAO has made to EPA during this period.

What GAO Found

As of August 23, 2017, the U.S. Environmental Protection Agency (EPA) had implemented 191 of the 318 recommendations GAO made since fiscal year 2007. EPA had not yet implemented the remaining 127 recommendations. The figure below shows the status of the 318 recommendations.

The recommendations fall into six broad categories that relate to EPA programs and operations: (1) management and operations; (2) water issues; (3) environmental contamination and cleanup; (4) toxics, chemical safety, and pesticides; (5) public health and environmental justice; and (6) air quality, climate change, and energy efficiency. Almost three-fourths of the recommendations fall into the first three categories and include actions for EPA to better manage grants, improve the regulation of drinking water contaminants, and better manage hazardous waste cleanup.

Most of the recommendations that have not yet been implemented concern EPA management and operations and water issues. For example, regarding management and operations, EPA has not yet implemented GAO's recommendation to link its workforce plan with its strategic plan to help ensure EPA has an appropriately skilled workforce to achieve its mission. Similarly, for water issues, EPA has not fully implemented GAO's recommendation to provide guidance to regional offices on overseeing state water quality programs.

GAO has identified many benefits—that is, process and programmatic improvements and financial benefits—based on EPA taking actions on GAO's recommendations and related work. For example, in October 2012, GAO recommended that EPA and the U.S. Department of Agriculture (USDA) develop guidelines to assist states in developing uniform environmental analyses to meet state and federal requirements for water and wastewater infrastructure projects. EPA and USDA issued a joint memorandum in February 2017 that, among other things, highlighted best practices to eliminate duplicative environmental reviews. In addition, GAO has identified financial benefits from the implementation of its recommendations and related work. For example, during the course of work related to a July 2008 report, GAO identified an error in EPA's calculation of recoverable indirect costs for hazardous waste cleanup.
EPA acknowledged the error and published revised indirect cost rates. As a result, GAO estimated in 2010 that EPA had recovered or would recover $42.2 million.
Background

Senior Army leadership has acknowledged that the service must change how it develops requirements and acquires weapon systems in order to be successful in future wars. However, the Army's history of failed, costly weapon system procurements to replace aging weaponry is due, in part, to requirements that could not be met and the immaturity of key technologies. Many of these programs failed to provide any capability to the warfighter despite the time and funding expended. Some examples of these cancelled programs are listed in table 1 below.

Army Modernization Efforts Since 2017

In the fall of 2017, the Army began a new modernization effort to rapidly develop and field new capabilities. As a part of this effort, the Army's then-Acting Secretary and the Chief of Staff in an October 3, 2017 memorandum identified six priorities to guide Army modernization: long-range precision fires, next generation combat vehicle, future vertical lift, the network, air and missile defense, and soldier lethality. Given that modernization is an ongoing process, and with Army expectations that some capabilities will be delivered sooner than others, we have divided Army modernization into two timeframes for the purposes of this report:

Near-term modernization: from fiscal years 2019 to 2023, including buying existing systems and technologies to fill the Army's urgent needs.

Long-term modernization: fiscal year 2024 and beyond, including the development of new systems and technologies to meet anticipated needs and maintain superiority over major adversaries.

In September 2018, we addressed the Army's efforts for near-term modernization. We found that the Army had set decisively defeating near-peer adversaries as an overarching objective, but had not established processes for evaluating its modernization efforts against this objective. We also found that the Army had not yet completed a cost analysis of its near-term modernization efforts. To address these issues, we recommended that the Army develop a plan to finalize processes for evaluating the contributions of its near-term investments to the ability to decisively defeat a near-peer adversary, and finalize and report to Congress its cost analysis of near-term investments. DOD concurred with both of these recommendations.

As we have previously reported, the Army's long-term modernization efforts, as well as those of the other DOD military services, will depend upon adequate and effective investments in science and technology. These are investments that focus on increasing fundamental knowledge of new capabilities, applying that knowledge, and demonstrating the technological feasibility of capabilities.

Army Acquisition Process

As with all the military services in DOD, the Army's acquisition process generally includes a number of phases: (1) the materiel solution analysis phase, (2) the technology maturation and risk reduction phase, (3) the engineering and manufacturing development phase, and (4) the production and deployment phase. In this report we refer to these phases more simply as materiel solution analysis, technology development, system development, and production. Before these phases begin, the Army must establish requirements to guide the acquisition process. Requirements describe the desired capability through the use of operational performance attributes—the testable and measurable characteristics—necessary for the design of a proposed system and for establishing a program's cost, schedule, and performance baselines.
These requirements include the key performance parameters and system attributes that guide a program's development, demonstration, and testing. The Army approval authority for all Army warfighting capability requirements is the Army Chief of Staff. At the end of each of the initial three phases, the Army holds a milestone review, as shown in figure 1 below, to assess an acquisition program's readiness to proceed to the next phase, consistent with relevant DOD policies and federal statutes. The Assistant Secretary of the Army for Acquisition, Logistics, and Technology is generally the Army's milestone decision authority. The process is also subject to intermediate reviews by senior Army staff.

Prior GAO Work

We have issued several reports related to the Army's modernization efforts that assess areas regarding requirements and technology development, effective cross-functional teams, and mergers and organizational transformations:

Requirements and Technology Development. In our extensive work issued over two decades on requirements and technology development, we have emphasized the importance of promoting leading practices such as communication between end-users and requirements developers; prototyping capabilities as part of technology and product development; and maturing technology to a certain threshold before approving product development.

Cross-Functional Teams. In February 2018, we identified eight leading practices that effective cross-functional teams should have: effective communication mechanisms; well-defined goals common to the team; committed team members; an inclusive team environment where all team members have collective responsibility and individual accountability for the team's work; a well-defined team structure with project-specific rules; autonomy to make decisions rapidly; senior managers who view their teams as a priority; and team leaders empowered to make decisions and provide feedback and developmental opportunities.

Mergers and Organizational Transformations. In July 2003, we found that the key to successful mergers and organizational transformations is to recognize the "people" element and implement strategies to help individuals maximize their full potential while simultaneously managing the risk of reduced productivity and effectiveness that often occurs as a result of changes. We identified nine leading practices new organizations should follow, including ensuring top leadership drives the transformation and establishing a communication strategy, among others.

Army Is Establishing New Organizations to Lead Modernization Efforts and Prioritizing Solutions to Address Near-term Capability Gaps while Identifying Long-term Needs

The Army's cross-functional team pilots and early efforts by the Army Futures Command have prioritized closing near-term capability gaps, and have begun planning the transition to long-term capabilities. The cross-functional teams were pilot programs to improve the quality and timeliness of requirements and technology development. These cross-functional teams are transitioning from independent organizations to organizations within the Army Futures Command, which will also subsume other existing Army organizations tasked with modernization. Army Futures Command is in the process of establishing its policies, processes, and functions as well as its relationships with other Army organizations. It plans to reach full capability by July 2019.
The Army has already identified near-term priorities and realigned over $1 billion in science and technology funding for long-term modernization. Army Futures Command will be responsible for continuing this prioritization.

Army Established Cross-Functional Teams to Pilot Its Modernization Efforts

In an attempt to increase the efficiency of its requirements and technology development efforts, the Army established cross-functional team pilots for modernization. A directive from the then-Acting Secretary of the Army on October 6, 2017, established eight multi-disciplinary cross-functional teams on a pilot basis. The eight cross-functional team pilots were assigned to address the six priority areas, as outlined in table 2. These cross-functional team pilots were intended to: take steps toward achieving the six modernization priorities; leverage expertise from industry and academia; identify ways to use experimentation, prototyping, and demonstrations; and identify opportunities to improve the efficiency of requirements development and the overall defense systems acquisition process.

Cross-functional team pilots were structured to help achieve these goals. Each cross-functional team pilot consisted of core staff and subject matter experts from across the Army. To facilitate the rapid approval of requirements, each cross-functional team pilot was led by a general officer or a senior civilian official who could communicate directly with the highest levels of the Army. The goal of staffing these teams was to ensure that each team had individuals who specialized in acquisition, requirements, science and technology, test and evaluation, resourcing, contracting, cost analysis, sustainment, and military operations. The goal of bringing different experts together was to facilitate collaboration and immediate opportunities for stakeholders to provide input, as opposed to the more traditional requirements development process, in which input has typically been provided separately. Officials told us that, while all of these subject matter experts may have provided input on the requirements development process in the past, placing them on a single team offered the promise of streamlining those efforts and could eliminate the need for multiple reviews. Figure 2 below compares the requirements development process under cross-functional teams to how the Army has traditionally developed requirements.

The cross-functional team locations chosen by senior Army leadership coincide with the locations of related Army organizations or industry hubs, which could help to facilitate this exchange of ideas among technical experts, and inform prototyping and experimentation. For example, the cross-functional team pilot for Future Vertical Lift was stationed at Redstone Arsenal, where the Army's existing research, development, and engineering center for aviation is located. In congressional testimony, the Commander of Army Futures Command stated that in order to achieve the Army's near- and long-term modernization objectives, it will have to reduce its requirements development timelines from 3 to 5 years to less than 1 year. According to cross-functional team members we spoke with, the cross-functional team pilots were able to demonstrate progress toward achieving the goals set out for them.
Specifically, cross-functional team pilots completed requirements documentation for one of the Mounted Assured Positioning, Navigation and Timing System's capabilities in less than a year; completed a directed requirement to replace a small airborne radio for the Integrated Tactical Network in less than 60 days; and completed requirements documentation for a soldier lethality capability in 15 days as opposed to the expected 4 months.

Army Futures Command Scheduled to Become Fully Operational by July 2019

The Army has taken initial steps to consolidate all its modernization efforts under one authority, in addition to its initiation of the cross-functional team pilots. In particular, the Secretary of the Army established the Army Futures Command through the issuance of a general order on June 4, 2018. According to Army documentation, the intent of the new command is to provide unity of command, accountability, and modernization at the speed and scale required to prevail in future conflicts. This organization is led by a four-star general, like its organizational peers: Army Materiel Command, Training and Doctrine Command, and Forces Command. Establishing Army Futures Command is the most significant institutional change to the Army since it reorganized in 1973 in the wake of the Vietnam War. The Army is in the process of establishing the new command, but has just begun to define its organizational structures. According to the 2018 Army general order, Army Futures Command reached initial operating capability in July 2018.

According to Army Futures Command officials and documentation, the new organization is charged with integrating several existing requirements and technology development organizations—such as the Army Capabilities Integration Center in Fort Eustis, Virginia, and the Research, Development, and Engineering Command headquartered in Aberdeen, Maryland—as well as the cross-functional team pilots. The cross-functional team pilots are in the process of being integrated into the new command and, according to Army officials, will continue to be responsible for managing the Army's six modernization priorities. In addition, Army Futures Command will be supported by a number of operational and administrative offices to assist the components with executing their missions. According to Army officials and documentation, the new command will be organized around three major components:

Futures and Concepts: responsible for identifying and prioritizing capability and development needs and opportunities. This organization subsumed the Army Capabilities Integration Center on December 7, 2018—formerly part of Army Training and Doctrine Command, which focuses primarily on the education and training of soldiers.

Combat Development: responsible for conceptualizing and developing solutions for identified needs and opportunities. This organization will subsume Research, Development and Engineering Command—currently a part of Army Materiel Command, which focuses primarily on sustainment.

Combat Systems: responsible for refining, engineering, and producing new capabilities. The acquisition program offices will communicate with the new command through this organization to ensure integration of acquisition functions. However, the program offices will continue to report to the Assistant Secretary of the Army for Acquisition, Logistics and Technology.

Army Futures Command will be headquartered in Austin, Texas, and existing organizations are not expected to change their locations.
According to Army officials and documentation, the Army chose Austin because of its proximity to science, technology, engineering, and mathematics talent, as well as private sector innovators that officials believe will assist the command in achieving its modernization goals. According to senior Army leadership we spoke with, the new command headquarters will have around 300 staff in place by July 2019, a workforce that may grow to 500 employees—100 military and 400 civilians. Our analysis of the Army's plans for initial staffing at the Army Futures Command headquarters, based on data from July 1, 2018, found that about one-third of headquarters staff would be involved directly in modernization efforts, such as engineers and operations specialists, and the remainder would consist of support staff, including legal counsel and contracting professionals. Figure 3 shows the locations of the known major Army Futures Command components, the eight cross-functional teams being integrated under Army Futures Command, and its new headquarters.

Although initial steps have been taken to establish the new command, key steps have not yet been completed. The Army stated in the executive order establishing the command that it will consider Army Futures Command fully operational once it is sufficiently staffed, with operational facilities, secure funding, and the ability to execute its assigned mission, roles, and responsibilities. At full operating capability, officials told us, Army Futures Command will also have finalized the organizational structure and the reporting responsibilities of its various components. However, Army Futures Command has not yet established policies and procedures detailing how it will execute its assigned mission, roles, and responsibilities. For example, we found that it is not yet clear how Army Futures Command will coordinate its responsibilities with existing acquisition organizations within the Army that do not directly report to it. One such organization is the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology—the civilian authority responsible for the overall supervision of acquisition matters for the Army—and the acquisition offices it oversees. To mitigate concerns about coordination, in August 2018, the Army issued a directive signed by the Secretary of the Army designating the military deputy of this office as an advisor to Army Futures Command, a designation aimed at establishing a means of coordination. Army Futures Command officials have also stated that the Assistant Secretary of the Army for Acquisition, Logistics and Technology will retain full acquisition authorities as required by law. Army documentation shows that further policies and procedures are expected to be issued in 2019.

The Army's Efforts Have Balanced Modernization by Prioritizing Mitigation of Near-term Capability Gaps while Identifying Long-term Needs

The Army recognizes the need to balance near-term and long-term modernization over time. To do so, the Army has balanced its modernization efforts by funding the closure of near-term capability gaps, and identifying long-term needs to be funded. Since announcing the modernization efforts in 2017, the Army has directed more funding toward closing near-term capability gaps. For example, as part of the planning for the fiscal year 2019 budget process, the Army identified 67 high-priority programs, such as the M-1 Abrams tank and the AH-64 Apache helicopter, with capability gaps in need of further investment.
To support these priorities, the Army identified a need for $16 billion in increased funding in fiscal years 2019 through 2023. The 2018 Army Modernization Strategy report identified the need for additional resources for near-term efforts, including plans to spend billions of dollars for acquisition of maneuverable short range air defense capabilities in fiscal years 2020 through 2024. The same report described plans to spend hundreds of millions of dollars over the same period for prototyping technologies for the Next-Generation Combat Vehicle, a longer-term capability.

The Army has also begun to plan research and development efforts for its long-term modernization needs. The Army identified long-term capabilities for all of the modernization priorities, as well as dates that science and technology efforts should transition to programs of record. Army officials stated that, ultimately, multiple programs of record may be considered for each capability area. For example, the Army identified science and technology efforts to develop an advanced powertrain for the Next Generation Combat Vehicle and identified planned transition dates to the program in fiscal years 2020 and 2023. The 2018 Army Modernization Strategy report provides additional details on long-term modernization efforts for three of its six priorities: Future Vertical Lift, Soldier Lethality, and Next-Generation Combat Vehicle. Figure 4 below presents a timeline for some of the proposed capabilities within each of the six priorities.

The Army has realigned some resources to support its long-term modernization priorities. In identifying long-term capabilities, we found that the Army has evaluated its science and technology portfolio to determine alignment with the six modernization priorities. For example, as part of an October 2017 review for the office of the Deputy Under Secretary of the Army, the eight cross-functional team pilots examined science and technology investments to identify which efforts contributed to the priorities and which did not. According to this review and Army officials, the Army realigned over $1 billion in funding toward the priorities for fiscal years 2019 through 2023, for a total of $7.5 billion directed at these priorities. The review preserved $2.3 billion in funding for basic research for the same time period. According to Army officials, similar science and technology reviews will be conducted annually to help cross-functional teams manage their respective programs' progress and identify further opportunities for investment.

To fund future modernization efforts, both the science and technology review and the review for the fiscal year 2020 budget process also identified opportunities to reduce funding for, or eliminate, some existing programs. For example, plans for the air and missile defense portfolio include an option to divest from legacy short range air defense programs in fiscal year 2029 if the Army's Indirect Fires Protection Capability program becomes fully operational. This aligns with statements from Army officials that program decisions will be driven not by specific schedules but by the maturity of replacement capabilities.
New Organizations Have Generally Applied Leading Practices but the Army Futures Command Has Taken Limited Steps to Fully Apply These Practices

The Army has generally applied leading practices for technology development and establishing effective cross-functional teams, and has begun to apply leading practices for mergers and organizational transformations for the Army Futures Command. During the Army's pilot phase for its eight cross-functional teams, the teams took actions consistent with leading practices for technology development, such as bringing together requirements developers and warfighters, planning prototype demonstrations, and maturing technology prior to beginning an acquisition program. The Army's pilot teams also applied eight leading practices we have identified for establishing effective cross-functional teams to varying degrees. In addition, senior Army leadership has been clear in its support for the new command and has clearly outlined a timeframe for its establishment, actions that are in line with the leading practices for mergers and organizational transformations we have identified in prior work. Whether further application of these leading practices will continue under the new command is unclear as the role of the cross-functional teams has not yet been formalized and Army Futures Command has not yet taken all the steps needed to reach full operational capability.

Cross-Functional Team Pilots Generally Applied Leading Practices for Technology Development, but Plan to Move into System Development Early

We found that the Army's eight cross-functional team pilots generally applied leading practices identified in our prior work when it came to their requirements and technology development efforts. As we found in April 2018, positive outcomes result from taking a knowledge-based approach to product development that demonstrates high levels of knowledge before making significant resource commitments. Our review of the Army's cross-functional team pilots found that they have generally applied leading practices to the following two areas:

Promoted communication between end-users and requirements developers. The Army directive that established the cross-functional team pilots as well as these teams' charters state that teams will follow a methodology of collaboration between warfighters and developers to prepare capability documents. An official from the Synthetic Training Environment cross-functional team told us that involving industry representatives and warfighters helps the cross-functional team get "closer to what 'right' looks like" early in the requirements development process. By promoting communication between industry representatives and warfighters, the cross-functional teams helped ensure that developer resources better matched end-user needs.

Planned to prototype capabilities as part of technology and product development. The Army directive establishing the cross-functional team pilots states that cross-functional teams should incorporate iterative experimentation and technical demonstrations to inform capability requirements. As an illustration of this practice, officials from the Future Vertical Lift cross-functional team told us that they will hold a "fly off" between two competitive prototypes of the Future Attack Reconnaissance Aircraft in fiscal year 2023 before choosing a design for follow-on testing and integration in fiscal year 2024.
However, we are concerned that the Army has plans to mature technology to a level lower than the threshold recommended by leading practices before beginning system development. Specifically, we found that the Army's October 2017 science and technology review identified a goal of demonstrating new technologies in a relevant environment, such as a highly realistic laboratory setting, before transitioning them to specific platforms or programs. As an example, the Soldier Lethality cross-functional team began maturing technology for the next generation squad automatic rifle to this level of maturity to prepare it for the transition to product development, scheduled for the end of fiscal year 2019. Under leading practices that we identified, prototypes should be demonstrated in an operational or realistic environment—not simply in a relevant environment—prior to starting system development to ensure that they work as intended for the end-user.

The Army's choice to start a formal acquisition program at lower levels of technology maturity raises concerns that are consistent with those we have raised in the past. Our past work indicates that by demonstrating technologies only in a relevant rather than an operational environment, the Army increases the risk that new capabilities will not perform as intended and require further technological maturation while in system development. This could raise costs and extend timelines for delivery of equipment to the warfighter. For example, almost two decades ago in a 1999 report, we recommended demonstrating technologies in an operational environment prior to system development and DOD concurred with that recommendation. We have also reported the importance of achieving this level of maturity on an annual basis since 2003, most recently in 2018, in our assessment of DOD's major weapon system acquisition programs. In addition, we again reiterated this leading practice in 2016 in our technology readiness assessment guide. While DOD has a policy, based in statute, that generally requires major defense acquisition programs to, at a minimum, demonstrate technologies in a relevant environment before system development, that policy does not preclude the cross-functional teams from pursuing a higher level of maturity. Such an approach would be consistent with leading practices that recommend maturing technologies to a higher level. By applying these leading practices, the cross-functional teams could better ensure that prototypes are demonstrated in an operational or realistic environment prior to starting system development to ensure that they work as intended for the end-user.

Cross-Functional Team Pilots Demonstrated Some Leading Practices for Effective Teams, but Few Steps Taken to Incorporate these Practices in New Command

Our prior work has identified eight leading practices that organizations should use for establishing effective cross-functional teams. In reviewing the Army's eight cross-functional team pilots, we found that they have applied these practices to varying degrees. Table 3 describes these leading practices. All eight Army cross-functional team pilots fully applied four of these leading practices.

Well-defined team goals. We found that each cross-functional team pilot charter clearly defined its team's goals.
For example, the Long-Range Precision Fires cross-functional team charter states that it will rapidly integrate and synchronize the requirements development process to deliver cutting edge capabilities to the operating force as the best possible return on investment for warfighters. In addition, senior Army leadership approved the charters containing each team's goals, ensuring that the goals defined for the teams were linked to the Army's larger goal of modernization.

Open and regular communication. Members of all eight cross-functional team pilots shared information with each other, sought feedback, and communicated with team leaders and senior Army leadership. For example, officials from the Next Generation Combat Vehicle cross-functional team told us that ongoing dialogue with senior Army leadership resulted in numerous rounds of refined guidance. The cross-functional team took that guidance, reconvened, assessed options, and then presented another round of updates to Army senior leadership. Moreover, the directive establishing the cross-functional team pilots requires that they develop capability documents, informed by experimentation and technical demonstrations, to ensure that planned capabilities are technologically feasible, affordable, and therefore can eventually be provided to soldiers. According to Army officials, developing such documents requires open and regular communication between team members who have expertise in diverse fields such as contracting, cost analysis, and testing.

Autonomy. The eight cross-functional team pilots' charters show, and interviews with members confirm, that teams are granted substantial autonomy by senior Army leadership. The cross-functional team charters give teams the authority to solve internal problems through market research, prototyping, technical demonstrations, and user assessments. For example, the Synthetic Training Environment cross-functional team and senior Army leadership stressed to us the importance of experimentation as an opportunity to "fail early and fail cheap." According to cross-functional team members, this allows cross-functional teams to move on and avoid expensive and time-consuming failures later in the acquisition process, as has happened with the Army in the past. Furthermore, cross-functional teams can reach out to subject matter experts needed to develop requirements without having to obtain permission from senior Army leadership.

Committed team members. All eight cross-functional team pilots include members with expertise in diverse fields who are committed to achieving team goals. For example, the Network cross-functional team charter states that the team should consist of experienced and committed subject matter experts executing disciplined initiatives and willing to take prudent risks. In addition, the directive establishing the cross-functional teams states that they should leverage industry and academia where appropriate to increase knowledge and expertise. Staffing information provided by multiple cross-functional teams demonstrates the diversity of expertise the Army has applied to these efforts. Cross-functional team members also provided us with multiple examples of how their teams have leveraged outreach with industry and academia to improve their understanding of requirements and technology.

Additionally, we found that the eight cross-functional team pilots have at least partially applied the following four leading practices.

Senior management support.
Senior Army leaders, including the Secretary and the Chief of Staff, have championed the cross-functional team pilots in public statements. Although an Army official told us that he was aware of a member of a cross-functional team (who left the team) receiving a civilian achievement award, we did not find any documentary evidence of senior Army leaders providing incentives or recognition to members of the eight cross-functional team pilots. Because many members of cross-functional teams, including some leaders of these teams, work in a number of different roles, they do not have a consistent chain of command that can provide incentives or recognition across all of their activities. The "dual-hatted" nature of team members—in which they work for their parent organization as well as the cross-functional team pilot—may further complicate full application of this leading practice.

Empowered team leaders. The team leaders of all eight cross-functional team pilots are empowered to make decisions and regularly interact with senior Army leaders. While an Army official stated that team leaders and Army leadership provide guidance to cross-functional team members, we did not find any documentary evidence of these leaders providing feedback to members of those teams. However, many members of the cross-functional teams, including directors, are only temporarily assigned to cross-functional team pilots because they work in other functions simultaneously.

Well-defined team structure. While most cross-functional team pilots have established operating procedures and organizational structures, we found that some have not provided training to their members on the operations of cross-functional teams and how they relate to other organizations. Our previous work identified appropriate training as a key characteristic of a well-defined team structure. Most cross-functional team charters do not address the issue of training. Through our discussions with the cross-functional teams, we found the following with respect to training:

An official from the Soldier Lethality cross-functional team told us that team members received training and planned to attend further training to enhance creative and "outside-the-box" thinking.

The director of the Network cross-functional team told us that, even though he did not receive training, he was able to leverage his previous experience leading matrixed organizations.

The Long-Range Precision Fires cross-functional team told us that members started their work without any training, and this posed a challenge as they were unfamiliar with each other's roles and work.

Inclusive team environment. The founding documents for the cross-functional team pilots themselves generally did not address attributes of this leading practice, such as having team members that support and trust one another. However, discussions with team members indicate some teams have invested in creating such an environment. The Soldier Lethality cross-functional team members stated that working in a cross-functional team, as opposed to working as separate individuals in disparate offices, allowed them to write requirements faster. It also created an atmosphere in which members got to know each other's experiences and trust each other's views. Officials from the Synthetic Training Environment cross-functional team told us they spent their first week gaining an understanding of each team member's role on the team to foster such inclusivity.
As previously described, the cross-functional team pilots were an effort to achieve several goals, including to identify ways the Army could increase efficiency in requirements and technology development. According to Army officials, the teams have shown initial progress in doing so, delivering requirements—and in some cases developing capabilities for delivery in the next two years—to the warfighter in shorter than anticipated timeframes. However, the Army has not yet definitively established the cross-functional teams' roles, responsibilities, and how they will operate within Army Futures Command. As a result, it is unclear if the Army will benefit from the experience and expertise of these teams applying leading practices as they transition into Army Futures Command. Until the Army takes formal steps to institutionalize the beneficial practices used by the cross-functional teams during the pilot phase, such as autonomy, proactive decision making, and access to senior leadership, it will be missing a valuable opportunity to integrate these practices into the new command.

Army Futures Command Does Not Have a Formal Plan to Identify and Share Lessons Learned from Cross-Functional Team Pilots

The Army directive that established the cross-functional teams directed each team pilot to capture best practices and lessons learned and report them to the Army office that oversaw their efforts. Officials from the cross-functional teams described to us lessons they learned and planned to pass on to their oversight office for the benefit of Army Futures Command. For example, officials from the Air and Missile Defense cross-functional team stated that having direct access to the Under Secretary and the Vice Chief of Staff of the Army is important for obtaining quick decisions, which save time and money in getting capabilities to the warfighter. While officials from Army Futures Command told us that they intend to collect lessons learned from the cross-functional team pilots, they do not yet have a formal plan to identify and incorporate lessons learned. Since the cross-functional team pilots were established to experiment with new approaches, it is important that they take steps to capture the lessons they have learned—positive and negative—so they can be shared as these teams are integrated into Army Futures Command. If the Army fails to institutionalize these lessons learned in the new command, it risks losing the benefits from the experiences of these pilots, thereby either repeating past mistakes or failing to benefit from past practices that worked well. If it can capture the lessons learned, it has an opportunity to accelerate the progress these teams made during their pilot phase and spread the benefits across all the cross-functional teams and across a wider range of specific military capabilities they are pursuing. In our discussions with Army Futures Command officials, they agreed that formalizing and implementing a plan to collect and incorporate lessons learned would be beneficial.

Incorporating Leading Practices for Organizational Transformations Could Benefit Army Futures Command

Army officials told us that the establishment of Army Futures Command represents a dramatic organizational transformation in how the Army will develop weapon systems and platforms. In our previous work on mergers and organizational transformations in federal agencies, we have identified several leading practices, as shown in table 4 below, that can help agencies undertaking such transformational efforts.
As the Army is standing up Army Futures Command, it has begun to apply some of the leading practices for mergers and organizational transformations. For example, senior Army officials have provided a clear and consistent rationale for establishing the new command in official directives and in public appearances. They have also clearly described the mission of the Army Futures Command and established a timeline for its implementation. However, the command has not yet formalized and institutionalized its authorities, responsibilities, and policies and procedures, nor taken steps to apply these or other leading practices.

While we observed a strong organizational unity of purpose and collaboration from the current senior leadership in the Army for the Army Futures Command, this could change as the Army's leadership changes. For example, according to law, the tenure of the Chief of Staff of the Army is generally limited to 4 years, and the current Chief of Staff has already served 3 years. Furthermore, the Secretary of the Army is appointed by the President, subject to the advice and consent of the Senate, and therefore may change with new presidential administrations and during administrations. For example, the 6 people confirmed as Secretary of the Army prior to the current secretary served an average of 959 days—about 2 and one-half years. The current secretary has already served about 1 year. Further, senior Army officials told us that they expect changes at both top and mid-tier leadership within the new command to occur periodically as a result of the Army's normal system of rotations for officers. For example, a senior military official in Army Futures Command told us that they expect commanders of components to rotate every 4 years. Therefore, because this modernization effort is expected to span a decade or longer, continued support from current and future senior Army officials, such as the Chief of Staff and the Secretary of the Army, will be essential to ensure the success of the new command into the future.

We have previously reported in our work on internal controls that it is important to establish the organizational structure necessary to enable an entity to plan, execute, control, and assess the organization in achieving its objectives as well as respond to potential changes in, among other things, personnel. By fully applying key principles of major mergers and organizational transformations as the Army completes the process of establishing the Army Futures Command, the Army can better ensure the new command realizes its goals for modernization through development of well-defined requirements, incorporation of mature technologies, and development of systems that provide the warfighter with the capabilities needed for future conflicts.

Conclusions

The Army has made substantial changes to how it intends to coordinate and oversee modernization efforts, due at least in part to the lost years and billions of dollars from past efforts to modernize. The Army has taken positive steps to improve its current modernization efforts and has already seen some initial successes. The creation of the new command, the integration of the cross-functional teams to better refine requirements and cultivate technologies, the realignment of several existing organizations, and the shifting of personnel give the Army a unique opportunity to take advantage of leading practices and its own lessons learned. The Army, however, faces some key challenges.
In particular, the Army’s intent to transition technologies to weapon systems before technologies are matured is inconsistent with leading practices, risks delays in equipping the warfighter, and can potentially lead to cost overruns. In addition, the cross-functional team pilots have demonstrated some initial successes in shortening the requirements development process—and, more generally, in collaborating across the Army—but it is not clear what steps the Army Futures Command plans to take to incorporate the experience and expertise of these teams in applying leading practices and thereby sustain these benefits. Further, the Army lacks a formal plan to identify and incorporate lessons learned from the cross-functional teams as Army Futures Command becomes fully operational and could thereby miss an opportunity to leverage the experience of these teams on past practices that worked well and those that did not. Finally, as the Army finalizes the roles, authorities, and responsibilities for the Army Futures Command it can benefit from applying leading practices related to mergers and organizational transformations. This can help ensure that Army Futures Command realizes its goals for modernization including unity of command, accountability, and modernization at the speed and scale required to prevail in future conflicts. Recommendations for Executive Action We are making four recommendations to the Secretary of the Army: The Secretary of the Army should ensure that the Commanding General of Army Futures Command applies leading practices as they relate to technology development, particularly that of demonstrating technology in an operational environment prior to starting system development. (Recommendation 1) The Secretary of the Army should ensure that the Commanding General of Army Futures Command takes steps to incorporate the experiences of the cross-functional teams in applying leading practices for effective cross-functional teams. (Recommendation 2) The Secretary of the Army should ensure that the Commanding General of Army Futures Command executes a process for identifying and incorporating lessons learned from cross-functional team pilots into the new command. (Recommendation 3) The Secretary of the Army should ensure that the Commanding General of Army Futures Command fully applies leading practices for mergers and organizational transformations as roles, responsibilities, policies and procedures are finalized for the new command. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to the Department of Defense for review and comment. In its written comments, reproduced in appendix II, the Department concurred with all four of our recommendations and made certain technical comments which we incorporated as appropriate. In concurring with our recommendation on demonstrating technology in an operational environment, the Department of Defense requested that we reword the recommendation to reflect that technology maturity be considered with other factors, such as risk assessment and troop availability. We understand the Department’s desire for flexibility, but continue to believe that reaching higher levels of technological maturity, through demonstrating technologies in an operational environment prior to beginning system development adds significant value by reducing risk; something that could help the Army deliver capabilities it believes are urgently needed. As such, we made no change to the recommendation. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Army, the Commander of Army Futures Command, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or ludwigsonj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made significant contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope and Methodology

Section 1061 of the National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to report on the Army's modernization strategy. This report assesses (1) the status of the Army's efforts to establish new acquisition organizations while balancing near- and long-term modernization; and (2) the extent to which the Army has applied leading practices to do so.

To assess the status of the Army's efforts to establish new acquisition organizations, we reviewed the Army general orders and directives that established these organizations. This review included documentation such as:

Army General Order 2018-10, which established the Army Futures Command and reassigned existing organizations, such as the Army Capabilities Integration Center from the Training and Doctrine Command and the eight cross-functional team pilots, to the new command.

Army Directive 2017-24, which established the cross-functional team pilots and provided guidance on how they should operate to improve the quality and speed of materiel development activities.

Army Directive 2017-22, which provided guidance for implementation of acquisition reform policy and initiatives to reflect modernization, such as Directive 2017-29 to improve the integration of science and technology into concept, capability, and materiel development.

Army Regulation 73-1 (Test and Evaluation Policy)

Army Regulation 70-1 (Army Acquisition Policy)

Army Regulation 71-9 (Warfighting Capabilities Determination)

Training and Doctrine Command Regulation 71-20 (Concept Development, Capabilities Determination, and Capabilities Integration)

Headquarters, Department of the Army Executive Order 176-18 (Establishment of Army Futures Command)

We also interviewed the Under Secretary of the Army, officials from Army Futures Command and related organizations like the Office of Process Innovation and Integration, members of the eight cross-functional teams, the Army Capabilities Integration Center, and the Army Research, Development, and Engineering Command.
To assess the balance of modernization priorities between the near term and long term, we reviewed documentation related to those lines of effort, including:

the 2018 Army Modernization Strategy report, which describes the rationale behind modernization and the efforts for each priority;

the Strategic Portfolio Analysis Review for Fiscal Year 2020, which is a part of the budget process to determine priorities, align science and technology efforts to capabilities, and plan milestones;

the Deputy Under Secretary of the Army and Research and Development Command Science and Technology Review of October 2017, which describes the science and technology priorities for each cross-functional team and realigns funding through identifying opportunities to divest; and

Strategic Capability Roadmaps, which provide a timeline for the development and fielding of the capabilities being developed by some of the cross-functional teams.

To review these documents, we created a data collection instrument to capture the efforts as they related to each of the eight cross-functional teams and consolidate the different sources of information. We first collected information about the capabilities in which cross-functional team officials indicated their involvement. For these capabilities, we recorded planned milestones and the date that the capability would first be operational. We also recorded whether the capability was new or an incremental upgrade, the science and technology efforts to develop that capability, and whether those efforts contributed to other capabilities. We then collected data related to the general efforts of the cross-functional teams. These efforts included divestment opportunities and the amounts of funding aligned to the associated modernization priority. We also interviewed officials from the cross-functional teams, the office of Army G-8, and other Army offices.

To address the extent to which the Army's cross-functional team pilots applied leading practices for technology development, we:

Reviewed cross-functional team charters, the 2018 Army Modernization Strategy report, the Fiscal Years 2019 and 2020 Strategic Portfolio Analysis, the Army's Fiscal Year 2019 President's Budget, and the Army's October 2017 Science and Technology Review to identify actions related to the development of near- and long-term capabilities for the Army's six modernization priorities that align with the eight cross-functional teams.

Interviewed cross-functional team officials to learn about technology development activities they conducted or planned to conduct regarding these priorities.

Selected leading practices from our body of work on weapons systems acquisitions based on which ones are most relevant to where the cross-functional teams' activities fit within the broader weapons systems acquisition process.

Consolidated relevant data from Army documentation and statements from Army officials regarding their technology development efforts in a record of analysis containing a description of leading practices for technology development identified in our prior work.

Compared Army documentation and cross-functional team officials' statements against leading practices for technology development identified in our prior work, specifically promoting communication between requirements developers and end-users, prototyping technologies, and maturing technology to a specific threshold.
To address the extent to which cross-functional team pilots applied leading practices for establishing effective cross-functional teams, we:

Reviewed Army Directive 2017-24, which established the cross-functional teams, as well as each team's charter.

Interviewed officials from each cross-functional team and other Army offices regarding the collaborative, communicative, and technology development efforts of these teams.

Consolidated and analyzed data from Army documentation and statements from Army officials related to leading practices for establishing effective cross-functional teams, identified in our prior work.

Compared the content of the Army documents and statements from cross-functional team officials against leading practices identified in our prior work to determine whether cross-functional teams had demonstrated actions consistent with these practices. We then had a second analyst check the same documents and statements to verify our initial result.

To address the extent to which Army Futures Command applied leading practices for mergers and organizational transformations and incorporated lessons learned from the cross-functional team pilots, we:

Reviewed Headquarters Department of the Army Executive Order 176-18, which established the Army Futures Command, and Army Directive 2017-33, which established the Modernization Task Force.

Interviewed senior Army officials involved in the establishment of the new command and cross-functional team officials.

We selected leading practices identified by GAO for mergers and organizational transformations in our prior work because the establishment of Army Futures Command represents the largest organizational transformation the Army has undertaken since 1973 and includes merging existing Army organizations into a new command. Although Army Futures Command is not yet fully operational, we analyzed Army documentation and officials' statements regarding the new command against leading practices identified in our prior work and the lessons learned from the cross-functional teams to assess whether it had applied these leading practices.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, J. Kristopher Keener (Assistant Director), Joe E. Hunter (Analyst-in-Charge), Jenna Blair, Emily Bond, Matthew T. Crosby, Cale Jones, Kevin O'Neill, John Pendleton, John Rastler, A. Maurice Robinson, and Roxanna Sun made significant contributions to this review.
Why GAO Did This Study

In order for the Army to maintain its technological edge over potential adversaries, it plans to invest in near- and long-term modernization efforts. However, the Army has struggled with modernization initiatives in the past. For example, the Future Combat System was canceled after a cost of $21 billion and delivery of few new capabilities. The National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to report on the Army's modernization strategy. This report assesses (1) the status of the Army's near- and long-term modernization efforts; and (2) the extent to which the Army has applied leading practices to these efforts. GAO reviewed Army directives, procedures, and policies; and compared the Army's efforts with leading practices for requirements and technology development, effective cross-functional teams, and mergers and organizational transformations.

What GAO Found

Since 2017, when the Army announced its initiative to update its forces and equipment with improved capabilities—known as modernization—it has established and assigned eight cross-functional teams to pilot how to address these needs; established the Army Futures Command as the focal point for modernization efforts, with a four-star general to oversee it; and realigned over $1 billion in science and technology funding to support modernization efforts within the $7.5 billion expected to be spent over the next 5 years.

To date, the Army has generally applied leading practices identified by GAO to its modernization efforts. For example, the cross-functional team pilots generally applied leading practices for determining requirements and technology development and for establishing effective teams. Similarly, as the Army began the process of establishing the Army Futures Command, it has started to apply the leading practices for mergers and organizational transformations by establishing a clearly defined mission and providing a clear, consistent rationale for the command. However, GAO identified other areas where the Army has not fully applied leading practices to its modernization efforts, including the following:

Under the modernization effort, the Army plans to begin weapon systems development at a lower level of maturity than what is recommended by leading practices. GAO has raised concerns about this type of practice for almost two decades for other Army acquisitions, because proceeding into weapon systems development at earlier stages of technology maturity raises the risk that the resulting systems could experience cost increases, delivery delays, or failure to deliver desired capabilities. Taking this approach for acquisitions under the modernization effort raises similar concerns for the Army's six prioritized capability needs.

The Army has not developed a plan for capturing the lessons learned from the cross-functional team pilots, and therefore may miss an opportunity to leverage the experience of these teams in applying leading practices.

What GAO Recommends

GAO is making four recommendations, including that the Army follow leading practices for maturing technologies to a higher level than currently planned and develop a plan to capture lessons learned from the cross-functional teams. DOD concurred with all the recommendations.
Background

Law Enforcement Interaction with Individuals with Mental Illness

Since the 1960s, the percentage of individuals with mental illness being treated in a hospitalized setting has decreased dramatically in an effort to move care away from institutional settings into a wider range of community-based treatment. This process, known as "deinstitutionalization," has been driven in part by limited funding available for mental health services, changes in treatment philosophy, and medical advancements. According to a 2015 Federal Bureau of Investigation (FBI) publication, one result from this shift is that local police departments have had to meet the growing needs of individuals suffering mental health emergencies (e.g., a schizophrenic episode), and are often the first source of assistance in helping to arrange treatment for these individuals. Similarly, the International Association of Chiefs of Police (IACP) reports that police officers often have to "manage situations that result from a history of mental health policy and legislative decisions made by federal and state governments." According to the IACP, law enforcement officers—generally local police—may then find themselves serving in a role similar to that of a social worker in attempting to locate treatment services for such individuals. The IACP also reports that such increasing interactions may result in individuals with mental illness being arrested and placed in jail, rather than receiving treatment from mental health facilities. This can result in a cycle of arrest, imprisonment, and recidivism for such individuals. In addition, interactions between law enforcement officers and individuals with mental illness have the potential to escalate into violence.

In recent years, a number of professional organizations and advocacy groups such as IACP, the Police Executive Research Forum (PERF), the National Alliance on Mental Illness, and the Council of State Governments Justice Center (CSG JC) have researched and advocated for different approaches that may reduce the likelihood of violent encounters or help officers connect the individuals they encounter with proper treatment services. In addition, DOJ's Bureau of Justice Assistance (BJA), within its Office of Justice Programs, has created a compendium of existing information and research in the field of state and local law enforcement responses to individuals with mental illness.

Federal law enforcement officers and agents may interact with individuals displaying signs of mental illness in a number of different types of incidents while performing their various missions, such as protecting federal property or officials or when apprehending subjects of an investigation. Figure 1 provides one example of a possible incident an officer or agent might experience and the response options available. Generally, when federal officers and agents encounter individuals displaying signs of mental illness—and there is no evidence of a federal crime—they may refer them to local law enforcement or health care providers to assess their mental health and determine whether they need further health care. If local providers determine that such care is needed, it is generally provided through a voluntary or involuntary commitment to a local mental health services provider. One exception to this is for correctional officers and other staff within BOP, as these staff interact with individuals with a diagnosed mental illness as part of their daily duties in ensuring a secure prison environment.
BOP pre-designates all inmates entering its institutions and assigns initial mental health and medical screen assignments. Throughout an inmate's incarceration, BOP's psychologists, psychiatrists, and qualified mid-level practitioners can determine a new mental health care level following a review of records and a face-to-face clinical interview.

Relevant Legislation and Departmental Efforts

Under section 504 of the Rehabilitation Act of 1973, as amended, discrimination on the basis of disability in federally funded and federally conducted programs and activities is prohibited. A person with a disability includes anyone who has a physical or mental impairment that substantially limits one or more major life activities, has a record of such impairment, or is regarded as having such an impairment. DHS and DOJ both currently have efforts underway, in various stages of development, to have their components review their existing policies, guidance, and training in response to departmental guidance on addressing individuals with disabilities and obligations under section 504. Pursuant to departmental guidance, after completing their reviews, components are to determine areas that could be enhanced. Within DHS, components have been asked to report on the status of their efforts to DHS's Office for Civil Rights and Civil Liberties (CRCL). Within DOJ, the Office of the Deputy Attorney General (ODAG) is overseeing components' efforts.

In addition, the 21st Century Cures Act requires the Attorney General to provide direction and guidance for the following by December 13, 2017:

"Programs that offer specialized and comprehensive training, in procedures to identify and appropriately respond to incidents in which the unique needs of individuals who have a mental illness are involved, to first responders and tactical units of—(A) Federal law enforcement agencies; and (B) other Federal criminal justice agencies, such as the Bureau of Prisons and the Administrative Office of the United States Courts, and other agencies that the Attorney General determines appropriate."

"The establishment of, or improvement of existing, computerized information systems to provide timely information to employees of Federal law enforcement agencies, and Federal criminal justice agencies to improve the response of such employees to situations involving individuals who have a mental illness."

Discussion Groups Identified Several Challenges that Officers and Agents Encounter When Responding to Incidents Involving Individuals with Mental Illness

According to the DHS and DOJ law enforcement officers and agents we interviewed, they are not positioned to diagnose any specific mental health condition that an individual might have, as they are not trained mental health professionals. However, responding to incidents involving individuals with mental illness can be challenging for multiple reasons, including determining whether the person is suffering from a mental illness or from another issue, such as drug addiction, and communicating with the person, for example, when a person may be suffering from delusions. These officers and agents face these challenges while also being responsible for ensuring their own safety and that of others in the area. Some of the common challenges officers and agents identified during our discussion groups follow.
Identifying Whether an Individual Has a Mental Illness

Some officers and agents in our group discussions stated that when encountering individuals displaying erratic behavior (e.g., rapid or nonsensical speech, paranoid or delusional statements), it can be difficult to determine if that behavior is attributable to a mental illness or the influence of drugs. Specifically, Border Patrol agents—who are broadly responsible for preventing the illegal entry or exit of people and goods at places other than ports of entry—stated that determining whether someone has a mental illness or is experiencing other issues is challenging and may be complicated by language barriers. Border Patrol agents may at times encounter large groups of people attempting to cross the border at one time and thus have limited time to make that determination.

ATF officers—who may encounter individuals with a mental illness who are targets of an investigation—commented that incidents may involve an individual who could suffer a mental illness (treated or untreated), or be under the influence of alcohol or drugs. Unless the individual discloses his or her condition, or family or friends are there to explain the condition, officers would not know the cause of the individual's behavior. They explained that if mental health information about a suspect is known in advance of an operation, officers can adjust their approach; however, they told us that most of the time they do not know if someone has a mental health condition and how it might present itself.

Similarly, an FBI police officer—who may encounter individuals displaying signs of mental illness if those individuals enter an FBI office—told us that it can be challenging to deal with an individual who is acting erratically, not knowing precisely whether the behavior is attributable to a mental illness, and there may be limited time available to address an individual posing a safety risk. BOP corrections officers also echoed this challenge. They said that despite having back-up mental health staff on call, their initial reaction to an inmate exhibiting some type of erratic behavior has to be fairly quick to secure the safety of the staff and other inmates. Officers and agents across components and departments made clear that they are not mental health professionals or psychologists and, as such, are charged with responding to the behaviors that are exhibited to secure the scene.

Communicating with Individuals with a Mental Illness

Some of the officers and agents in our discussion groups stated that communicating effectively with someone exhibiting signs of a mental illness and understanding what he or she may be going through or how he or she sees reality can be challenging. One officer told us that trying to make individuals who may have a mental illness understand that their reality is not everyone else's reality is particularly challenging. This was very difficult, for example, for Secret Service Uniformed Division officers who explained that they encounter individuals when providing security along the White House fence and for FPS officers, who often encounter individuals displaying signs of mental illness near or in federal buildings that they are assigned to protect. As the Secret Service officers explained, even if individuals exhibit delusional behavior, so long as they have not broken any laws, then they are free to be near protected federal venues and the officers are limited in any actions they can take.
One officer, discussing the challenges in speaking with someone with a mental illness who may be experiencing delusions, stated that the person is "wholeheartedly convinced that what he or she perceives is the true reality." Officers and agents who we met with in CBP reported that they rely on common sense to dictate appropriate action and use reasonable efforts to protect themselves and others. They noted that additional training on communicating effectively with individuals suffering from mental illness could be beneficial.

The challenges noted above in identifying causes of erratic behavior or effectively communicating with individuals with a mental illness can make it difficult for officers to resolve a tense situation or apprehend an individual (if necessary) as securely or peacefully as possible. For example, Border Patrol agents stated that ensuring that such encounters are resolved safely for the individuals involved and other members of the public is their biggest challenge. It might require removing someone in distress from a group of individuals that he or she may be traveling with or keeping him or her calm. When someone is in an extreme state of panic, emotional distress, or anger, officers try to remove the person from the group to prevent a potential incident from escalating quickly.

Operating with Limited Access to Mental Health Resources

Officers and agents also stated that a limited number of mental health professionals available within their components or through local agencies can pose a challenge in helping persons with mental illness receive necessary treatment. As such, they must rely on state and local entities in the area (e.g., law enforcement, hospitals) to provide assistance for individuals. Federal Air Marshals—who provide protection at airports and other transportation modes—we spoke with explained that since they do not have holding facilities to secure individuals with mental illness, they are reliant on local law enforcement and mental health professionals to manage an incident. Officers and agents highlighted the importance of maintaining close relationships with state and local partners and added that trained mental health professionals provide an excellent resource.

In addition, officers and agents in some discussion groups noted there may be training offered by state or local agencies related to understanding and responding to individuals with mental illness that could be leveraged by federal agencies. Officers and agents reported, however, that it can be difficult for the components to find the time and resources to send officers to the trainings. According to USMS officers—who provide security at federal courthouses and oversee transport of federal prisoners—this is particularly challenging in small offices where there may be very few staff.

Frequently Encountering the Same Individuals

Another common challenge noted in discussion groups was that officers and agents repeatedly encounter the same individuals with mental illness. Officers and agents explained that they can sometimes apprehend individuals who are creating a disturbance, but these individuals often cannot be charged with a federal crime. As such, following the apprehension, the officers and agents release these individuals to local or state authorities who may transport them to local providers for a mental health evaluation. Typically, if the local providers determine a commitment is necessary, they will hold these individuals at a hospital or clinic for up to 72 hours.
According to the officers and agents in our discussion groups, many of these individuals return after they are released, and the officers and agents encounter them time and again, with very little that they can do to provide these individuals with assistance. According to the officers and agents, incidents involving frequent encounters with the same individuals can take time away from performing other important activities. Secret Service Uniformed Division officers told us they repeatedly encounter the same individuals with mental illness and know some of these individuals very well. For example, Secret Service officers stated that when performing their duties in patrolling the grounds of the White House, they have had frequent encounters with a woman who believes she has family members living in the White House. The officers have turned her away from the scene on multiple occasions, but she continues to return.

Components Have Some Type of Training, Policies, and Guidance Related to Mental Illness, and Reviews to Enhance Practices Are Underway

DHS and DOJ Components Offer, Receive, or Are Developing Some Type of Training Related to Mental Illness

All of the law enforcement components in our scope offer training directly, receive training through FLETC, or are developing some training on responding to incidents involving individuals with mental illness. Agency and FLETC training includes courses on communication, de-escalation, and suicide prevention (related to federal inmates). Since these components have varying missions and operational needs and interact with the public in different capacities, the nature and scope of this training, as well as the number of courses and the duration of courses offered, varies. For example, BOP's staff—including food service workers and nurses, as well as correctional officers—have daily contact with inmates with mental illness and can act as "first responders" when situations merit. According to BOP officials, training is offered to all staff in all of its institutions on mental health and working with the mentally ill, along with courses on communication, de-escalation, suicide prevention, and use of force. As another example, ATF's agents told us they have less routine contact with individuals with mental illness, but ATF offers a course to its agents on de-escalation concepts and tactics, which addresses responding to incidents involving individuals with mental illness, as well as crisis intervention training to its cadre of crisis negotiators.

Further, some of the components' training is mandatory and offered annually through class instruction or online portals. These courses may be offered to new hires or available to tenured officers. In addition, some components' training courses are delivered as stand-alone sessions, while others may be modules within a larger course exploring other law enforcement topics.

Three DHS operational components in our scope, in addition to FLETC, offered some type of training specifically for their officers and agents. Another one (TSA) has training in development, as of October 2017, on topics related to responding to incidents involving individuals with mental illness. FLETC explained that it provides basic training to all DHS law enforcement officers through one of three basic program categories—Center Basic, Center Integrated Basic, and Agency-Specific Basic—which vary in length.
Two Center Basic training programs include a 2-hour module titled Managing Abnormal Behavior, which covers how to identify common signs of mental disorders (among other things) and how to handle people exhibiting abnormal behavior. Specifically, this module examines basic human behavior that may be classified as abnormal, differentiates between mental disorders, and also covers physical and organic causes that may be related to abnormal behavior with the appropriate officer responses. In addition, FLETC informed us that it has developed scenario-based training in these programs, allowing the officers or agents to develop decision-making skills in situations involving people exhibiting abnormal behavior. See appendix II for more information on FLETC's training programs.

U.S. Secret Service Training

We observed Secret Service training on Protective Intelligence Questioning for First Line Officers, which is offered to Uniformed Division Officers. The course instructor played the role of three different individuals with schizophrenia, bipolar disorder, and sociopathic personality disorder and trained agents on interacting and interviewing subjects who attempt to breach the White House fence.

In addition to this module provided to all DHS agents and officers, the components in our review also offer or are preparing component-specific training courses. Table 2 lists illustrative examples of DHS training. In addition, TSA has developed a mandatory course entitled Awareness Training on Mental Health Conditions to be delivered in the classroom and through scenarios and exercises during fiscal year 2018. This course is designed to introduce Federal Air Marshals to the fundamentals of predominant mental disorders, such as schizophrenia or psychosis.

All of the DOJ components in our review provide some type of training to their officers on topics related to responding to incidents involving individuals with mental illness—as illustrated in table 3.

DHS and DOJ Components Have Existing Policies or Guidance That Addresses Responding to Individuals with Mental Illness

The law enforcement components within our scope at DHS and DOJ have policies or guidance in place that addresses responding to incidents involving individuals with mental illness. Some components' policies or guidance specifically addresses mental illness, while others touch on the issue as part of larger policies on other topics (such as use of force)—as illustrated in table 4.

All DHS and DOJ Components Are Reviewing Policies, Guidance, and Training to Align with Departmental Guidance

DHS Efforts to Review Policies, Guidance, and Training

DHS has guidance in place to help ensure that its components have policies and training aligned with section 504 of the Rehabilitation Act. In 2013 and 2015, respectively, DHS issued a directive and implementing instruction to its components intended to strengthen compliance with section 504. These documents required DHS components to conduct a self-evaluation and prepare a component plan identifying any policies or practices that may result in a qualified individual with a disability being excluded from participation in, or being denied the benefits of, a program or activity.
Department of Homeland Security (DHS) Component Self-Evaluation Tool

The self-evaluation tool that DHS's Office for Civil Rights and Civil Liberties developed requires components to—among other things—describe whether there is an established policy ensuring equal treatment for individuals with disabilities, how the component's personnel and procedures ensure that individuals with disabilities are treated in a nondiscriminatory manner, and the component's process for providing auxiliary aids and services to ensure effective communication. The tool also provides examples of interactions in the areas of customer service, security, and custody activities that would likely be compliant, or possibly noncompliant, with section 504 of the Rehabilitation Act.

In 2016, DHS's CRCL office issued guidance and a self-evaluation tool to DHS components on the steps to take in performing the self-evaluation of their facilities, programs, policies, and practices (to include training). The guidance also addresses the development and execution of the components' plans intended to remedy any areas deemed insufficient in permitting individuals with disabilities—including mental illness—to participate fully in the components' programs and activities. Disability Access Coordinators, who are representatives from each component charged with overseeing their components' responses to DHS Rehabilitation Act guidance, are leading the components' efforts in conducting the self-evaluations.

CRCL set a deadline for components to submit all self-evaluations to CRCL for review by the end of August 2017. As of September 2017, all five components had submitted self-evaluations. CRCL officials explained that as they review self-evaluations, they are looking to see if policies or training for law enforcement officers' and agents' responses to individuals with mental illness have been identified or otherwise addressed. If not, the officials indicated that they will request the components identify and address this topic in their plans for aligning with Rehabilitation Act guidance. The remaining steps in CRCL's effort to review and comment on component plans as of September follow:

December 31, 2017: CRCL provides comments to components on the content of their self-evaluations.

February 28, 2018: The components develop and submit their draft plans for aligning with the Rehabilitation Act guidance.

April 30, 2018: CRCL reviews and provides comments on the components' draft plans.

May 31, 2018: The components address CRCL's comments and submit their final plans for alignment with Rehabilitation Act guidance for approval.

DOJ Efforts to Review Policies, Guidance, and Training

DOJ has directed components to review and implement guidance on addressing individuals with disabilities—including mental illness—and obligations under section 504. Specifically, in January 2017, DOJ's then-Deputy Attorney General issued a memo with attached guidance directing components to review their policies and training and, where necessary, modify or develop policies and training to implement legal requirements and principles related to section 504. This guidance identified, among other things, DOJ's law enforcement components' legal obligations under section 504 as well as the policies and procedures that components must have so that officers and agents can anticipate and plan for encounters with members of the public with disabilities.
For example, the guidance states that law enforcement components must train officers and agents on different types of commonly encountered disabilities; how to identify, without medical or psychological training, analysis, or diagnosis, common characteristics and behaviors most often associated with disabilities; and appropriate responses to the challenges that an encounter with a member of the public with a disability may present. Training for officers and agents in effective communication with members of the public with a mental illness is explicitly referenced in the guidance as well.

To date, officials from DOJ's Office of the Deputy Attorney General (ODAG)—who are overseeing the components' efforts—have maintained communication with the components to confirm that they have begun reviewing their policies and training to identify any deficiencies or necessary enhancements pursuant to the January 2017 guidance. During the course of our review, and in part due to our inquiries, in the fall of 2017 ODAG notified the components that they should complete their reviews by December 2017. ODAG also notified the components that they should begin implementing any new policies or training identified by September 2018.

In addition, a provision of the 21st Century Cures Act—section 14025—requires DOJ to provide direction and guidance to federal law enforcement agencies and federal criminal justice agencies on training programs and improved technologies related to responding to individuals with mental illness, by December 13, 2017. ODAG officials told us that the January 2017 guidance addresses the requirement to provide direction and guidance on training for the DOJ components, but acknowledged that it does not respond to all of the requirements for the Attorney General under section 14025 of the 21st Century Cures Act. In particular, section 14025 requires the Attorney General to provide direction and guidance to federal law enforcement agencies and federal criminal justice agencies beyond DOJ in the areas of specialized and comprehensive training programs to identify and respond to individuals with mental illness. Section 14025 also calls for direction and guidance on the establishment and improvement of computerized information systems to provide timely information related to situations involving individuals with mental illness.

As a result of our questions about whether such efforts would be developed, on December 7, 2017, DOJ sent a letter from the Principal Deputy Assistant Attorney General for the Office of Justice Programs to federal law enforcement partners outlining resources available for federal law enforcement when considering training or procedures appropriate for their missions. Specifically, DOJ sent the letter to executive officers within DOJ, DHS, the Administrative Office of the United States Courts, and other executive departments that DOJ deemed appropriate. Some examples of resources that the letter highlights include (1) the Police-Mental Health Collaboration Toolkit, which provides resources to assist law enforcement agencies in partnering with mental health providers (and is discussed later in this report) and (2) a forthcoming "roadmap" planned for release in 2018 that the Office of Justice Programs and BJA are developing that will help law enforcement agencies as they plan for engagement with mental health entities.
Stakeholders Cited Leading Practices and Tools for Effective Law Enforcement Responses, and Components Have Generally Leveraged Information from Other Knowledgeable Parties

Two Leading Practices and Four Tools Can Enhance Officer Responses to Individuals with Mental Illness

Of the six stakeholders in the field of law enforcement-mental health we interviewed, all six considered the Crisis Intervention Team Model to be a leading practice and five considered the Co-responder Model to be a leading practice—see figure 2. These practices are typically implemented at local and state law enforcement agencies. Nevertheless, certain aspects and associated benefits could be considered in other settings, such as federal law enforcement operations. In addition, stakeholders cited four key tools that may assist law enforcement agencies in responding to individuals with mental illness. These tools can include training guides, summary reports, or model policies, among other things, as shown in table 5.

Components Have Generally Leveraged Information from Other Knowledgeable Parties, and BJA Is Standing Up a Training and Technical Assistance Center

DHS and DOJ law enforcement components generally leveraged information from knowledgeable parties within their departments on efforts to respond to incidents involving individuals with mental illness. To enhance information sharing among DHS components, CRCL has implemented an interagency collaboration mechanism. Specifically, CRCL officials reported that since June 2016 they have led monthly coordination conference calls with component Disability Access Coordinators to collaborate on their respective efforts to complete their self-evaluations. According to the Disability Access Coordinators, these sessions have provided a forum to share ideas and lessons learned across the DHS components. In addition, according to CRCL officials, once their office receives the components' self-evaluations and plans, it aims to disseminate information on lessons learned and effective practices to all the components.

Coordination efforts to leverage information also exist within DOJ. Specifically, through the efforts to review policies and training under the January 2017 guidance and provisions of the 21st Century Cures Act discussed earlier, DOJ's components have reported taking efforts to collaborate with one another and share information on training, best practices, and lessons learned. For example, officials from ATF reported holding meetings with other components to discuss their efforts to implement the January 2017 guidance. Additionally, BJA officials told us they took part in ODAG's working group in early 2016 when the then-Deputy Attorney General's January 2017 guidance was in development. Along with BJA, this ODAG working group included DOJ's law enforcement components and other offices within the department. The working group provided a forum to advise ODAG in developing the January 2017 guidance and to discuss issues surrounding disabilities, which involved responses to individuals with mental illness.

BJA officials told us that they provided components with a compendium of all of BJA's resources available to assist law enforcement's response to incidents involving individuals with mental illness. BJA officials said they later took the most promising of these and folded them into the Police-Mental Health Collaboration Toolkit. Further, BJA officials told us that BJA makes all of the resources it develops, including the Toolkit, publicly available on the BJA website.
According to the officials, these resources are available for all law enforcement agencies, including federal entities, to review and consider implementing as they deem appropriate. In addition to these online resources, which facilitate information sharing, BJA is also planning to release a national CIT curriculum in 2018 that will serve as a resource that can be tailored to reflect mental health training and collaboration under development or underway at the local level. The Office of Justice Programs is supporting a partnership between the IACP and a research organization to deliver the curriculum to law enforcement agencies.

In addition, BJA—as one of DOJ's grant-making entities—is standing up the National Training and Technical Assistance Center to Improve Law Enforcement Responses to Individuals with Mental Health Disorders and Intellectual and Developmental Disabilities. BJA officials reported that in September 2017, BJA selected the awardee to design and operate the center. Once the center is operational, it will benefit state, local, and tribal law enforcement entities. In addition, BJA envisions that the center will facilitate better collaboration between law enforcement agencies and their mental health partners. A BJA official also acknowledged that the center could serve as an additional resource for federal law enforcement agencies to consult as they review their trainings, policies, and guidance relevant to responding to incidents involving individuals with mental illness.

Agency Comments

We provided a draft of this report to DOJ and DHS for their review and comment. The departments did not provide us with formal written comments, but did provide technical comments, which we incorporated as appropriate. We are also sending this report to the appropriate congressional committees and members. In addition, this report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions, please contact Diana Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope and Methodology

This report addresses the following key questions: (1) What challenges, if any, do federal law enforcement officers at selected Department of Homeland Security (DHS) and Department of Justice (DOJ) components face when responding to incidents involving individuals with mental illness? (2) What type of training, policies, and guidance, if any, are in place at selected DHS and DOJ components to prepare federal law enforcement officers for responding to incidents involving individuals with mental illness? (3) What leading practices or tools have relevant stakeholders cited for effective responses to incidents involving individuals with mental illness, and how have DHS and DOJ components leveraged information from other knowledgeable parties?

We focused our review on the training, policies, and guidance put forth by the DHS and DOJ components listed in table 6 below because they comprise nearly all of the federal law enforcement officers in these agencies. To identify challenges that federal law enforcement officers and agents at our selected DHS and DOJ components face when responding to incidents involving individuals with mental illness, we held discussion groups of six to eleven agents or officers within each component in our scope.
We worked with officials at each component to identify officers and agents with varied tenures and experiences. We held semi-structured in-person and telephone discussion groups using a script and set of questions. Discussion groups are not designed to provide generalizable or statistically reliable results; they are instead intended to generate in-depth information about the reasons for the discussion group participants' attitudes on specific topics and to offer insight into their concerns. During the discussion groups, we asked officers and agents what challenges they face when responding to incidents involving individuals with mental illness, among other topics. We moderated each discussion to keep participants focused on the specified issues within discussion time frames. Participants identified challenges when we explicitly asked them to do so, or during the course of the discussion. We took detailed notes on each discussion and documented the perspectives participants raised in each discussion group. We then summarized the information collected and identified common themes.

Because our questions were open-ended and designed to allow participants to discuss any challenges they may have experienced, we cannot determine whether the absence of a particular concern or challenge by a group of officers or agents is an indication that they did not experience the concern or that they did not raise it when asked broadly about the topic. While these participants' perspectives cannot be generalized to their entire component or all law enforcement components, their views provided insights into the challenges federal law enforcement officers and agents face when responding to incidents involving individuals with mental illness. We have relied on the observations gathered during these discussion groups to answer this reporting objective, as the officers and agents are uniquely positioned to speak to their experiences, and any challenges they face, responding to incidents involving individuals with mental illness.

To identify the training, policies, and guidance in place, we reviewed documents from each of our selected law enforcement components, when available, to examine their nature and scope. We further reviewed information on the duration, requirements, and delivery mechanism of the training. We then summarized and verified this training information with each component through email documentation. For the policies, we reviewed the documentation to determine whether it was specific to responding to incidents involving individuals with mental illness or whether mental illness was contained within a larger directive. We also reviewed 2018 budget justification documents for each component in order to identify changes in staffing levels or training plans that might be related to officers' and agents' response to incidents involving individuals with mental illness. We also interviewed officials responsible for the development or delivery of training, policies, or guidance from the components in our scope to gather additional information that could help prepare federal law enforcement officers and agents to respond to incidents involving individuals with mental illness.

In addition, since section 504 of the Rehabilitation Act of 1973, as amended, prohibits discrimination on the basis of disability, which includes mental illness, in federally funded and federally conducted programs and activities, we took steps to understand the section's applicability to federal law enforcement operations.
Specifically, we reviewed departmental guidance related to section 504 and reviewed the selected components' documentation of efforts to review their training, policies, and procedures in accordance with that guidance. We also interviewed officials from the departmental offices overseeing these component efforts—DHS's Office for Civil Rights and Civil Liberties (CRCL) and DOJ's Office of the Deputy Attorney General (ODAG).

To identify leading practices or tools stakeholders cited for effective law enforcement responses to incidents involving individuals with mental illness, we used a multi-stage process. Specifically, we:

1. conducted a search of databases, such as ProQuest and Scopus, and organizational websites, such as those from the Council of State Governments Justice Center (CSG JC) and Police Executive Research Forum (PERF), to identify published work related to law enforcement responses to individuals with mental illness that had been published on or after January 1, 2007 (the last 10 years).

2. reviewed the 96 published research papers and articles that our initial search yielded and then refined our selection criteria to include only those that were literature reviews, meta-analyses, or summary papers published by academics, think tanks and advocacy groups, or government agencies. We reviewed summary articles rather than all the primary research articles to balance breadth, depth, and efficiency. After refining our search, there were 16 documents that met our selection criteria.

3. reviewed the 16 documents to identify any potential leading practices. We determined that a practice was potentially leading if it was found in at least one of the remaining 16 articles and was a law enforcement-mental health program. Using these criteria, we identified two potential leading practices.

4. asked individual and organizational stakeholders to validate whether these were leading practices and to identify any additional leading practices that we might have missed. In order for us to consider an independent researcher as a stakeholder, the individual needed to have (a) authored or co-authored at least 2 of the 16 documents that met our search criteria as outlined earlier and (b) been recommended by another stakeholder. These criteria yielded two independent researchers from whom to solicit views. In order for us to consider an organization as a stakeholder, the organization needed to have either (a) conducted research on law enforcement responses to individuals with mental illness; (b) administered law enforcement-mental health collaborative programs; or (c) launched a national campaign on law enforcement responses to individuals with mental illness. After reviewing the websites of organizations that potentially met these criteria, we selected four organizations from which to solicit views. In addition, we selected individuals within the organizations as knowledgeable stakeholders if they were either (1) recommended by another stakeholder; or (2) managed a law enforcement-mental health program or national campaign.

As a result of these steps, we identified and interviewed six stakeholders (two independent researchers and four organizations) to gather their broad views of the dynamic between law enforcement and individuals with mental illness; to obtain their observations of any practices or tools, such as training guides or reports, that have been used to enhance officer response; and to provide feedback on leading practices.
The six selected stakeholders were:

Amy Watson, Ph.D.: Professor at the Jane Addams College of Social Work, University of Illinois at Chicago.

Melissa Reuland, M.S.: Research Fellow at the Police Foundation and Senior Research Program Manager at Johns Hopkins School of Medicine, Department of Psychiatry.

Council of State Governments Justice Center (CSG JC): a national nonprofit organization that serves policymakers at the local, state, and federal levels from all branches of government. It aims to provide practical, nonpartisan advice and consensus-driven strategies, informed by available evidence, to increase public safety and strengthen communities.

International Association of Chiefs of Police (IACP): a professional association for law enforcement, representing more than 30,000 members in more than 150 countries. IACP aims to advance the law enforcement profession through advocacy, outreach, education, and programs.

National Alliance on Mental Illness: a national grassroots mental health organization dedicated to building better lives for the millions of Americans affected by mental illness.

Police Executive Research Forum (PERF): an independent research organization that seeks to identify best practices on issues such as reducing police use of force; developing community and problem-oriented policing; and evaluating crime reduction strategies.

After reaching out to each researcher and organization, we then sent a follow-up written request to each of them to attempt to achieve consensus on whether or not the two practices we identified through our search—the Crisis Intervention Team (CIT) Model and the Co-responder Model—should be considered leading. We also took note of any tools they mentioned and probed further to understand their origins and intent. We confirmed with all six of the selected stakeholders that the CIT Model met our definition of leading practice and confirmed with five out of the six stakeholders that the Co-responder Model met our definition. Some stakeholders also identified other practices as leading; however, none of those practices had at least two other stakeholders confirm it as a leading practice.

In addition, to determine how DOJ and DHS components leverage information from other knowledgeable parties, such as experts, associations, or colleagues in other components, we reviewed relevant documentation on these efforts, as available. We also interviewed agency officials from the components in our scope who are responsible for the development or delivery of training or policies.

We conducted this performance audit from February 2017 through February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform an audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Description of the Federal Law Enforcement Training Centers (FLETC) Basic Training Programs

FLETC provides basic training to all Department of Homeland Security (DHS) law enforcement officers through one of three basic program categories, which vary in length, described as follows:

Center Basic is a FLETC training program category in which personnel from various agencies are provided with the critical competencies of a specific job, job series, or a group of closely related job series.
FLETC provides all instruction. Training is offered in three basic training programs: the Criminal Investigator Training Program, the Uniformed Police Training Program, and the Land Management Police Training Program.

Center Integrated Basic is a FLETC training program category that provides entry-level law enforcement officers or direct law enforcement support personnel from a single partner organization with the core competencies of a specific job series or a group of closely related job series. FLETC provides all common and basic core foundational instruction (i.e., firearms, physical techniques, etc.). This category of training includes eight specific programs.

Agency-Specific Basic is a training program category designed to provide entry-level law enforcement officers or direct law enforcement support personnel with instruction necessary to meet a single agency's mission-specific basic training needs. Generally, Agency-Specific Basic courses precede or follow a Center Basic training program, with partner organizations providing the majority of the instruction. Agency-Specific Basic covers an additional 59 training programs.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Joy A. Booth (Assistant Director) and Adam Couvillion (Analyst-in-Charge) managed this assignment. Kisha Clark, Eric Hauswirth, Gina Hoover, Susan Hsu, Candace Silva-Martin, Michael Silver, Janet Temko-Blinder, and Adam Vogt made key contributions to this report.
Why GAO Did This Study

Law enforcement encounters with individuals with mental illness may require special training and skills and can sometimes involve volatile situations, risking tragic injuries or even death. The 21st Century Cures Act includes a provision for GAO to review the practices that federal first responders, tactical units, and corrections officers (for the purposes of this study, "law enforcement officers and agents") are trained to use in responding to incidents involving individuals with mental illness.

This report addresses (1) challenges that federal law enforcement officers and agents face; (2) applicable training, policies, and guidance; and (3) existing leading practices, relevant tools, and efforts to leverage information. GAO selected the five DHS and five DOJ law enforcement components (e.g., Secret Service, Federal Bureau of Investigation) that represent the largest concentration of law enforcement officers within the two departments. GAO reviewed the training, policies, and guidance in place, as well as efforts to enhance them, and discussed these matters with knowledgeable officials. In addition, GAO held discussion groups with a nongeneralizable sample of law enforcement officers and agents, selected through component contacts, to discuss their perspectives. GAO also reviewed studies on law enforcement responses to individuals with mental illness to help identify leading practices and tools and interviewed stakeholders, selected through a structured process, to obtain their perspectives.

What GAO Found

Law enforcement officers and agents from the Departments of Homeland Security (DHS) and Justice (DOJ) cited a number of challenges in GAO's discussion groups related to their response to incidents involving individuals with a mental illness. All of the federal law enforcement components in GAO's review either offer, receive, or are developing some form of training for their law enforcement officers and agents that addresses responding to incidents involving individuals with a mental illness. Further, all components have relevant policies or guidance in place, and all are undertaking efforts to enhance their practices in accordance with departmental guidance. Since DHS and DOJ components have varying missions and operational needs and interact with the public in different capacities, the nature and scope of training, as well as the number and duration of courses offered in response to individuals with mental illness, varies; however, the courses generally include elements focusing on de-escalation and communication. In addition, DHS and DOJ both have efforts underway to have components review their training and policies under departmental guidance and plan to begin implementing any changes by 2018.

Stakeholders cited leading practices and tools for effective law enforcement responses, and DHS and DOJ components have generally leveraged information from other knowledgeable parties. For example, the Crisis Intervention Team approach involves training selected law enforcement officers on mental health topics and dispatching those officers on mental health-related calls. While models like this are typically used by state and local law enforcement agencies, their benefits could be considered in other settings such as federal law enforcement. DHS and DOJ officials are also using collaborative mechanisms within their departments, such as conference calls and working groups with officials, that have helped them leverage information from knowledgeable parties.
In addition, DOJ's Bureau of Justice Assistance (BJA), which supports programs and initiatives in the areas of law enforcement, among other activities, has developed and makes publicly available resources such as its Police-Mental Health Collaboration Toolkit. BJA also is working to stand up a national training and technical assistance center to improve law enforcement responses to people with mental illness. While the center is aimed at state, local, and tribal law enforcement, a BJA official acknowledged that it could also serve as an additional resource for federal law enforcement agencies to consult as they review relevant trainings, policies, and guidance on this topic.
Background

Overview of Federal Disaster Response

Federal agencies can respond to a disaster when effective response and recovery are beyond the capabilities of the affected state and local governments. In such cases, the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act) permits the President to declare a major disaster in response to a request by the governor of a state or territory or by the chief executive of a tribal government. Such a declaration is the mechanism by which the federal government becomes involved in funding and coordinating response and recovery activities. At least 30 federal agencies administer disaster assistance programs and activities. Under the National Response Framework, which governs any type of federal disaster or emergency response, the Department of Homeland Security (DHS) is the federal department with primary responsibility for coordinating disaster response. Within DHS, the Federal Emergency Management Agency (FEMA) has lead responsibility and provides three principal forms of funding for disaster recovery—Individual Assistance, Public Assistance, and Hazard Mitigation.

The Individual Assistance Program provides financial assistance directly to survivors for expenses that cannot be met through insurance or low-interest loans, such as temporary housing, counseling, unemployment compensation, or medical expenses.

The Public Assistance Program provides federal disaster grant assistance to state, local, tribal, and territorial governments and certain types of nonprofit organizations for debris removal, emergency protection, and the restoration of facilities.

The Hazard Mitigation Program is designed to help communities prepare for and recover from future disasters. It funds a wide range of projects, such as purchasing properties in flood-prone areas, adding shutters to windows, and rebuilding culverts in drainage ditches.

The Small Business Act also authorizes the Small Business Administration (SBA) to make direct loans to help businesses, nonprofit organizations, homeowners, and renters repair or replace property damaged or destroyed in a federally declared disaster. The Department of Housing and Urban Development (HUD) uses data from FEMA and SBA to make decisions on the amount of Community Development Block Grant Disaster Recovery (CDBG-DR) funding to allocate to affected communities.

History of CDBG-DR

The Housing and Community Development Act of 1974 created the CDBG program to develop viable urban communities by providing decent housing and a suitable living environment and by expanding economic opportunities, principally for low- and moderate-income persons. Program funds can be used for housing, economic development, neighborhood revitalization, and other community development activities. Because the CDBG program already has a mechanism to provide federal funds to states and localities, the program is widely viewed as a flexible solution to disburse federal funds to address unmet needs in emergency situations. When disasters occur, Congress often appropriates additional CDBG funding (CDBG-DR) through supplemental appropriations. These appropriations often provide HUD the authority to waive or modify many of the statutory and regulatory provisions governing the CDBG program, thus providing states with greater flexibility and discretion to address recovery needs. Eligible activities that grantees have undertaken with CDBG-DR funds include relocation payments to displaced residents, acquisition of damaged properties, rehabilitation of damaged homes, rehabilitation of public facilities such as neighborhood centers and roads, and hazard mitigation.
In numerous appropriations from fiscal year 1993 to 2018, Congress provided more than $86 billion in CDBG-DR funds to help states recover from federal disasters. For example, Congress directed CDBG-DR funds toward recovery and rebuilding efforts in the Gulf Coast after Hurricanes Katrina, Rita, and Wilma in 2005; New York after the September 11th terrorist attacks in 2001; North Dakota, South Dakota, and Minnesota after the floods in 1997; Oklahoma City after the 1995 bombing of the Alfred Murrah Building; Southern California after the 1994 Northridge earthquake; and Florida after Hurricane Andrew in 1992. As of January 2019, HUD was overseeing 106 CDBG-DR grants totaling more than $54 billion.

CDBG-DR Funds Allocated to 2017 Grantees

Once Congress appropriates CDBG-DR funds, HUD publishes notices in the Federal Register to allocate the funding appropriated to affected communities based on unmet need, and to outline the grant process and requirements for the grantees' use of the funds. In 2018, HUD allocated the vast majority of the 2017 funds to four agencies: Puerto Rico's Department of Housing (Departamento de la Vivienda), the Texas General Land Office, the U.S. Virgin Islands Housing Finance Authority, and Florida's Department of Economic Opportunity. Table 1 shows the CDBG-DR funding that HUD had allocated to the 2017 grantees as of February 2019 and the remaining funds to be allocated. The funding was allocated in two portions, one in February 2018 and one in August 2018.

The nearly $33 billion in funding that Puerto Rico, Texas, the U.S. Virgin Islands, and Florida are to receive for recovery from Hurricanes Harvey, Irma, and Maria is almost 60 times more than the total amount of traditional CDBG funds they received in the last 5 years (see table 2). The 2017 CDBG-DR funding that Puerto Rico, Texas, and Florida received also greatly exceeded their most recent prior CDBG-DR grants. In 2008, Puerto Rico was allocated approximately $30 million in CDBG-DR funds in response to Hurricane Ike. Between 2016 and 2017, Texas was allocated approximately $313.5 million in CDBG-DR funds in response to floods that occurred in 2015 and 2016. In 2016, Florida was allocated approximately $117.9 million in CDBG-DR funds in response to Hurricanes Hermine and Matthew. The U.S. Virgin Islands had not previously received CDBG-DR funds.

Administration of CDBG-DR Funds

HUD's Office of Community Planning and Development (CPD) administers the traditional CDBG program and CDBG-DR funds. Before 2004, existing CPD staff administered CDBG-DR. In 2004, HUD established the Disaster Recovery and Special Issues Division within CPD's Office of Block Grant Assistance to manage large CDBG-DR grantees with allocations of $500 million or more. CPD field office staff generally manage all other grantees.

Other HUD officials are also involved with CDBG-DR, including the Departmental Enforcement Center and Office of Policy Development and Research. The Departmental Enforcement Center works with several of HUD's program areas, including CPD, to ensure that federally funded programs operate according to program guidelines and regulations. For example, center staff help CPD review grantees' financial processes and procedures. The Office of Policy Development and Research maintains current information on housing needs, market conditions, and existing programs and conducts research on community development issues. Its staff use this information to help CPD award CDBG-DR funds.
All Grantees Have Signed Grant Agreements but Need to Take Additional Steps before Funds Reach Disaster Victims As of January 2019, all four grantees had entered into grant agreements with HUD for their initial 2017 CDBG-DR funds, but they needed to take additional steps before disbursing funds to individuals affected by the 2017 hurricanes. According to the February 2018 Federal Register notice allocating the initial $7.4 billion in CDBG-DR funds, grantees were required to take a number of steps before they could enter into a grant agreement with HUD and begin expending funds (see fig. 1). These steps had associated deadlines, which the four grantees generally met. The steps grantees were required to take before they could enter into a grant agreement included the following: Financial processes and procedures. Grantees were required to document their financial controls, procurement processes, and grant management procedures (including those for preventing the duplication of benefits, ensuring timely expenditures, and preventing and detecting fraud, waste, and abuse). By the end of September 2018, HUD had certified that all four grantees had proficient financial controls, procurement processes, and grant management procedures. Implementation plan. Grantees were required to submit an implementation plan that describes their capacity to carry out the recovery and how they will address any capacity gaps. By the end of September 2018, HUD had approved the implementation plans and capacity assessments of all four grantees. Action plan. Finally, grantees were required to submit an action plan for disaster recovery that includes an assessment of unmet needs for housing, infrastructure, and economic revitalization and a description of activities intended to meet these needs. By the end of July 2018, all four grantees had approved action plans. Once these steps were completed, HUD and the grantees could sign grant agreements, and the grantees could begin drawing down funds. All four of the grantees had signed grant agreements with HUD by the end of September 2018. The February 2018 Federal Register notice required grantees to begin drawing down funds by August 13, 2018, but a HUD official told us that the grantees were unable to meet this requirement because HUD had not yet finalized an agreement with three grantees by that date and had just entered into a grant agreement with Florida. The grant agreements require grantees to expend their entire CDBG-DR allocations on eligible activities within 6 years of signing their grant agreements. According to HUD officials, this requirement has been included in grant agreements since 2015 to help speed up the expenditure of funds. (As discussed in the last section of this report, some CDBG-DR grantees have been slow to expend their funds.) As of January 2019, the grantees had generally not drawn down funds for individuals affected by the 2017 hurricanes because they were designing and setting up the activities to assist these individuals. Specifically, as of January 2019, Texas had drawn down approximately $18 million and Florida had drawn down approximately $1 million of their allocations generally for administrative and planning expenses. The other two grantees had not drawn down any of their February 2018 allocations (see table 3). As of the end of 2018, the grantees were taking steps to design and set up the activities approved in their action plans and planned to implement activities in stages. Florida.
On September 24, 2018, Florida opened the registration period for a program that provides rehabilitation or replacement assistance to owner-occupied homes and rental properties impacted by Hurricane Irma. According to Florida officials, residents have until March 29, 2019, to register. The purpose of the registration process is for Florida to evaluate the potentially eligible population. According to Florida officials, Florida began taking applications from registrants on November 27, 2018, and staff were conducting eligibility reviews on completed applications as of late December 2018. Puerto Rico. Puerto Rico officials said they planned to stagger the implementation of their approved CDBG-DR activities. They would begin with activities they considered to be critical, such as providing assistance for the rehabilitation, reconstruction, or relocation of owner-occupied units and gap financing for properties being developed with Low-Income Housing Tax Credits. Officials said they planned to begin taking applications by the end of calendar year 2018 or early 2019 but that the start dates depended on HUD's approval of the activities' policies and procedures. Texas. On July 23, 2018, Texas began taking applications for a program that provides assistance for the rehabilitation, reconstruction, and new construction of affordable multifamily rental housing. Texas officials said they expected to begin signing agreements with selected developers early in calendar year 2019. In addition, on November 27, 2018, Texas began taking applications for a program that provides assistance for the rehabilitation and reconstruction of owner-occupied single-family homes. In late December 2018, Texas officials told us they were reviewing the more than 1,500 completed applications for program eligibility. U.S. Virgin Islands. The U.S. Virgin Islands planned to first implement two housing programs that provide assistance for the rehabilitation or reconstruction of storm-damaged residential owner-occupied units and for the construction of new homes for first-time homebuyers. U.S. Virgin Islands officials stated that as of November 2018, they were working on policies and procedures for the subrecipients that will help administer these programs and that they planned to launch both programs early in calendar year 2019. The U.S. Virgin Islands also planned to provide assistance for the rehabilitation or construction of affordable rental housing units but did not provide information on when it planned to implement this activity. In addition, officials said they anticipate funding some infrastructure projects in early 2019. Grantees Have Taken Some Steps to Establish Financial Processes and Assess Capacity and Unmet Needs Grantees Generally Used Existing Financial Processes and Procedures for Certification To meet the requirement for certification of financial controls, procurement processes, and grant management procedures (financial processes and procedures), all four 2017 grantees told us that they generally used processes and procedures that were already in place to administer prior CDBG-DR grants or other HUD funds. For example, Texas and Florida asked HUD to generally rely on the certification and supporting documentation of financial processes and procedures that they had submitted for previous CDBG-DR grants. U.S. Virgin Islands officials told us they generally relied on the financial processes and procedures they have in place for the administration of the traditional CDBG program.
Similarly, Puerto Rico officials told us that they relied on existing financial processes and procedures they have in place for other federal funds, including other HUD and FEMA funds. We and the HUD OIG have ongoing or completed work on controls over CDBG-DR funds. We have ongoing work examining, among other things, HUD’s internal control plan for the 2017 appropriated disaster funds, including CDBG-DR funds. In response to a congressional request, the HUD OIG reviewed the ability of the grantees in Texas and Florida to follow applicable federal regulations and requirements. In its reports on Texas and Florida, the HUD OIG identified concerns with grantees’ financial processes and procedures. Texas. In a May 2018 report, the HUD OIG stated that Texas had prior audit findings related to procurement that the agency should avoid repeating. For example, for a prior CDBG-DR grant, the HUD OIG found that Texas did not show how its procurement process was equivalent to federal requirements. Among other things, the HUD OIG recommended that HUD require Texas to ensure that its procurement and expenditure policies and procedures are implemented and working as designed. Texas responded that it would clarify the procurement processes in its financial submission if needed. Florida. In September 2018, the HUD OIG found weaknesses in Florida’s controls over its drawdown of funds and classification of costs. For example, it found that for a prior CDBG-DR grant, Florida drew down more funds than it expended on administrative and planning costs, and that the grantee charged $30,000 to a prior CDBG-DR grant that should have been charged to its 2017 CDBG-DR grant. The report acknowledged that Florida had taken steps to address this concern, but the OIG recommended, among other things, that the grantee establish adequate financial controls to ensure that its disaster funds are properly classified and allocated to the correct grant. Florida agreed with the recommendation, noting that it had corrected the discrepancy the HUD OIG identified during the audit and stating that it would continue to improve its internal controls. In addition, Florida officials told us that they have worked with HUD staff to ensure that financial and programmatic staff are trained to correctly classify costs and verify that they are accurately allocated and recorded. According to HUD OIG officials, they plan to begin similar reviews of Puerto Rico and the U.S. Virgin Islands in early calendar year 2019. Grantees Made Organizational Changes to Increase Capacity and Identified Significant Staffing Needs The February 2018 Federal Register notice required grantees to assess staff capacity and identify necessary personnel for the administration of CDBG-DR funds. To increase their capacity to manage the 2017 CDBG-DR funds, grantees made changes to their organizational structure. Florida. The Florida Department of Economic Opportunity created a disaster recovery office to administer the 2017 CDBG-DR grants because, according to Florida officials, the grants were significantly larger than its traditional CDBG grant and prior CDBG-DR grants. Puerto Rico. The Puerto Rico Department of Housing, which had not administered prior CDBG or CDBG-DR funding, created a disaster recovery division to manage its CDBG-DR allocation. Texas. The Texas General Land Office, the lead state agency for long-term disaster recovery, established a single point of contact for its subrecipients and created a planning team. 
U.S. Virgin Islands. The U.S. Virgin Islands Housing Finance Authority, which administers the territory's traditional CDBG program, created a division to manage its CDBG-DR allocation. Grantees still need to fill many vacant positions to administer the 2017 CDBG-DR funds. All of the grantees planned to hire more in-house staff (see table 4). As of December 2018, about 48 percent of the needed full-time equivalent positions at the four grantees were vacant—with vacancies at individual grantees ranging from about 15 percent for Texas to about 78 percent for Puerto Rico. These positions will be funded with CDBG-DR funds. All four 2017 grantees also planned to use contractors to help fill gaps in expertise and operational capacity. Florida. According to Florida officials, Florida had hired three vendors to help administer its CDBG-DR funds as of December 2018. They stated that the first vendor employed two staff to conduct an organizational study for Florida to help improve staffing efficiencies, the second vendor had 250 staff working to implement Hurricane Irma programs and activities, and the third vendor supplied five project management staff to support CDBG-DR activities. The officials also stated that Florida plans to procure third-party monitoring services, contract staff services, and additional support to meet audit and compliance requirements. Puerto Rico. Puerto Rico hired two contractors to help it set up the grant. Specifically, 20 contract staff assisted Puerto Rico with development of its action plan. Puerto Rico also planned to hire vendors to help administer the territory's CDBG-DR activities, but had not yet determined the number of contract staff needed. Texas. According to Texas officials, Texas hired eight vendors to, among other things, administer the state's housing assistance activities and track the progress of its CDBG-DR activities. As of December 2018, these vendors had 192 staff. U.S. Virgin Islands. According to a U.S. Virgin Islands official, the U.S. Virgin Islands hired a contractor to help set up the grant, including assisting with the development of its action plan. The official also told us that the U.S. Virgin Islands planned to hire contractors to help support the implementation of its CDBG-DR activities but it had not yet determined the number of contract staff needed. The HUD OIG has raised concerns about the capacity of two of the 2017 CDBG-DR grantees. In a May 2018 report, the HUD OIG found that Texas did not have enough staff to adequately administer its 2017 CDBG-DR funds. At the time of its review, the HUD OIG found that 37 percent of the grantee's full-time positions were vacant. Texas responded that it had been actively determining optimal staffing levels and hiring timeframes, but did not have a reserve budget to hire staff before receiving its 2017 allocation. Similarly, in a September 2018 report, the HUD OIG recommended that Florida continue to fill its vacancies and assess staffing resources as it prepared for additional disaster funds. Florida accepted the recommendation and stated that it was taking steps to assess and address staffing needs. As discussed in the last section of this report, building the capacity needed to manage large grants has historically been a challenge for CDBG-DR grantees. Grantees Generally Used the Same Data as HUD to Estimate Unmet Housing Needs, but Their Methodologies Varied Grantees were also required to submit an action plan for disaster recovery that includes an assessment of unmet needs in housing, infrastructure, and economic revitalization.
The purpose of these unmet needs assessments was to help grantees understand the type and location of community needs and to target their CDBG-DR funds to those areas with the greatest need. We focused on grantees' estimates of unmet housing needs because the February 2018 Federal Register notice required grantees to primarily use their initial CDBG-DR allocation to address their unmet housing needs. HUD's Estimation of Unmet Needs Before grantees developed their unmet needs assessments, HUD estimated their unmet needs to allocate the appropriated CDBG-DR funds. HUD calculated unmet housing needs as the number of housing units with unmet needs times the average estimated cost to repair those units less repair funds already provided by FEMA and SBA. HUD relied on FEMA Individual Assistance data to estimate the number of affected owner-occupied and rental units and used SBA data on disaster loans to estimate repair costs. HUD developed five damage categories to determine the level of damage housing units sustained: minor-low, minor-high, major-low, major-high, and severe. Because both acts that appropriated the CDBG-DR funds require HUD to allocate funding to the "most impacted and distressed areas," the agency only included owner-occupied and rental units that had major or severe damage in its estimate of unmet housing needs. To determine the average cost of repairs for owner-occupied and rental units in each damage category, HUD used SBA data rather than FEMA data. HUD said SBA damage assessments better reflect the full cost to repair a unit because the assessments are based on the total physical loss to the unit. In contrast, FEMA assesses damage based on the cost to make the unit habitable, and therefore its estimates are generally lower than SBA's estimates. To estimate unmet needs, HUD then multiplied the number of units it identified as having major-low, major-high, and severe damage by corresponding SBA average cost-of-repair amounts (see table 5). A simplified illustration of this calculation appears after the grantee overview below. To estimate the needs of owner-occupied and rental units for their unmet needs assessments, the four grantees generally used FEMA and SBA data but used different methodologies to analyze these data. Below is an overview of the methodology each of the 2017 CDBG-DR grantees used to estimate housing needs for owner-occupied and rental units. Florida. Florida included all SBA applicants and FEMA applicants with units that incurred minor damage as defined by HUD's two lowest damage categories, neither of which was included in HUD's estimate. Florida did not use HUD repair estimates; instead, it developed its own estimates using SBA data. Puerto Rico. Like Florida, Puerto Rico included all SBA applicants and FEMA applicants with minor damage. Puerto Rico also included an estimate of units with "potential unmet needs." Puerto Rico calculated its own cost-of-repair estimates based on SBA data. Texas. Texas' methodology was the same as HUD's methodology. Specifically, Texas included FEMA applicants with major and severe damage and used the repair estimates HUD provided in the February 2018 Federal Register notice. U.S. Virgin Islands. The U.S. Virgin Islands included units that FEMA did not inspect and units with minor damage, neither of which HUD included in its estimate. The U.S. Virgin Islands used estimates HUD provided in an April 2018 memorandum to determine the repair costs.
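To make HUD's calculation concrete, the following sketch applies the formula described above: the number of units in each qualifying damage category is multiplied by the SBA-derived average cost to repair units in that category, and repair funds already provided by FEMA and SBA are then subtracted. The sketch is illustrative only; the damage category names follow HUD's five levels, but the unit counts, repair costs, and prior-assistance amount are hypothetical and are not the values shown in table 5.

    # Illustrative sketch of HUD's unmet housing needs calculation.
    # All unit counts and dollar figures are hypothetical; HUD's actual
    # average repair costs are derived from SBA damage assessments.
    AVG_REPAIR_COST = {
        "major-low": 35_000,
        "major-high": 65_000,
        "severe": 95_000,
    }

    def unmet_housing_need(units_by_category, prior_fema_sba_funds):
        """Multiply units by average repair cost for the major and severe
        categories, then subtract repair funds FEMA and SBA already provided."""
        gross_need = sum(
            count * AVG_REPAIR_COST[category]
            for category, count in units_by_category.items()
            if category in AVG_REPAIR_COST  # minor-low and minor-high excluded
        )
        return gross_need - prior_fema_sba_funds

    # Hypothetical grantee: damaged owner-occupied and rental units by category.
    units = {
        "minor-low": 12_000,   # excluded under HUD's methodology
        "minor-high": 8_000,   # excluded under HUD's methodology
        "major-low": 5_000,
        "major-high": 3_000,
        "severe": 1_000,
    }
    print(unmet_housing_need(units, prior_fema_sba_funds=150_000_000))
    # 5,000 x 35,000 + 3,000 x 65,000 + 1,000 x 95,000 - 150,000,000 = 315,000,000

The same structure also shows why the three grantees' estimates exceeded HUD's: widening the set of damage categories or applicant pools counted in the summation raises the gross need before any prior assistance is subtracted.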
Because three of the grantees tailored their unmet needs estimates to their individual planning purposes, the estimates do not provide comparable measures of unmet housing needs and cannot appropriately be aggregated. Although we did not conduct an extensive assessment of the estimates, we performed some limited analysis to illustrate the impact of some of the grantees' methodological decisions. The three grantees' decisions expanded the definition of unmet housing needs, which resulted in higher estimates compared to HUD's methodology. Including FEMA applicants with minor damage. Florida, Puerto Rico, and the U.S. Virgin Islands included FEMA applicants with minor damage that fell into HUD's two lowest categories of damage. Including these applicants increased the needs estimate for the U.S. Virgin Islands by approximately $431 million. Our analysis showed that including these applicants increased Puerto Rico's needs estimate by at least $1.5 billion. Grantees said that including FEMA applicants with the two lowest levels of damage provided a more accurate representation of the needs for owner-occupied and rental units. For example, Puerto Rico's action plan states that these applicants were unlikely to receive other federal or local assistance to repair their homes, and therefore would have needs. HUD officials told us that grantees have the discretion to use allocated funds to assist applicants with less severe damage as long as those individuals have unmet needs. Including SBA applicants that were denied assistance. Florida and Puerto Rico included SBA applicants whose units were not inspected because they were denied disaster loans, although the extent to which these units sustained damages was unknown. Florida estimated approximately $1.8 billion and Puerto Rico approximately $1.5 billion in housing needs for these SBA applicants. Florida and Puerto Rico officials told us that they included these applicants because being denied did not necessarily mean that these applicants did not experience losses. For example, SBA applicants can be denied loan assistance based on their inability to repay, despite potentially having unmet needs. Similarly, HUD officials explained that they consider applications that SBA has denied as a potential indicator of unmet needs. Including FEMA applicants without verified losses. Florida included FEMA applicants without verified losses and the U.S. Virgin Islands included units that FEMA did not inspect. Absent verified losses and inspections, they assumed the FEMA applicants had some level of unmet needs. Florida's action plan states that it included FEMA applicants without verified losses, but the plan did not include the number of such applicants or their associated housing needs. The U.S. Virgin Islands' action plan states that it included 3,774 such FEMA applicants in its estimate of damaged homes, but the plan did not include the associated repair costs. According to Florida and U.S. Virgin Islands officials, they included these applicants to account for what they determined was underrepresentation of impacted populations. According to HUD officials, grantees typically conduct their own inspections or rely on SBA inspections in an effort to capture more comprehensive damage estimates. Including owner-occupied and rental units with "potential unmet needs." Puerto Rico included an estimate of "potential unmet housing needs" to account for owners and renters that did not apply to FEMA and FEMA applicants without verified losses.
Absent applications or verified losses, Puerto Rico assumed that nonapplicants and applicants without verified losses had some level of unmet needs. Puerto Rico estimated these potential unmet needs to be approximately $5.8 billion. HUD officials told us that there were a significant number of FEMA applicants who were denied in Puerto Rico due to an inability to prove property ownership. In general, HUD officials stated that the methodologies HUD and grantees used to develop unmet needs estimates did not need to be the same. This is because HUD's estimate of unmet needs was used to allocate funds to grantees and grantees' estimates were used to target their funding. They also noted that there was more than one way to determine unmet needs and that it was acceptable for grantees to use different methodologies to reflect their local circumstances. Although grantees' estimates of unmet needs do not affect the amount of CDBG-DR funds that they are allocated, the flexibility grantees have in defining unmet needs increases the importance of HUD's review of these estimates. As discussed in the next section of this report, HUD's review of these estimates was limited. HUD's Review of Grantees' Initial Steps Was Limited, and It Has Not Developed Monitoring or Workforce Plans HUD Does Not Have Adequate Guidance for Reviewing Financial Processes and Procedures and Assessments of Capacity and Unmet Needs HUD lacks adequate guidance for its staff to use when determining the adequacy of a grantee's financial processes and procedures and assessments of its capacity and unmet needs. Financial processes and procedures. HUD staff use a checklist to assess a grantee's financial controls, procurement processes, and procedures for preventing duplication of benefits and detecting fraud, waste, and abuse of funds (financial certification checklist). The questions on this checklist focus on whether certain information required in the February 2018 Federal Register notice was included. For example, as figure 2 shows, the financial certification checklist asks HUD staff to determine whether a grantee has attached its procedures for preventing duplication of benefits and verifying all sources of disaster assistance received. However, it does not ask HUD staff to assess the adequacy of the grantee's approach for verifying all sources of disaster assistance. In addition, the financial certification checklist, which is framed as a series of "yes" or "no" questions, does not include guidance for the HUD reviewer to consider in answering them. For example, the certification checklist asks whether the grantee has standards to maintain "adequate control" over all CDBG-DR funds but does not define what it means to maintain adequate control. HUD officials told us that HUD reviewers do assess the quality of grantees' submissions during their reviews. They stated that they request additional information from grantees if they deem the information initially submitted to be incomplete or unclear. However, in the absence of additional guidance for HUD staff, it is unclear how they assess quality on a consistent basis. Capacity assessments. HUD's checklist for reviewing management capacity (capacity checklist) assesses whether the grantee included certain information required in the February 2018 Federal Register notice. For example, the capacity checklist asks whether a grantee provided a timeline for addressing the gaps it identified in its capacity assessment.
However, it does not require the reviewer to evaluate the adequacy of the assessment or the timeline (see fig. 3). Similarly, the capacity checklist asks whether the grantee planned to designate personnel for program management, procurement, monitoring, and other functions but does not require the reviewer to assess the adequacy of the number of personnel. One question asks whether the personnel will be “in proportion to applicant population” but does not cite the required proportion. As discussed above, HUD officials told us that HUD reviewers do assess the quality of grantees’ submissions during their reviews, but in the absence of additional guidance for staff, it was unclear how they determine that documents are adequate. Unmet needs assessments. HUD staff also use a checklist to assess the grantees’ action plans, including their assessments of unmet needs (see fig. 4). The questions ask the reviewer to determine whether the needs assessment covers housing, infrastructure, and economic revitalization and to estimate the portion of those three areas to be funded from other sources, as required in the February 2018 Federal Register notice. However, the reviewer is not required to evaluate the reliability of the grantees’ assessments or estimates, and HUD does not provide additional guidance for staff to help assess the reliability of the information provided. HUD officials said they have other documentation that supplements the checklists. However, we found that documentation lacked sufficient information for assessing the submissions. For example: February 2018 Federal Register notice. According to HUD officials, the notice is the primary source of guidance for HUD reviewers. They stated that the notice defines “proficient financial processes and procedures.” However, the February 2018 notice states that grantees must submit certain audits, financial reports, and their financial standards but does not describe how HUD reviewers should assess the quality of those financial standards. In addition, the vague language in the checklist often mirrors the February 2018 notice. For example, neither document tells staff how to determine whether “the overall effect of the standards provide for full and open competition.” Regulations for the traditional CDBG program. According to HUD officials, reviewers can consult existing federal regulations governing the development and review of plans required under the traditional CDBG program when reviewing grantees’ action plans, including unmet needs assessments. However, both the February 2018 and August 2018 Federal Register notices waive the requirement for an action plan under the CDBG regulation. The notices instead require CDBG-DR grantees to submit an action plan for disaster recovery specifically that includes an unmet needs assessment. Another reason HUD cited for not having additional guidance is the reviewers’ years of professional experience. A senior HUD official said the staff members who reviewed Florida and Texas’ submissions were senior CPD staff who had been CDBG-DR grant managers since at least 2014. The same senior official, a CPD specialist since 1998, told us that she reviewed the submissions from Puerto Rico and the U.S. Virgin Islands. However, experienced staff may leave their positions, while the guidance for reviewing grantees’ submissions would remain. The acts appropriating CDBG-DR funds for the 2017 disasters require HUD to certify that a grantee has proficient financial controls, processes, and procedures. 
In addition, both acts require grantees to submit action plans to the HUD Secretary. The February 2018 Federal Register notice requires that grantees demonstrate that they have capacity to effectively manage the CDBG-DR funds and that their action plans include an assessment of unmet needs. Further, federal internal control standards state that management should use quality information to achieve the entity's objectives. For example, management is to obtain relevant data from reliable internal and external sources in a timely manner based on the identified information requirements. Federal internal control standards also state that management should (1) internally communicate the necessary quality information to achieve the entity's objectives and (2) establish and operate monitoring activities to monitor the internal control system and evaluate the results. As discussed in the last section of this report, prior grantees' lack of adequate financial processes and procedures and capacity led to challenges, such as improper payments and the need to acquire additional expertise. Moreover, all four grantees' initial assessments showed that their CDBG-DR allocations will not meet their unmet needs. Reliable estimates of the needs that will remain unmet after the appropriated $35.4 billion is expended are important because Congress could use them to determine whether further appropriations are necessary. Further, grantees need accurate information to appropriately address unmet needs. Without additional guidance for HUD staff to use in assessing the quality of grantees' submissions, HUD cannot provide reasonable assurance that its reviews of these submissions are thorough and consistent. HUD Lacks Documentation Supporting Its Conclusion That Grantees' Submissions Were Sufficient In their reviews of the 2017 grantees' financial processes and procedures and assessments of capacity and unmet needs, HUD's reviewers did not document their conclusions. According to a HUD official, the final completed checklists are the official records of the agency's certification of grantees' financial processes and procedures and its review of capacity and unmet needs assessments. However, the checklists do not require a description of the basis for answering "yes" to a question. The checklists require HUD reviewers to describe the basis for their conclusion for "no" answers only. As a result, the final checklists that we reviewed, which showed a "yes" to each question, did not explain how the reviewer concluded that grantees' submissions were sufficient. A HUD official told us that outside of the official administrative record, there is documentation on the agency's communication with grantees. However, because this documentation was not readily available for all four grantees, HUD provided examples of written feedback given to one grantee. Our review of this documentation showed variation in the extent to which the reviewer requested information about the quality of the information provided. In written feedback that HUD provided to the grantee on its capacity assessment, the HUD reviewer asked for a more comprehensive analysis of staffing needs and a rationale for the number of staff to be assigned to each function. Yet, other feedback HUD provided focused on whether certain information was included rather than on the quality of the information.
For example, when reviewing the grantee's financial processes and procedures, the reviewer pointed out that the grantee had not shown that it had addressed prior audit findings. In another instance, the reviewer asked the grantee to include additional information in the section of its action plan on unmet needs, but did not focus on the grantee's methodology. According to a HUD official, this documentation was not readily available for each grantee because it is not part of the official administrative record. Even if readily available, such documentation likely would not substantiate HUD's conclusions that grantees' submissions and estimates were sufficient. CPD's monitoring handbook states that staff must document the basis for their conclusions during a monitoring review because "monitoring conclusions must be clear to persons unfamiliar with the participant, program, or technical area." In addition, federal internal control standards require management to design control activities to achieve objectives in response to risk. One example of a control activity is clearly documenting transactions and other significant events in a manner that allows the documentation to be readily available for examination. According to a HUD official, documentation is limited and not readily available because CPD staff have many responsibilities in addition to the review of grantees' submissions, such as assisting in the monitoring of prior CDBG-DR grants. However, it is important that HUD prioritize the documentation of its reviews. Without documenting the basis for its conclusions when reviewing grantees' submissions, stakeholders and decision makers lack information on why HUD concluded that grantees' financial processes and procedures and capacity and unmet needs assessments were adequate. HUD also misses an opportunity to leverage this information later to mitigate risk and inform its monitoring of grantees. HUD Does Not Have a Comprehensive Monitoring Plan for the 2017 CDBG-DR Grants HUD determined that the 2017 CDBG-DR grants posed high risk due to the size of the grants, but did not have a comprehensive plan to monitor these grants. First, HUD had not identified any unique risk factors associated with the 2017 grants that required additional attention. For example, HUD had not analyzed the potential risk of awarding a large grant to an entity that had little or no experience administering CDBG-DR funds. The agency also had not used any potential risks identified during its reviews of grantees' financial processes and capacity assessments to inform its monitoring. Second, although HUD had plans to conduct on-site monitoring, it had not defined the scope of this monitoring. HUD provided a monitoring schedule that showed that the agency intended to conduct two monitoring visits and two technical assistance visits each to Florida, Texas, and the U.S. Virgin Islands in fiscal year 2019. Although the schedule shows only one monitoring visit for Puerto Rico, HUD officials told us that they also plan to conduct two monitoring visits and two technical assistance visits to Puerto Rico. Regarding the scope of monitoring visits, HUD officials said that staff consider where the CDBG-DR grantee is in the recovery process when identifying areas to be reviewed during monitoring. For example, they said that they tend to focus on grantees' efforts to hire staff and develop policies and procedures during the first year and on grantees' implementation of specific activities in the second year.
Although HUD had these tentative plans for the early years of the grants, the agency had not documented them. According to HUD officials, as of November 2018 HUD had not developed a comprehensive monitoring plan because it had not yet completed the annual risk analysis process that it uses to determine the extent of monitoring for programs such as CDBG and CDBG-DR. According to HUD officials, this process is undertaken during the first quarter of each fiscal year. HUD guidance states that the purpose of this analysis is to provide the information needed for HUD to effectively target its resources to grantees that pose the greatest risk to the integrity of CDBG-DR, including identification of the program areas to be covered and the depth of the review. In comments on the draft report, HUD stated that it had completed its risk analysis and updated its monitoring schedule to include all the grantees it planned to visit in fiscal year 2019. HUD also stated that it had begun identifying monitoring strategies for all monitoring reviews that would occur from March 2019 through May 2019 and would develop the remaining strategies after the initial monitoring reviews. However, the risk analysis is of limited usefulness for new CDBG-DR grants because, based on HUD guidance, the risk analysis assumes that the grant has been active for several years. For example, a reviewer is to select the high-risk category if, within the past 3 grant years, the grantee had received two or more findings that are open, overdue, and unresolved; sanctions have been imposed on the grantee; or the grantee had not been monitored—all considerations that currently are moot for the 2017 grantees. Further, the risk analysis does not formally incorporate information HUD gleaned from its reviews of grantees' financial processes and capacity assessments. For example, the risk analysis worksheet does not include questions about the extent to which HUD's review of a grantee's procurement processes and procedures raised any concerns. According to the February 2018 Federal Register notice, HUD will undertake an annual risk analysis and conduct on-site monitoring. Further, federal internal control standards state that management should establish and operate monitoring activities and evaluate results. The standards suggest that as part of monitoring, management identify changes that have occurred or are needed because of changes in the entity or environment. However, HUD does not have a monitoring plan that identifies the specific risk factors for each grantee and outlines the scope of its monitoring. A comprehensive monitoring plan would help HUD ensure that its oversight of grantees' compliance with grant requirements focused on grantees' areas of greatest risk. HUD Has Not Conducted Workforce Planning to Determine the Staff It Needs to Oversee CDBG-DR HUD has not conducted workforce planning to determine the number of staff it needs to monitor the large 2017 CDBG-DR grants and other outstanding grants. The growth in the number and dollar amount of CDBG-DR grants has created workforce challenges for HUD. The more than $35 billion in CDBG-DR funds Congress appropriated for the 2017 hurricanes was almost as much as HUD's entire budget for fiscal year 2018. In addition, Congress appropriated additional CDBG-DR funds to help with recovery from Hurricanes Florence and Michael in 2018 and will likely appropriate more. As of October 2018, CPD's Disaster Recovery and Special Issues Division had 24 permanent full-time staff.
However, division officials told us that staffing had not increased at a rate commensurate with the increase in CDBG-DR grants due to budget constraints. Although the 2017 grants would be their priority for monitoring, they said that they still had a responsibility to oversee other grants. HUD officials told us that they planned to hire additional staff for the Disaster Recovery and Special Issues Division but that they had not finalized their hiring plans. In October 2018, a CPD official told us that in fiscal year 2018 HUD approved the hiring of 17 limited-term staff to be paid with supplemental disaster funds appropriated for HUD salaries and expenses. Division officials also told us that HUD had approved two permanent hires in fiscal year 2018, a financial analyst and a team leader for oversight of the Puerto Rico grantee. For fiscal year 2019, the CPD official said HUD was considering hiring five additional permanent staff for the division, but the division had estimated that, even if those positions were approved, it would need five more staff. In November 2018, division officials said that the number of additional staff that we were told had been approved for fiscal year 2018 seemed high and that as of November 2018, HUD had not finalized its hiring plans for the division. In comments on the draft report, HUD stated that the division had developed a staffing plan to address long-term oversight and management of the CDBG-DR portfolio and, as of March 1, 2019, expected to fill 14 positions over the next 3 months. In addition, it stated that the agency had identified an approach to secure 20 additional positions to support CDBG-DR, and expected the agency's financial and human capital officials to approve it in the next few weeks. Federal internal control standards state that management should design control activities, including management of human capital, to achieve objectives and respond to risks. Management is to continually assess the knowledge, skills, and ability needs of the entity so that the entity is able to obtain a workforce that has the required knowledge, skills, and abilities to achieve organizational goals. In previous work on human capital, we identified key principles for effective strategic workforce planning, including determining the critical skills and competencies needed to achieve current and future programmatic results and developing strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies. However, as of March 1, 2019, HUD had not hired any additional staff; provided documentation showing that the number of staff it planned to hire would be sufficient to oversee current CDBG-DR funds and funds appropriated for Hurricanes Florence and Michael; or determined that staff have the needed knowledge, skills, or abilities. HUD did not have this information because it had not conducted strategic workforce planning. According to HUD officials, they were in the process of evaluating the division's organizational structure. Without strategic workforce planning that determines if the number of staff HUD plans to hire is sufficient to oversee the growing number of CDBG-DR grants, identifies the critical skills and competencies needed, and includes strategies to address any gaps, HUD will not be able to identify the staffing resources necessary to oversee CDBG-DR grants.
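As a minimal sketch of the gap analysis that strategic workforce planning entails, the snippet below compares needed and on-board staff by critical skill and flags shortfalls to be addressed through hiring or other strategies. The position titles and counts are hypothetical illustrations, not HUD data.

    # Hypothetical staffing gap analysis: compare positions needed against
    # positions filled, by critical skill, and flag the shortfalls.
    needed = {"grant manager": 18, "financial analyst": 6, "monitoring specialist": 10}
    on_board = {"grant manager": 11, "financial analyst": 4, "monitoring specialist": 3}

    for skill, target in needed.items():
        filled = on_board.get(skill, 0)
        if target > filled:
            print(f"{skill}: {target - filled} positions to fill ({filled} of {target} on board)")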
HUD and CDBG-DR Grantees Face Challenges with Program Design and Administration Because CDBG-DR lacks permanent statutory authority, each CDBG-DR appropriation requires HUD to customize grantee requirements for the disaster at hand. The ad hoc nature of CDBG-DR has created challenges for CDBG-DR grantees, such as lags in accessing funding and varying requirements. CDBG-DR grantees have also experienced administrative challenges not related to the lack of permanent statutory authority, such as challenges with grantee capacity, procurement, and improper payments. Lack of Permanent Statutory Authority Has Led to Challenges Such as Lags in Accessing Funding and Varying Requirements Although Congress has used CDBG to meet unmet disaster recovery needs since 1993, it has not established permanent statutory authority for CDBG-DR. Because of its flexibility, Congress has relied on CDBG and provided numerous supplemental appropriations for more than $86 billion in CDBG-DR funds to HUD. When Congress appropriates CDBG-DR funds, it also grants HUD broad authority to waive CDBG program requirements and establish alternative requirements for CDBG-DR funds via Federal Register notices. For example, in consecutive notices for disasters that occurred from 2001 through 2016, HUD waived the requirement that 70 percent of CDBG funds received by the state over a 1- to 3-year period be for activities that benefit persons of low and moderate income. For disasters from 2004 through 2017, it issued a waiver permitting states to directly administer CDBG-DR funds, rather than distributing all funds to local governments as is required under the traditional CDBG program. Also, since 2001 HUD has waived the requirement for CDBG action plans and instead required grantees to submit to HUD an action plan for disaster recovery. Because CDBG-DR is not a permanently authorized program, HUD officials stated that they have not established permanent regulations. Legislation was proposed in the 115th Congress that would have permanently authorized the CDBG-DR program, but it was not enacted. According to HUD officials, they provided technical drafting assistance on this bill. As of February 2019, Congress had not permanently authorized CDBG-DR or any other program to meet unmet disaster needs. Unlike CDBG-DR, other federal disaster assistance programs, such as those administered by FEMA and SBA, are permanently authorized. In 1988, the Stafford Act created permanent statutory authority for much of the disaster assistance system in place today. Under this act, FEMA has multiple mechanisms for providing assistance. For example, FEMA's Individual Assistance program provides various forms of help following a disaster, such as financial assistance for housing, unemployment compensation, and crisis counseling. In the late 1950s, the Small Business Act permanently authorized the SBA Disaster Loan Program, which provides low-interest direct loans to businesses, homeowners, and renters to repair or replace property. A recent report on climate change supports a growing need for a permanent program to address unmet disaster needs. According to a 2018 report from the U.S. Global Change Research Program, the frequency and intensity of extreme weather and climate-related events are expected to increase. The report noted that as hurricane damage can be attributed to a warmer atmosphere and warmer, higher seas, there is a need to rebuild to more resilient infrastructure and develop new frameworks for disaster recovery.
In part because Congress has not established permanent statutory authority for CDBG-DR or some other program to address unmet needs, GAO, the HUD OIG, and some of the 2017 grantees have cited a number of challenges. These include lags in accessing funding and varying requirements. Lags in accessing funding. For earlier hurricanes, it took at least a month for HUD to issue the Federal Register notices that outlined the CDBG-DR requirements for each disaster. For the 2017 disasters, it took longer. As noted previously, these notices lay out the steps that grantees must take before they can enter into grant agreements with HUD and begin expending funds. As shown in figure 5, it took 45 days for HUD to issue the requisite Federal Register notice after the first appropriation for the 2005 Gulf Coast hurricanes, 35 days after the first appropriation for Hurricane Sandy, and 154 days (or 5 months) after the first appropriation for the 2017 hurricanes. According to HUD officials, they delayed issuance of the first notice for the 2017 hurricanes because they expected a second appropriation and wanted to allocate those funds in the same notice. After HUD issued the Federal Register notices, it generally took the grantees months to complete all of the required steps to enter into grant agreements. For example, it took each of the 2017 grantees over 6 months to execute grant agreements with HUD. Two 2017 grantees that we interviewed suggested that the CDBG-DR process could be shortened if there were an established set of rules for states to follow instead of waiting months for a new Federal Register notice to be published for each allocation. One grantee told us that CDBG-DR should be codified as a formal program with basic rules in place so that grantees do not have to wait months for a notice to be published before they begin planning. In a May 2018 hearing on CDBG-DR, a 2017 grantee testified that disaster recovery could be greatly expedited if HUD had written regulations that governed CDBG-DR allocations. The official stated that states would not have to wait for the Federal Register notice to be published to begin designing activities and developing action plans. Similarly, for our January 2010 report on the Gulf Coast hurricanes, HUD officials told us that a permanently authorized CDBG-DR program would allow HUD to issue permanent regulations and reduce the need for Federal Register notices and the use of waivers after each disaster, thereby allowing funds to be available for providing assistance sooner. As part of our current review, HUD officials reiterated that a permanently authorized CDBG-DR program would allow HUD to issue permanent regulations. They stressed that for a permanently authorized CDBG-DR program to be effective, Congress would need to provide HUD the flexibility to waive traditional CDBG statutory requirements and adopt alternative requirements to help address recovery needs. Varying requirements. CDBG-DR grant requirements vary from notice to notice. In a July 2018 report, the HUD OIG found that as of September 2017, HUD used 61 notices to oversee 112 active disaster recovery grants totaling more than $47.4 billion, and would issue additional notices for funding provided in 2017 and 2018. The HUD OIG also noted that as of February 2017, Louisiana had seven open grants and had to follow 45 Federal Register notices, and that Texas had six open grants and had to follow 48 Federal Register notices.
Officials from one of the 2017 grantees we interviewed said it was challenging to manage seven different CDBG-DR grants, each with different rules. As an example, they noted that 2015 grant funds cannot be used on levees, while funds from other years can be. To help manage these different requirements, they stated that they must tie each grant to the relevant public law in their grant management system. To further ensure compliance with the various notices, their legal department prepares a new template for the agreement that the state signs with subrecipients for each public law. Officials from another 2017 grantee stated that it was difficult to build an infrastructure for managing current and future CDBG-DR funds because the rules could differ for each allocation, making it more difficult for grantees to manage and comply with the varying requirements. According to HUD officials, the requirements have varied due to differences in appropriations language and policies across administrations and changes made in response to input from the HUD OIG. In addition, the July 2018 HUD OIG report identified 59 duplicative or similar requirements in most of the notices that could benefit from a permanent framework. For example, the following rules or waivers were consistently repeated: allowing states to directly administer grants and carry out eligible activities, requiring grantees to submit an action plan, requiring grantees to review for duplication of benefits, allowing states to use subrecipients, and allowing flood buyouts. The HUD OIG recommended that the Office of Block Grant Assistance work with its Office of General Counsel to codify CDBG-DR in regulations. HUD disagreed with this recommendation, stating that it lacked statutory authority to create a permanent CDBG-DR program. In commenting on the report, HUD acknowledged that the current process of changing appropriations requirements, which results in waivers and alternative requirements, can be challenging. It further stated that congressional direction would be needed for a more standard, regulation-governed program. Further, we and others have cited four additional challenges that could be addressed in a statute permanently authorizing CDBG-DR or another disaster assistance program for unmet needs. Lag between a disaster and appropriation of CDBG-DR funds. In a July 2015 report on Hurricane Sandy, we found that the unpredictable timing of the appropriation for CDBG-DR challenged grantees' recovery planning. As shown in figure 6, the first CDBG-DR supplemental appropriation for the Gulf Coast hurricanes was enacted 4 months after the first Gulf Coast hurricane occurred. Less time elapsed between Hurricane Sandy and the first appropriation (3 months) and between Hurricane Harvey, the first of the 2017 hurricanes, and the first appropriation (2 weeks). In contrast, a presidential disaster declaration activates the provision of funds from FEMA's Disaster Relief Fund. The SBA Disaster Loan Program is also activated by a presidential disaster declaration. Congress funds both programs through annual appropriations. Lag in spending funds once grant agreements have been signed. Once grantees have entered into grant agreements with HUD, it can take years for them to implement activities and expend all of their CDBG-DR funds. There is no consensus on the amount of time it should take grantees to expend their funds.
Congress has established obligation and expenditure deadlines, such as through a provision in the Disaster Relief Appropriations Act, 2013. In that act, which applies to 47 grants, grantees are required to spend the funds within 24 months of obligation unless the Office of Management and Budget (OMB) provides a waiver. Similarly, the appropriations for the 2017 disasters also must be expended within 24 months of the date of obligation, and OMB is authorized to provide a waiver of this requirement. In addition, legislation has been proposed that would require funds to be expended within 6 years, with the possibility of an extension up to 3 years upon a waiver by OMB. Since 2015, HUD has imposed a requirement that grantees expend their funds within 6 years of signing a grant agreement. According to HUD officials, they chose 6 years because their research showed that most expenditure activity occurs within the first 6 years of the grant. However, for 9 (18 percent) of the 50 grants awarded in fiscal years 2012 and 2013 that are at or approaching the original 6-year mark, grantees had expended less than half of the funds. Some of these grantees have received extensions that allow their grants to remain open until September 2022. According to HUD, a number of factors can delay recovery efforts, including subsequent disasters, litigation, and limited construction seasons due to weather. See appendix III for more information on these grants. Housing programs that are not aligned with unmet needs. In past work, we found that CDBG-DR grantees are not required to align their housing activities with the needs of the affected communities. In a January 2010 report on the Gulf Coast hurricanes, we found that states used their broad discretion and additional flexibility to decide what proportion of their CDBG-DR funds went to homeowner units and rental units. In Louisiana and Mississippi, more homeowner units were damaged than rental units, but the proportional damage to rental stock was generally greater. However, 62 percent of damaged homeowner units were assisted and 18 percent of rental units were assisted. We recommended that Congress consider providing more specific direction regarding the distribution of disaster-related CDBG assistance that states are to provide for homeowners and renters. Since the Gulf Coast hurricanes, Congress has appropriated funding for subsequent disasters; however, as of February 2019, no appropriations had addressed this issue. Coordination with multiple federal agencies. In our July 2015 report on Hurricane Sandy, we found that different federal disaster response programs are initiated at different times, making it challenging for state and local officials to determine how to use federal funds in a comprehensive manner. In response to a survey that we conducted for that report, 12 of 13 states and cities reported that navigating the multiple funding streams and various regulations was a challenge that affected their ability to maximize disaster resilience opportunities. For example, state officials we interviewed for that report noted the redundancy of some federal requirements for receiving disaster assistance, such as the duplication of environmental reviews, which are required by both HUD and FEMA.
In our January 2010 report on the Gulf Coast hurricanes, we noted that a Department of Homeland Security study indicated that experts should discuss how challenges associated with the different federal efforts that provide disaster recovery assistance—such as CDBG-DR and those administered by FEMA—could be addressed. The study also suggested that experts explore new methods for delivering assistance. In our June 2009 report on CDBG-DR, we also found that guidance for the Gulf Coast disaster recovery was insufficient and that conflicting federal decisions hindered coordination of CDBG-DR and FEMA's Hazard Mitigation Grant Program funds. We recommended that HUD coordinate with FEMA to ensure that new guidance clarified the potential options, and limitations, available to states when using CDBG disaster assistance funds alongside other disaster-related federal funding streams. HUD issued the guidance, and the recommendation was closed in November 2011. Without permanent statutory authority for a disaster assistance program that meets verified unmet needs, grantees will likely continue to encounter the challenges associated with needing customized grant requirements for each disaster, such as funding lags and varying requirements. Permanent statutory authority could also improve coordination among federal agencies that administer disaster funds. Grantees Have Faced Administrative Challenges, Such as Building Capacity and Avoiding Improper Payments In addition to the challenges experienced because CDBG-DR is not permanently authorized, reports on prior disasters cited CDBG-DR administrative challenges such as building capacity, avoiding improper payments, and following procurement processes. Grantee capacity. Grantees have experienced difficulties establishing the necessary capacity to manage large CDBG-DR grants. An Urban Institute testimony described constraints on grantees' comprehensive capacity building. Specifically, it identified expertise and program management as repeated sources of challenges, citing the limited availability of skilled staff. In addition, a paper on large-scale disaster recovery reported that large-scale CDBG-DR programs are significantly larger than traditional CDBG programs, and that many grantees need to hire private contractors to fill gaps in expertise and operational capacity. We also found in our June 2009 report on Gulf Coast disaster recovery that Louisiana and Mississippi lacked sufficient capacity to administer and manage CDBG-DR programs of such unprecedented size. As discussed previously, the 2017 grantees plan to hire more staff to administer CDBG-DR funds. However, officials of one grantee and HUD officials said they are all competing for the same small pool of potential applicants with CDBG-DR expertise. HUD officials said grantees in Puerto Rico and the U.S. Virgin Islands face the additional challenge of relocating potential candidates, and, in the case of Puerto Rico, finding bilingual candidates. Improper payments. Our prior reports and those of the HUD OIG have identified improper payments as an ongoing challenge for HUD and CDBG-DR grantees. In February 2015, we found that HUD's policies and procedures did not address all key requirements for estimating improper payments for Hurricane Sandy CDBG-DR funds.
To help ensure that HUD produced reliable estimates of its improper payments, we recommended that HUD revise its policies and procedures by (1) requiring payments to federal employees to be included in populations for testing as required by the Improper Payments Information Act of 2002, as amended, and (2) including steps to assess the completeness of the population of transactions used for selecting the samples to be tested. HUD concurred with our recommendation and has since updated its policies and procedures to require that payments to federal employees be included in the improper payment testing for the program. However, because it has not yet taken steps to ensure that all grantee files are included in the population for testing improper payments, this recommendation remained open as of February 2019. The HUD OIG also has conducted numerous audits of the internal controls of prior CDBG-DR grantees, a number of which resulted in findings related to improper payments. For example, in an August 2017 report on the State of New Jersey, the OIG found that the state disbursed Sandy CDBG-DR funds to homebuyers who did not meet all of the program eligibility requirements. It also found in a December 2016 report that the City of New York disbursed more than $18.2 million in CDBG-DR funds for state sales tax on program repairs and maintenance services that the city was not legally required to pay under New York state law. In a July 2016 report on the administration of SBA and CDBG-DR disaster assistance, the Congressional Research Service noted that the availability and timing of disaster assistance from different sources can result in agencies providing duplicative assistance. In addition, according to SBA data we reviewed for our July 2010 report on the Gulf Coast hurricanes, SBA determined that 76 small businesses approved for loans under Louisiana’s Business Recovery Grant and Loan Program, funded by CDBG-DR, received duplicate benefits under SBA’s Disaster Loan Program. In the appropriations acts for the 2017 disasters, Congress required federal agencies, including HUD, to submit their plans for ensuring internal control over disaster relief funding to Congress, among others. HUD submitted its plan to Congress on November 2, 2018. As previously noted, we are conducting a separate review on, among other things, HUD’s internal control plan. Procurement. The HUD OIG has issued nearly 20 audits on disaster recovery grantees that contained findings related to procurement, including reviews of grantees that received funds to recover from the Gulf Coast hurricanes and Hurricane Sandy. In a September 2017 report, the HUD OIG found that HUD did not provide sufficient guidance and oversight to ensure that state disaster grantees followed proficient procurement processes. The OIG focused on whether HUD staff had ensured that the grantee had adopted federal procurement standards or had a procurement process that was equivalent to those standards. It made four recommendations to help ensure that products and services are purchased competitively at fair and reasonable prices in future disaster allocations. In a September 2016 report, the HUD OIG described the results of an initiative by the Council of the Inspectors General on Integrity and Efficiency to review funds provided by the Disaster Relief Appropriations Act, 2013. This review was conducted by the HUD OIG and the OIGs for seven other agencies that received funds for Hurricane Sandy and other disasters under the act. 
The HUD OIG pointed out a range of contracting issues that HUD grantees faced, including that they billed outside the scope of work, lacked competitive procedures or full and open competition, and had unsupported labor costs. It attributed these challenges to HUD and the grantees (1) not understanding federal contracting regulations and cost principles and (2) lacking internal controls over procurement processes. As a result, the HUD OIG stated that HUD and grantees did not know whether they received the best value and greatest overall benefit from their various disaster relief procurement contracts, amendments, and change orders. The OIG concluded that the Council of the Inspectors General on Integrity and Efficiency should work with HUD to ensure the agency, grantees, and contractors complied with federal contracting requirements. The HUD OIG also recommended in a May 2018 report that Texas adhere more closely to federal procurement regulations in applying for and expending CDBG-DR grants. It recommended that HUD require the grantee to (1) ensure that its procurement and expenditure policies and procedures are implemented and working as designed and (2) ensure that warnings about false statements and false claims are included in all of its contract-related forms. Texas responded that it would continue to strengthen its current program structure. Monitoring. In our June 2009 report on CDBG-DR guidance for the Gulf Coast disaster recovery, we found that in addition to HUD’s four to five on-site monitoring and technical assistance visits per year, a number of state officials needed clarification of federal regulations, environmental requirements, and waivers related to the use of CDBG-DR funds in disaster recovery. Although HUD had field offices in both Louisiana and Mississippi, the CDBG-DR grant management responsibilities were handled by HUD headquarters staff. Grantees in both states emphasized that an additional onsite presence from HUD would have been beneficial to their recovery efforts. In addition, in a May 2018 report on the monitoring by HUD’s Office of Community Planning and Development (CPD) of grantees’ compliance with requirements contained in the Disaster Relief Appropriations Act, 2013, the HUD OIG found a lack of monitoring of grantees’ drawdown transactions. The OIG recommended that CPD monitor these transactions to ensure that grantees appropriately record transactions. HUD agreed to open an investigation to review the transactions before responding to the recommendation. Conclusions CDBG has been widely viewed as a convenient, expedient, and accessible tool for meeting needs in disaster-impacted communities that are not met by other federal and private sources, but CDBG-DR has proven to be slow for HUD and grantees to implement. Over a year after Congress first appropriated CDBG-DR funds for recovery from the 2017 hurricanes, grantees have generally not drawn down these funds to aid hurricane victims because they continue to plan and design their activities. While it is important to provide disaster assistance promptly, HUD also needs to ensure that grantees are well positioned to administer the funds. Before expending funds, HUD required grantees to submit planning documentation, but its review of this documentation was limited. Specifically, HUD did not have adequate guidance for staff to use when assessing the adequacy of grantees’ financial controls, procurement processes, and grant management procedures and of their capacity and unmet needs assessments. 
HUD also did not maintain documentation to substantiate staff’s conclusions that the grantees’ submissions were sufficient. By developing additional guidance for staff to use in evaluating the quality of grantees’ financial processes and procedures and capacity and unmet needs assessments, HUD can provide better assurance that its reviews are thorough and consistent. Further, without documenting the basis for its conclusions when reviewing future grantees’ submissions, stakeholders and decision makers lack information on why HUD concluded that grantees’ financial processes and procedures and capacity and unmet needs assessments were adequate. HUD also misses an opportunity to leverage this information later to mitigate risk and inform its monitoring of grantees. HUD’s monitoring of the 2017 grantees will be critical given challenges that the HUD OIG has identified with grantees’ procedures and our concerns about HUD’s reviews of grantees’ initial submissions. But HUD did not have a monitoring plan that reflected the specific risk factors of each grantee and outlined the scope of its monitoring. A comprehensive monitoring plan would help HUD ensure that its oversight of grantees’ compliance with grant requirements focused on grantees’ areas of greatest risks. Further, HUD did not yet have the staff in place to effectively oversee CDBG-DR funds. Without strategic workforce planning that determines if the number of staff the agency will be able to hire is sufficient to oversee the growing number of CDBG-DR grants, identifies the critical skills and competencies needed, and includes strategies to address any gaps, HUD will not be able to identify the staffing resources necessary to oversee CDBG-DR grants. Finally, if the federal government continues to use the CDBG program for federal disaster assistance, grantees will likely encounter many of the same challenges they have in the past—including lags in accessing funding, requirements that may vary for each disaster, and difficulties coordinating with multiple federal agencies. Establishing permanent statutory authority for a disaster assistance program that meets verified unmet needs in a timely manner would provide a consistent framework for administering funds for unmet needs going forward. The program could be administered either by HUD or another agency that had authority to issue associated regulations. Such a statute and regulations could create consistent requirements for grantees and specify how the program would fit into the federal government’s disaster assistance framework. The importance of establishing permanent statutory authority for such a program is underscored by the expected increase in the frequency and intensity of extreme weather and climate-related events. Matter for Congressional Consideration Congress should consider legislation establishing permanent statutory authority for a disaster assistance program administered by HUD or another agency that responds to unmet needs in a timely manner and directing the applicable agency to issue implementing regulations. Recommendations for Executive Action We are making the following five recommendations to HUD: The Assistant Secretary for Community Planning and Development should develop additional guidance for HUD staff to use when assessing the adequacy of the financial controls, procurement processes, and grant management procedures that grantees develop. 
(Recommendation 1) The Assistant Secretary for Community Planning and Development should develop additional guidance for HUD staff to use when assessing the adequacy of the capacity and unmet needs assessments that grantees develop. (Recommendation 2) The Assistant Secretary for Community Planning and Development should require staff to document the basis for their conclusions during reviews of grantees’ financial controls, procurement processes, and grant management procedures and capacity and unmet needs assessments. (Recommendation 3) The Assistant Secretary for Community Planning and Development should develop and implement a comprehensive monitoring plan for the 2017 grants. (Recommendation 4) The Assistant Secretary for Community Planning and Development should conduct workforce planning for the Disaster Recovery and Special Issues Division to help ensure that it has sufficient staff with appropriate skills and competencies to manage a growing portfolio of grants. (Recommendation 5) Agency Comments and Our Evaluation We provided a draft of this report to HUD for comment. In written comments, which are summarized below and reproduced in appendix IV, HUD partially agreed with two of our recommendations and generally agreed with the remaining three. HUD partially agreed with the draft report’s first recommendation to develop standards for HUD staff to use when assessing the adequacy of the financial controls, procurement processes, and grant management procedures that grantees develop. HUD disagreed that it needed to develop standards for financial processes and procedures, stating that such standards already exist. Specifically, HUD pointed to the February 2018 Federal Register notice, which states that a grantee has proficient financial policies and procedures if it submitted to HUD certain information for its review. In the draft report, we acknowledged that the notice required grantees to submit information such as certain audits, financial reports, and their financial standards. However, we concluded that the notice does not describe how HUD reviewers should assess the quality of those financial standards. HUD agreed that providing additional guidance to staff on defining the specific conditions that must exist within these documents would improve its proficiency determination. This was the intent of the recommendation included in the draft report. However, to avoid confusion, we revised the recommendation and related report language to further clarify our intent by substituting “additional guidance” for “standards.” HUD also partially agreed with our second recommendation to develop standards for HUD staff to use when assessing the adequacy of grantees’ capacity and unmet needs assessments. Similar to our first recommendation, HUD stated that the standards for HUD staff to use when assessing the adequacy of these assessments are included in the February 2018 Federal Register notice. Specifically, HUD noted that it states that HUD will determine the grantee’s implementation plan, which contains its capacity assessment, to be adequate if it addresses the items required in the notice. HUD also stated that the notice directed grantees to develop a needs assessment to understand the type and location of community needs and to target limited resources to those areas with the greatest need. 
In the draft report, we acknowledged that the notice required grantees to submit (1) an implementation plan that describes, among other things, their capacity to carry out the recovery and how they will address any capacity gaps for HUD and (2) an action plan for disaster recovery that includes an assessment of unmet needs to help grantees understand the type and location of community needs and to target their CDBG-DR funds to those areas with the greatest need. However, we concluded that the notice does not describe how HUD reviewers should assess the adequacy of these assessments. HUD agreed that providing additional guidance to HUD staff on defining the specific conditions that must exist within the documents grantees submit to HUD would improve the review of grantee capacity. HUD also agreed that there was an opportunity to improve the consistency of HUD’s review of grantees’ action plans, including their unmet needs assessments. Because providing additional guidance to HUD staff was the intent of the recommendation in the draft report, we revised the recommendation and related report language to clarify our intent by substituting “additional guidance” for “standards.” HUD generally agreed with our remaining three recommendations. HUD agreed with our third recommendation to document the basis for conclusions during reviews of grantees’ financial controls, procurement processes, and grant management procedures and capacity and unmet needs assessments, stating that it will require staff to better document their analysis. HUD also agreed with our fourth recommendation to develop and implement a comprehensive monitoring plan for the 2017 grants, stating that such a plan is necessary to effectively manage the growing portfolio of CDBG-DR grants. It provided a monitoring schedule for fiscal year 2019 that it characterized as a monitoring plan, and noted that it had begun identifying monitoring strategies for all monitoring reviews that would occur from March 2019 through May 2019. It also said it would develop the remaining strategies after the initial monitoring reviews. However, HUD still needs to develop a plan that identifies the specific risk factors of each grantee and outlines the scope of its monitoring. Similarly, HUD agreed with our fifth recommendation to conduct workforce planning for the Disaster Recovery and Special Issues Division. It stated that the division had developed a staffing plan to address long-term oversight and management of the CDBG-DR portfolio and, as of March 1, 2019, expected to fill 14 positions over the next 3 months. In addition, it stated that it had identified an approach to secure 20 additional positions to support CDBG-DR and expected to finalize this approach in the next few weeks once it was approved by HUD’s financial and human capital officials. We added this updated information to the report. While developing a staffing plan is a good first step, HUD still needs to conduct workforce planning to determine if the number of staff it will be able to hire is sufficient to oversee the growing number of CDBG-DR grants, identify the critical skills and competencies needed, and develop strategies for addressing any gaps. HUD also provided the following comments on our findings. Regarding the discussion of unmet needs assessments, HUD noted that the draft report does not acknowledge that the second appropriation for 2017 disasters directed HUD to provide a minimum of $11 billion for Puerto Rico and the U.S. 
Virgin Islands for unmet needs, which made HUD’s standard methodology for determining the allocation based on unmet needs data moot. HUD stated that this information is critical to understanding the allocation of funds toward unmet needs associated with 2017 disasters. Our review of the unmet needs assessments focused on the first CDBG-DR appropriation of $7.4 billion, for which HUD used its standard methodology to allocate the funds. We focused on this initial allocation because HUD had reviewed and approved the grantees’ unmet need estimates for these funds. In response to HUD’s comment, we added language to the report that $11 billion was to be allocated to Puerto Rico and the U.S. Virgin Islands where we make reference to the second CDBG-DR appropriation of $28 billion. Regarding the discussion of our prior work that found that CDBG-DR grantees are not required to align their housing activities with the needs of the affected communities, HUD stated the agency had implemented requirements that directed grantees to ensure that CDBG-DR funding allocations are reasonably proportionate to the total remaining unmet needs for housing, infrastructure, and economic revitalization. It also noted that the February 2018 Federal Register notice directs grantees to propose an allocation of CDBG-DR funds that primarily considers unmet housing needs. The focus of our discussion was the status of our recommendation that Congress consider providing more specific direction on the distribution of CDBG-DR funds. Although we acknowledged in the draft report that HUD instructed the 2017 grantees to primarily use their initial CDBG-DR allocation to meet unmet housing needs, we did not do so in the section of the draft report that discussed this prior work. In response to HUD’s comment, we added similar language in that section. Regarding our discussion of prior HUD OIG reports on grantee procurement practices, HUD said there has been a protracted disagreement between HUD and the HUD OIG regarding the procurement requirements that may be imposed on CDBG-DR recipients, specifically the definition of “equivalent.” HUD stated that the most recent resolution of this disagreement came in a January 10, 2017, decision memorandum from the former HUD Deputy Secretary, supported by a legal opinion from HUD’s Office of General Counsel. According to HUD, these documents supported CPD’s position that states have the authority to follow their own procurement standards. However, according to the HUD OIG’s December 2018 semiannual report, the HUD OIG disagreed with this assessment and referred this issue to the Deputy Secretary on March 31, 2017. The report noted that, as of the end of fiscal year 2018, the HUD OIG had not received a decision. We revised the report to state that HUD and the HUD OIG have an ongoing disagreement. Regarding a HUD OIG report on Florida that we cited, HUD said it was evident that the state’s financial policies and capacities were functioning effectively because the state independently corrected a bookkeeping error prior to the HUD OIG audit. However, the HUD OIG noted in the report that Florida corrected the error the OIG identified during the audit. Florida agreed with the finding and accepted the recommendation. Therefore, we made no change to the report. Further, HUD noted that the draft report cites recommendations from a number of prior HUD OIG audits that had been closed or where fundamental disagreement existed between HUD and the HUD OIG. 
In the few instances where we did not provide the status of HUD OIG recommendations to HUD, we added their status to the report. Regarding our analysis of the status of 2012 and 2013 CDBG-DR grants, HUD stated that the draft report included a simplified analysis of CDBG-DR grant performance that dismissed HUD’s determination that disbursements from a CDBG-DR grant are substantially completed 6 years after the effective date of the agreement. It noted that our analysis excluded grants that were closed out and included grants that should not have been included because they had a contract-effective date of mid-2015 or later. However, our analysis that HUD commented on draws from its own publicly available monthly report entitled “Monthly CDBG-DR Grant Financial Report.” Based on HUD’s comments, the report appears to be missing key information on the timing of the grants—namely, some grants identified as 2012 and 2013 grants had effective dates of 2015 or later. Further, many of the grants that HUD said were unfairly included in our analysis were designated as “slow spenders” in HUD’s own monthly report. We reviewed the additional documentation HUD provided and updated our analysis. HUD also provided technical comments, which we incorporated as appropriate. We considered three comments to be more than technical in nature. First, HUD stated that the draft report (1) was critical of grantee capacity challenges, implying that the varying requirements in the numerous Federal Register notices further tax a grantee’s capacity, and (2) suggested that permanent regulatory authority for CDBG-DR would begin to address these issues. However, the draft report identified grantee capacity as an administrative challenge that CDBG-DR grantees face that is not related to the lack of permanent statutory authority. Second, HUD stated that the primary cause of the “ad hoc nature” of the CDBG-DR program and grantee capacity challenges is the unpredictability of disasters and the uniqueness of each recovery effort, not the lack of permanent statutory authority. It said that each congressional appropriation includes unique statutory provisions aimed at making incremental program improvements that can only be implemented through a new Federal Register notice. We recognize that each disaster is unique, but as our past work and that of the HUD OIG have shown, there are certain challenges associated with meeting customized grant requirements for each disaster—such as funding lags, varying requirements, and coordination with multiple programs—that could be addressed if Congress considered permanently authorizing a disaster assistance program that meets unmet needs. Third, HUD stated that CDBG-DR funds are distinct from FEMA and SBA response and recovery resources because FEMA and SBA disaster programs have a narrower scope. HUD noted that CDBG-DR funds aid in a community’s long-term recovery from a catastrophic disaster, which requires substantial time for planning the community-wide recovery effort. We recognize that long-term recovery takes time, but we maintain that this does not prohibit Congress from considering legislation establishing permanent statutory authority for a disaster assistance program that responds to unmet needs. Because we believe the draft report adequately addressed the various issues HUD raised, we made no changes in response to these comments. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Housing and Urban Development, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology Our objectives were to examine (1) the status of the 2017 Community Development Block Grant Disaster Recovery (CDBG-DR) grants; (2) the steps the 2017 CDBG-DR grantees have taken to establish financial processes and procedures, build capacity, and estimate unmet needs; (3) the extent to which the Department of Housing and Urban Development (HUD) has reviewed the steps that grantees have taken and developed plans for future monitoring; and (4) the challenges that HUD and grantees have faced in administering grants. We focused our review on the states of Florida and Texas and the U.S. territories of Puerto Rico and the U.S. Virgin Islands—the states and territories most directly affected by Hurricanes Harvey, Irma, and Maria and that received over $1 billion in CDBG-DR funds to address unmet recovery needs. For all of our objectives, we visited Puerto Rico and Texas to interview officials at the Puerto Rico Department of Housing and Texas General Land Office, respectively, which are the 2017 CDBG-DR grantees in those jurisdictions. During our visit to Puerto Rico, we also met with Puerto Rico’s Central Office of Recovery, Reconstruction and Resilience, which was created to provide administrative oversight of all programs related to disaster recovery. We visited these two grantees because they were the 2017 grantees that received the largest amounts of CDBG-DR funds. We also conducted telephone interviews with officials from the U.S. Virgin Islands Housing Finance Authority and the Florida Department of Economic Opportunity, the 2017 CDBG-DR grantees in those jurisdictions. To determine the status of the 2017 CDBG-DR grants, we reviewed relevant laws and the Federal Register notices allocating the CDBG-DR funds and interviewed HUD officials to determine the steps grantees were required to take before signing a grant agreement and expending their 2017 CDBG-DR funds. We reviewed key documents—such as documentation on financial processes and procedures, implementation plans, and action plans—to determine when they were submitted and approved. To determine how much CDBG-DR funding the 2017 grantees had drawn down, we examined data from the Disaster Recovery Grant Reporting system as of January 2019 (the most recent month available during our review). To assess the reliability of these data, we reviewed relevant documentation on the system and interviewed officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purpose of reporting CDBG-DR draw down information. To determine the steps the 2017 CDBG-DR grantees have taken to establish financial processes and procedures, build capacity, and estimate unmet needs, we reviewed grantees’ documents, such as their organizational charts and capacity assessments, to determine how grantees plan to administer the CDBG-DR grants. 
In addition, we identified and reviewed relevant HUD Office of the Inspector General (OIG) reports to determine whether the office had previously identified concerns about these grantees’ financial processes and procedures and capacity. To determine how grantees calculated their unmet housing needs for homeowners and renters, we determined how HUD calculated grantees’ unmet needs by reviewing the methodology outlined in the Federal Register notices allocating the CDBG-DR funds and interviewing HUD officials. We focused on the calculation HUD used to determine unmet housing needs because the February 2018 Federal Register notice required grantees to primarily use their initial CDBG-DR allocation to address their unmet housing needs. We further focused on the housing needs of homeowners and renters because they constituted the largest portion (ranging from 47 percent in Texas to 99 percent in Florida) of grantees’ total estimates of housing needs. To determine how grantees calculated the housing needs estimates of homeowners and renters and the activities grantees planned to fund with the CDBG-DR grants, we reviewed grantees’ descriptions of their methodologies in the action plans they were required to develop for their initial CDBG-DR allocation. Although we did not conduct an extensive review of the grantees’ methodologies for estimating the unmet housing needs of homeowners and renters, we compared their methodologies to HUD’s methodology (described in Federal Register notices), identifying any differences. To examine the extent to which HUD has reviewed the steps that grantees have taken and developed plans for future monitoring, we reviewed HUD documents such as the completed checklists it used to review (1) documentation grantees submitted for certification of their financial controls, procurement processes, and grant management procedures, (2) grantees’ implementation plans, which contained a capacity assessment, and (3) grantees’ action plans for disaster recovery, including their unmet needs assessments. We compared these checklists against relevant statutory and regulatory requirements and internal control standards. In addition, we reviewed examples of unofficial working documents that HUD provided, such as a grantee’s response to HUD questions on the documentation that it had submitted. Further, to determine HUD’s monitoring of the 2017 CDBG-DR grantees, we reviewed HUD documents such as the Office of Community Planning and Development’s monitoring handbook and monitoring schedule for fiscal year 2019 and interviewed HUD officials. We compared HUD’s monitoring policies and procedures against relevant internal control standards. Finally, we interviewed HUD officials about their resource needs, hiring plans, and plans to monitor current and future CDBG-DR grants. We compared HUD’s hiring plans against relevant internal control standards and best practices for workforce planning we have previously identified. To determine the challenges that HUD and grantees have faced in administering grants, we conducted a literature search for reports on CDBG-DR funds used to recover from the 2005 Gulf Coast hurricanes (Katrina, Rita, and Wilma) and Hurricane Sandy. We focused on these hurricanes because Katrina, the costliest of the three Gulf Coast hurricanes, and Sandy were among the top five costliest hurricanes on record in the United States. 
We searched for GAO, HUD OIG, and Congressional Research Service reports and other literature such as government reports, peer-reviewed journals, hearings and transcripts, books, and association publications. To identify GAO reports, we used the search engine on GAO’s public website and searched for relevant terms such as “community development block grant,” “Sandy,” “Katrina,” and “Gulf Coast” from August 2005 (the month of the 2005 hurricanes) to April 2018 (the date of the search). To identify HUD OIG reports, we reviewed disaster-related reports the HUD OIG made available on its public webpages titled “Disaster Oversight Highlights,” “Superstorm Sandy,” and “Hurricane Katrina.” To identify Congressional Research Service reports, we used its public website’s search engine and searched for the terms “community development block grant” and “disaster.” To identify the other literature sources, we searched the following: ABI/INFORM®, Econ Lit, National Technical Information Service, and 20 other databases through GAO’s ProQuest subscription; Nexis; and Congressional Quarterly. We used terms such as “Community Development Block Grant,” “CDBG,” “disaster,” “Katrina,” “Sandy,” “challenge,” and “barrier” and limited the publication date range to between 2005 and 2018. Our searches initially yielded 157 sources. We screened out 23 based on their abstracts and an additional 103 sources after reviewing their full content. We excluded studies that related to the traditional CDBG program rather than CDBG-DR and those that provided general background on CDBG-DR. We determined that the remaining 31 sources were relevant for our purposes and reviewed them to determine if they identified any challenges that HUD and CDBG-DR grantees faced in administering prior CDBG-DR funds. Specifically, we considered any description of concerns with the administration and oversight of CDBG- DR to be a challenge. Using a standard form, one analyst reviewed each source, identified relevant challenges, and assigned the relevant challenges to a category. A second analyst reviewed the identification and categorization. Where there were differences in the review of the first and second analyst, the two conferred and entered a final decision. We also interviewed HUD officials and the 2017 CDBG-DR grantees to obtain their perspectives on the challenges in administering the 2017 grants. To determine the time it took grantees to receive CDBG-DR funds (one of the challenges we identified through our literature review), we reviewed information from the Disaster Recovery Grant Reporting system, HUD notices, and other sources to obtain the dates for the appropriations, allocations, and grant agreement for the Gulf Coast hurricanes, Hurricane Sandy, and the 2017 hurricanes. To determine the time it took grantees to expend their CDBG-DR funds (another challenge we identified through our literature review), we analyzed expenditure data in the Disaster Recovery Grant Reporting system for grants made in fiscal years 2012 and 2013, as of January 1, 2019. We selected these grants because HUD officials told us that grantees generally expend the majority of their CDBG-DR funds within 6 years of signing a grant agreement, and the 2012 and 2013 grantees are approaching this milestone. To assess the reliability of the Disaster Recovery Grant Reporting system data, we reviewed relevant documentation on the system and interviewed officials knowledgeable about the data. 
We determined that the data were sufficiently reliable for the purpose of reporting grant agreement dates and CDBG-DR expenditures. We conducted this performance audit from January 2018 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Approved Community Development Block Grant Disaster Recovery Activities of the Four Largest 2017 Grantees The February 2018 Federal Register notice allocating the initial $7.4 billion in Community Development Block Grant Disaster Recovery (CDBG-DR) funds appropriated for the 2017 disasters requires grantees to use the funds primarily to address unmet housing needs. The initial action plans for the four largest 2017 CDBG-DR grantees—Florida, Texas, Puerto Rico, and the U.S. Virgin Islands—outline the various activities they plan to implement to address unmet needs. These include home buyout and rehabilitation programs to address unmet housing needs, workforce training and business recovery grants to address unmet economic revitalization needs, and the provision of matching funds for FEMA-assisted infrastructure projects to address unmet infrastructure needs. Florida’s Approved CDBG-DR Activities Florida focused its February 2018 CDBG-DR allocation on addressing unmet housing and economic revitalization needs (see table 6). Texas’ Approved CDBG-DR Activities Texas allocated approximately 45 percent of its February 2018 CDBG-DR allocation to the City of Houston and Harris County to directly administer their own CDBG-DR housing and infrastructure activities. Texas plans to use the majority of the remaining funds to address unmet housing needs in other areas affected by Hurricane Harvey (see table 7). Puerto Rico’s Approved CDBG-DR Activities Puerto Rico plans to use over 75 percent of its February 2018 CDBG-DR allocation to address unmet housing and economic revitalization needs (see table 8). U.S. Virgin Islands’ Approved CDBG-DR Activities The U.S. Virgin Islands plans to use about 42 percent of its February 2018 CDBG-DR allocation to address unmet housing and economic revitalization needs (see table 9). Appendix III: Status of 2012 and 2013 Community Development Block Grant Disaster Recovery Grants Congress appropriates Community Development Block Grant Disaster Recovery (CDBG-DR) funds to help states recover from federally declared disasters. Once Congress appropriates CDBG-DR funds, the Department of Housing and Urban Development (HUD) is responsible for allocating the funds to designated grantees in affected areas. According to HUD officials, most expenditure activity in CDBG-DR grants occurs within the first 6 years of the grant. As shown in table 10, of the 50 grants at or approaching the 6-year mark, 9 grantees had expended less than half of the funds. Appendix IV: Comments from the Department of Housing and Urban Development Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Paige Smith (Assistant Director), Josephine Perez (Analyst in Charge), Meredith Graves, Raheem Hanifa, Joe Maher, John McGrail, Marc Molino, Tovah Rom, and Michael Silver made key contributions to this report.
Why GAO Did This Study The 2017 hurricanes (Harvey, Irma, and Maria) caused an estimated $265 billion in damage, primarily in Texas, Florida, Puerto Rico, and the U.S. Virgin Islands. As of February 2019, Congress had provided over $35 billion to HUD for CDBG-DR grants to help communities recover. Communities may use these funds to address unmet needs for housing, infrastructure, and economic revitalization. GAO was asked to evaluate the federal government's response to the 2017 hurricanes. In this initial review of CDBG-DR, GAO examined, among other things, (1) the status of the 2017 grants, (2) HUD's review of the initial steps grantees have taken and its plans for future monitoring, and (3) challenges HUD and grantees face in administering grants. GAO reviewed documentation from the four largest 2017 CDBG-DR grantees and HUD. GAO also reviewed prior work on CDBG-DR and interviewed officials from HUD and the four grantees. What GAO Found As of September 2018, the four states and territories that received the most 2017 Community Development Block Grant Disaster Recovery (CDBG-DR) funds had signed grant agreements with the Department of Housing and Urban Development (HUD). Before signing the agreements, HUD certified the grantees' financial processes and procedures. It also approved the grantees' assessments of their capacity to carry out the recovery and of unmet needs (losses not met with insurance or other forms of assistance). Before funding begins to reach disaster victims, the grantees need to take additional steps, such as finalizing plans for individual activities. As of January 2019, Texas had drawn down about $18 million (of $5 billion) for administration and planning only, and Florida had drawn down about $1 million (of $616 million) for administration, planning, and housing activities. Puerto Rico and the U.S. Virgin Islands had not drawn down any of the $1.5 billion and $243 million, respectively, they had been allocated. HUD lacks adequate guidance for staff reviewing the quality of grantees' financial processes and procedures and assessments of capacity and unmet needs, and has not completed monitoring or workforce plans. The checklists used to review grantees' financial processes and procedures and assessments ask the reviewer to determine if the grantee included certain information, such as its procurement processes, but not to evaluate the adequacy of that information. In addition, the checklists, which include a series of “yes” or “no” questions, do not include guidance that the HUD reviewer must consider. HUD also does not have a monitoring plan that identifies the risk factors for each grantee and outlines the scope of monitoring. Further, HUD has not developed a workforce plan that identifies the critical skills and competencies HUD needs and includes strategies to address any staffing gaps. Adequate review guidance, a monitoring plan, and strategic workforce planning would improve HUD's ability to oversee CDBG-DR grants. Without permanent statutory authority and regulations such as those that govern other disaster assistance programs, CDBG-DR appropriations require HUD to customize grant requirements for each disaster in Federal Register notices—a time-consuming process that has delayed the disbursement of funds. In a July 2018 report, the HUD Office of Inspector General found that as of September 2017, HUD used 61 notices to oversee 112 active CDBG-DR grants. 
Officials from one of the 2017 grantees told us that it was challenging to manage the multiple CDBG-DR grants it has received over the years because of the different rules. CDBG-DR grantees have faced additional challenges such as the need to coordinate the use of CDBG-DR funds with other disaster recovery programs that are initiated at different times and administered by other agencies. HUD officials said that permanently authorizing CDBG-DR would allow HUD to issue permanent regulations for disaster recovery. Permanent statutory authority could help address the challenges grantees face in meeting customized grant requirements for each disaster, such as funding lags, varying requirements, and coordination with multiple programs. The expected increase in the frequency and intensity of extreme weather events underscores the need for a permanent program to address unmet disaster needs. What GAO Recommends Congress should consider permanently authorizing a disaster assistance program that meets unmet needs in a timely manner. GAO also makes five recommendations to HUD, which include developing guidance for HUD staff to use in assessing grantees, developing a monitoring plan, and conducting workforce planning. HUD generally agreed with three recommendations and partially agreed with two, which GAO clarified to address HUD's comments.
Background This section provides an overview of 1) DOE’s administration of its advanced fossil energy R&D program, and 2) DOE’s Loan Guarantee Program (LGP). DOE’s Administration of Its Advanced Fossil Energy R&D Projects Within DOE, the Office of Fossil Energy (FE) carries out DOE’s program for fossil energy R&D, which includes federal research, development, and demonstration efforts on advanced power generation; power plant efficiency; water management; and carbon capture and storage (CCS) technologies. CCS is a process that involves capturing man-made carbon dioxide (CO2) at its source and storing it permanently underground. The program for fossil energy R&D also includes the development of technological solutions for developing U.S. unconventional domestic oil and gas resources, such as from shale formations. FE also oversees the operations, infrastructure, and R&D at the National Energy Technology Laboratory (NETL), among other things. NETL officials told us that NETL has dual roles: it serves as project manager for advanced fossil energy R&D projects that receive federal assistance, and, as a DOE national laboratory, it also conducts applied research. FE and NETL collaborate on the selection and administration of the awards for advanced fossil energy R&D projects, according to DOE officials. DOE’s efforts to administer its program for advanced fossil energy R&D take place across a spectrum of activities, including providing financial assistance for large demonstration projects. In the 1980s and early 1990s, DOE’s fossil energy R&D program primarily focused on reducing emissions of harmful pollutants from coal-fired power plants, particularly sulfur dioxide and nitrogen oxide. For example, DOE began its large demonstration projects of advanced coal technologies in the mid-1980s; this work focused on R&D to mitigate acid rain and to reduce the pollutants released from coal combustion. More recently, DOE has provided funding for advanced fossil energy R&D to reduce CO2 emissions by developing beneficial uses for CO2 from coal-fired power plants, and to improve methods for CCS, among other things. As we have previously reported, CCS is a key technology that shows potential for reducing CO2 emissions. Specifically, CCS technologies separate and capture CO2 from other gases produced when combusting or gasifying coal, compress it, then transport it to underground geologic formations such as saline aquifers—porous rock filled with brine—or oil and natural gas reservoirs, where the captured CO2 is injected and permanently stored. Currently, two fossil-fueled, electricity-generating operations capture CO2 in large quantities: the Boundary Dam plant in Canada and the Petra Nova plant in Texas. Both plants retrofitted CCS technology to existing plants. A third fossil-fueled, electricity-generating operation, the Kemper County Energy Facility in Mississippi, was scheduled to begin CCS operations in 2016, but cost overruns and delays in construction and operations led to the suspension of the plant’s CCS component in June 2017. Each of these power plants using CCS systems may be described as a first-of-its-kind venture, using technologies developed at a pilot scale ramped up to commercial scale. It is not unusual for projects in the demonstration phase of the R&D process to experience higher-than-anticipated costs, delays, and other challenges, according to a 2017 Congressional Research Service report. DOE generally uses announcements of opportunities for federal financial assistance to competitively solicit potential applicants of advanced fossil energy R&D projects. 
According to DOE officials, the department sets priorities for its advanced fossil energy R&D funding each year based in part on the amount appropriated for FE R&D and on FE’s R&D plans, as well as any direction that Congress may have specified for certain types of technology R&D. DOE’s advanced fossil energy R&D projects typically lasted for multiple years. DOE sets milestones for technical progress for each year of a project to ensure that funding recipients accomplish a specific R&D objective or set of objectives, according to DOE officials. The recipient may submit reports on its R&D progress and accomplishments to DOE for review and approval to continue. DOE officials told us that they review the recipient’s progress at each phase and that project continuation is subject to the recipient’s technical progress, the recipient’s compliance with all of the other terms—including any financial terms—of the agreement, and the availability of DOE’s funds, based on congressional appropriations. DOE’s Loan Guarantee Program The LGP was originally designed to address a fundamental impediment to innovative and advanced energy projects: securing enough affordable financing to survive the period between developing innovative technologies and commercializing them. As we have previously reported, these projects have risks, such as technology risk—the risk that the new technology will not perform as expected—and execution risk—the risk that the borrower or project will not perform as expected. Because the risks that commercial lenders must assume to support new technologies can put the cost of private financing out of reach, companies may not be able to commercialize innovative technologies without the federal government’s financial support. Federal loan guarantee programs such as the LGP can help companies obtain financing because the federal government agrees to reimburse the lender for the guaranteed amount if a borrower defaults. Section 1703 of the Energy Policy Act of 2005 (EPAct) authorizes DOE to provide loan guarantees for projects that avoid, reduce, or sequester air pollutants or man-made emissions of greenhouse gases and employ new or significantly improved technologies as compared to commercial technologies in service in the United States at the time the guarantee is issued. EPAct describes several categories of projects that are eligible for guarantees under the program, including, among others, renewable energy systems, efficient end-use energy technologies, advanced nuclear facilities, advanced fossil energy technology, and CCS technologies. DOE’s Loan Programs Office, which administers the LGP, had issued three loan guarantees under Section 1703 supporting nuclear technologies as of August 2018, but none supporting advanced fossil energy or any other technologies. DOE Provided $2.66 Billion in Funding for 794 Advanced Fossil Energy R&D Projects Started from Fiscal Years 2010 through 2017 DOE Provided $1.12 Billion in Funding for Nine Large Demonstration Projects Started from Fiscal Years 2010 through 2017 DOE provided $2.66 billion in funding for 794 advanced fossil energy R&D projects started from fiscal years 2010 through 2017. These 794 projects included 9 later-stage large demonstration projects and 785 other advanced fossil energy R&D projects. DOE provided $1.12 billion in funding to nine large projects aimed at demonstrating the commercial viability of CCS technologies. 
DOE provided $1.54 billion in funding to 785 other R&D projects for both coal and oil and gas technologies, mostly to universities and industry, located in 47 states and the District of Columbia. For nine large demonstration projects started from fiscal years 2010 through 2017, DOE provided $1.12 billion in funding. These projects received that funding from appropriations from the American Recovery and Reinvestment Act of 2009 (Recovery Act) and supported efforts to reduce the financial and technical risks of commercial CCS, according to a 2017 report by the Congressional Research Service. Six demonstration projects researched CCS technologies using coal, while three used other fuels, namely methane, ethanol, and petcoke. Recipients were generally required to provide a certain percentage of the cost of each R&D project, called cost share. Specifically, recipients of funding for the nine large demonstration projects agreed to pay at least $610 million in cost share. Three of those demonstration projects remained active at the end of fiscal year 2017. Four projects had their support withdrawn by DOE, and two were withdrawn by their recipients. These projects ended due to several factors, such as a lack of technical progress, the closure of the Recovery Act appropriations account on September 30, 2015, and changing economic conditions—such as decreased natural gas prices, which changed the relative prices of coal and natural gas. The nine large demonstration projects represented over 40 percent of the $2.66 billion in advanced fossil energy R&D funding for the 794 projects (see fig. 1). Of the $1.12 billion in funding for the advanced fossil energy demonstration projects, DOE provided $616 million in funding for three large demonstration projects that started in fiscal year 2010 and that remained active as of the end of fiscal year 2017. Petra Nova Parish Holdings of Texas has a demonstration project underway that has retrofitted an existing coal-fired power plant in Texas with post-combustion carbon capture technology, according to DOE documentation. The objective of this project is to demonstrate the ability to capture 90 percent of the CO2 from a portion of the plant’s flue gas. According to DOE documentation, DOE’s involvement with the project is scheduled to conclude in December 2019. The Petra Nova project captured and stored its first 1 million metric tons of CO2 in November 2017, according to DOE officials. Archer Daniels Midland of Illinois had a demonstration project underway to capture CO2 using dehydration and compression processes and sequester it in the Mt. Simon Sandstone formation (a saline reservoir) in Illinois. DOE provided $141 million in funding for the project from fiscal years 2010 through 2017. DOE’s involvement with the project is scheduled to conclude in September 2019, according to DOE documentation. During calendar year 2017, the project captured and stored over 500,000 metric tons of CO2. Air Products and Chemicals captures CO2 emitted from two large steam-methane reformers, which produce hydrogen from methane, for its demonstration project in Texas. The captured gas is compressed and sent via pipeline to oil fields in eastern Texas to be used for enhanced oil recovery and thereby sequestered, according to DOE documentation. DOE provided $284 million in funding for the project from fiscal years 2010 through 2017. DOE’s involvement under this demonstration project’s award concluded the last day of fiscal year 2017. 
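The cost-share figures above imply a floor on the recipients' share of the demonstration projects' total cost. Below is a minimal worked check of that arithmetic in Python (illustrative only; because the $610 million cost share is a minimum, the computed share is a lower bound):

```python
# Back-of-the-envelope check of the cost-share split for the nine large
# demonstration projects, using the figures cited above.
doe_funding = 1.12e9          # DOE funding, fiscal years 2010-2017
recipient_cost_share = 610e6  # minimum cost share recipients agreed to pay

total_project_cost = doe_funding + recipient_cost_share
recipient_share = recipient_cost_share / total_project_cost
print(f"recipient share of total cost: at least {recipient_share:.0%}")  # ~35%
```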
DOE Provided $1.54 Billion in Funding for 785 Other R&D Projects Started from Fiscal Years 2010 through 2017 to Support Advanced Fossil Energy DOE provided $1.54 billion in funding for 785 other advanced fossil energy R&D projects started from fiscal years 2010 through 2017. For these 785 R&D projects, DOE provided: on average, $2.0 million per project; a median of $0.8 million per project; less than $5 million to 91.8 percent (721) of the 785 projects; and less than $1 million to 58.1 percent (456) of the projects. For projects started from fiscal years 2010 through 2017, total funding for projects by fiscal year started ranged from less than $100 million to more than $300 million (see fig. 2). As noted earlier, recipients of DOE’s R&D funding were generally required to provide cost share to support the cost of each R&D project. For 661 of the 785 projects, the initially agreed-upon dollar amount to be covered by recipients was $617 million in cost share. Recipients did not provide a cost share for the remaining 124 of the 785 projects, which were predominantly grants without cost share requirements, according to DOE officials. According to DOE data, DOE provided the largest amount of funding for projects started in 2010 because DOE received a supplemental appropriation for fossil energy R&D through the Recovery Act. DOE provided funding for 72 of the coal technologies research projects—totaling $237 million—using appropriations from the Recovery Act, according to DOE data. Of the 785 R&D projects for which DOE provided funding, most advanced fossil energy projects researched coal technologies rather than oil and gas, and recipients of the funding were generally universities and industry groups that were distributed across the country. Most of DOE’s 785 Advanced Fossil Energy Projects Researched Coal Technologies Of the 785 projects, 698 (about 89 percent) involved coal technologies, receiving $1.40 billion (about 91 percent) of the $1.54 billion in funding DOE provided for the projects. The remaining projects and funding supported R&D for oil and gas technologies, according to DOE’s categorization of the projects by fuel type (see table 1). Within each fuel type, projects researched various technology types, such as R&D on coal gasification systems and the mitigation of methane emissions from natural gas infrastructure. The funding for the 785 R&D projects ranged from $5,000 for a research conference (oil and gas) to $125 million for a research facility focused on next-generation CCS technologies (coal). Some coal projects, for example, researched power cycles based on supercritical CO2, which is CO2 held above its critical temperature and pressure so that it does not change phases, but rather undergoes drastic density changes over small ranges of temperature and pressure. Such cycles have shown the potential for increased heat-to-electricity conversion efficiencies, high power density, and simplicity of operation compared to existing steam-based power cycles. Funding for the oil and gas projects ranged from $5,000 for a research conference to $29 million for the University of Texas at Austin’s active project on the deep-water characterization and scientific assessment of gas hydrates. Specifically, DOE identified the following four categories as oil and gas-related research areas: Gas Hydrates: The development of technologies to find, characterize, and recover methane from gas hydrates through field testing, numerical simulation, and laboratory experimentation, among other things. For example, DOE provided the University of California-San Diego $350,000 in funding for a 3-year active project to characterize the baselines and changes in gas hydrate systems. 
Natural Gas Infrastructure: The monitoring of the U.S. natural gas pipeline network, which includes more than 300,000 miles of interstate and intrastate transmission pipelines. For example, DOE provided the University of Pittsburgh $1.2 million in funding for a 3-year active project on multi-functional fiber sensors for pipeline monitoring and methane detection. Onshore Unconventional Resources: The production of hydrocarbons—primarily natural gas—from shale formations. For example, DOE provided the Ground Water Protection Council, of Oklahoma, $13 million for an 8-year project for data management and regulatory approaches related to hydraulic fracturing and geologic sequestration of CO2. Offshore Resources: The R&D projects in this area included research on geologic uncertainty prediction of oil and gas, and improvement of subsea systems reliability through automation and advanced technology. Recipients Were Generally Universities and Industry Groups That Were Distributed Across the Country The recipients of the funding for the 785 advanced fossil energy R&D projects were mostly universities and industry groups that were located in 47 states and the District of Columbia. Of these recipients, approximately 51 percent were universities; 43 percent were industry groups; and 5 percent were other entities, including other federal agencies, such as the U.S. Geological Survey (see table 2). While university recipients received funding for a majority of projects, industry recipients received a majority of the funding (see table 3). Recipients were located in 47 states and the District of Columbia. The three states with the highest numbers of projects were Texas (100), California (61), and Ohio (53). The three states where recipients received the most funding were Texas (about $169 million), Alabama (about $161 million), and California (about $152 million) (see fig. 3). DOE Made No Loan Guarantees for Advanced Fossil Energy from Fiscal Year 2006 through August 2018 Although DOE issued three solicitations for applications for advanced fossil energy loan guarantees—most recently in fiscal year 2014, for up to $8 billion in loan guarantees—DOE had not guaranteed any loans for advanced fossil energy as of August 2018. Specifically, the 2006 and 2008 advanced fossil energy solicitations were for projects that involved coal-based power generation and that would incorporate CCS, coal gasification, or other beneficial uses of carbon, among other things. However, neither solicitation resulted in any loan guarantees, in part because natural gas prices fell in the late 2000s, shifting the market so that such coal-related projects were no longer economically competitive, according to DOE officials. According to the fiscal year 2014 solicitation, applicants could propose projects using any fossil fuel—including coal, oil, or natural gas—so long as the projects would reduce, avoid, or sequester greenhouse gases. In response to the 2014 advanced fossil energy solicitation, DOE officials told us that DOE had received a total of 19 applications. According to DOE officials: Five fossil energy applicants were actively moving through the process of review as of August 2018. For example, in January 2018, one applicant issued a press release stating that it was pursuing a $1.9 billion loan guarantee to support the development of infrastructure for a proposed underground storage facility for natural gas liquids and intermediates. 
- Nine fossil energy applicants had been idle or had not been following up with the Loan Programs Office.
- Three applicants did not meet certain eligibility requirements.
- Two companies withdrew their applications—one in 2014, and one in 2018.

Of the five advanced fossil energy applicants actively in the process of DOE review, DOE offered a conditional commitment to guarantee up to $2 billion in loans to one applicant—Lake Charles Methanol—in December 2016. As we have previously reported, a conditional commitment is one where DOE commits to issue a loan guarantee if the applicant satisfies specific requirements. According to information on the DOE website, the Lake Charles Methanol plant in Louisiana would produce methanol from the gasification of petcoke and capture and transport the CO2 to Texas for enhanced oil recovery. According to DOE documentation, the Lake Charles project planned to leverage the work and experience gained from the earlier DOE demonstration project by Leucadia Energy.

Agency Comments

We provided a draft of this report to DOE for review and comment. DOE provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

In this report, we describe (1) the Department of Energy's (DOE) funding of advanced fossil energy research and development (R&D) projects started from fiscal years 2010 through 2017 and the types of projects and recipients that received funding, and (2) DOE's loan guarantees, if any, for advanced fossil energy projects from fiscal year 2006 through August 2018. You asked us to review DOE's funding for advanced fossil energy projects. To address the first objective, we reviewed relevant laws, regulations, and DOE guidance. We analyzed DOE advanced fossil energy R&D project data for fiscal years 2010 through 2017. We focused our review on advanced fossil energy R&D projects that received funding through the Office of Fossil Energy's (FE) National Energy Technology Laboratory (NETL) because the 794 projects represent all of the advanced fossil energy R&D projects in our scope started from fiscal years 2010 through 2017. We used fiscal year 2010 as the start date because DOE officials told us that DOE's current data management system came into use for the R&D projects that started in fiscal year 2010. We used fiscal year 2017 as the end date because that was the most recent complete year for which data were available. DOE provided us with a spreadsheet that included key project information—such as the name of the recipient of the R&D funding and the project start date—as well as obligations data for each project started during the period of our review (fiscal years 2010 through 2017). We calculated the funding for each project by summing DOE's obligations for the project from each year and grouped the projects by the fiscal year during which they started.
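The per-project funding figures reported earlier (the $2.0 million average, the $0.8 million median, and the shares of projects under $5 million and $1 million) can be derived from such a spreadsheet by summing each project's yearly obligations. The following is a minimal sketch of that computation in Python; it is not GAO's actual analysis, and the table layout and column names (project_id, start_fy, obligations) are hypothetical.

```python
# Minimal sketch (hypothetical data and column names, not GAO's analysis).
import pandas as pd

# One row per project per fiscal year in which DOE obligated funds.
records = pd.DataFrame({
    "project_id":  ["P1", "P1", "P2", "P3", "P3"],
    "start_fy":    [2010, 2010, 2012, 2015, 2015],
    "obligations": [600_000, 200_000, 4_000_000, 900_000, 150_000],
})

# Total funding per project: sum obligations across all years.
per_project = (records
               .groupby(["project_id", "start_fy"], as_index=False)["obligations"]
               .sum())

# Summary statistics analogous to those reported for the 785 projects.
totals = per_project["obligations"]
print("average per project:", totals.mean())
print("median per project:", totals.median())
print("share under $5 million:", (totals < 5_000_000).mean())
print("share under $1 million:", (totals < 1_000_000).mean())

# Total funding by the fiscal year in which each project started.
print(per_project.groupby("start_fy")["obligations"].sum())
```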
We reported on DOE's funding for these R&D projects; DOE generally provided financial assistance for these projects through grants or cooperative agreements. (Federal financial assistance means assistance that non-federal entities receive or administer in the form of grants, property, cooperative agreements, food commodities, direct appropriations, or other assistance, and can also include loans, loan guarantees, interest subsidies, and insurance, depending on the context, but does not include amounts received as reimbursement for services rendered to individuals in accordance with OMB-issued guidance. 2 C.F.R. § 200.40. See also 31 U.S.C. § 7501(5). A grant agreement is generally defined as a legal instrument of financial assistance between a federal awarding agency and a non-federal entity that is used to enter into a relationship the principal purpose of which is to transfer anything of value from the federal awarding agency to the non-federal entity to carry out a public purpose authorized by law, and not to acquire property or services for the federal awarding agency's direct benefit or use. 2 C.F.R. § 200.51. A cooperative agreement is distinguished from a grant in that it provides for substantial involvement between the federal awarding agency and the non-federal entity in carrying out the activity contemplated by the federal award. 2 C.F.R. § 200.24. For purposes of our report, we use the term awards to refer to both grants and cooperative agreements.) In addition, NETL's in-house R&D work was outside the scope of our review. To assess the reliability of the funding data, as well as the specific project information for the 794 R&D projects, we interviewed data specialists at DOE Headquarters, FE, and NETL and reviewed DOE internal guidance for the maintenance of agency data. We found the data to be sufficiently reliable for our purposes. We also reviewed DOE websites and documentation, including fact sheets, and interviewed officials from FE and NETL.

To characterize the kinds of groups that received advanced fossil energy R&D funding, we developed the following definitions for coding each recipient:

University: any institution of higher education, such as a public or non-profit private college, junior college, or university.

Industry: any entity organized primarily for profit. Industry includes some organizations that were founded as non-profit corporations but call themselves "companies" and/or describe "serving clients."

Other: any entity not associated with a university or industry. Other includes groups such as other federal government agencies, as well as non-profit corporations and other entities that we could not identify conclusively as either industry or universities.

We used these three categories, and their definitions, to guide us in the coding process. After developing these definitions, three analysts independently coded each recipient as a university, industry, or other. Our method was to examine the identifying information on each recipient's website and decide which category best described the entity. We also had an independent analyst check the coding category that we had assigned to each recipient and verify that we had made a reasonable coding decision (a simplified sketch of this check appears below). To describe the status of DOE's advanced fossil energy loan guarantees, we reviewed relevant laws, regulations, and guidance, as well as past GAO reports describing DOE's administration of the loan program.
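The three-analyst coding and verification step referenced above can be illustrated with a short sketch. The recipient names, labels, and majority-vote tallying below are hypothetical and do not reproduce GAO's actual procedure.

```python
# Hypothetical illustration of the three-coder categorization check;
# not GAO's actual procedure.
from collections import Counter

CATEGORIES = {"university", "industry", "other"}

# Each recipient was independently coded by three analysts.
codings = {
    "Example State University": ["university", "university", "university"],
    "Example Energy Co.":       ["industry", "industry", "other"],
}

for recipient, labels in codings.items():
    assert all(label in CATEGORIES for label in labels)
    tally = Counter(labels)
    label, votes = tally.most_common(1)[0]
    if votes == len(labels):
        print(f"{recipient}: unanimous -> {label}")
    else:
        # Split decisions would go to an independent analyst for review.
        print(f"{recipient}: split {dict(tally)} -> flag for review")
```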
We also reviewed and analyzed summary information that DOE provided on applications for loan guarantees for advanced fossil energy projects under the Loan Guarantee Program (LGP), as well as other related information, for fiscal year 2006 through August 2018. We used fiscal year 2006 as the start date because it was the first year that DOE issued an advanced fossil energy project solicitation—an announcement of opportunities for loan guarantees for advanced fossil energy projects—and we used August 2018 as the end date to provide the most up-to-date information possible. We also reviewed the advanced fossil energy project solicitations DOE issued during this timeframe. To assess the reliability of the summary information, we interviewed LGP staff who maintain the information for the advanced fossil energy applications and reviewed DOE documentation. We found the data to be sufficiently reliable for our purposes. In addition, we interviewed officials from the Loan Programs Office who work on the LGP. We conducted this performance audit from March 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Karla Springer (Assistant Director), Rebecca Makar (Analyst-in-Charge), TC Corless, Cindy Gilbert, Carol Henn, Kirk Menard, Patricia Moye, Sheryl Stein, and Sara Sullivan made key contributions to this report. Also contributing to this report were Carolyn Blocker, Marcia Carlsen, Nirmal Chaudhary, Jaci Evans, Ryan Gottschall, Keesha Luebke, and John Yee.
Why GAO Did This Study

One aspect of DOE's mission is to secure U.S. leadership in energy technologies. To that end, DOE funds R&D for energy projects, including for advanced fossil energy (innovative technologies for coal, natural gas, and oil). DOE provides funding for R&D projects, including large projects designed to demonstrate the commercial viability of technologies. Also, DOE is authorized to make loan guarantees to support certain energy projects through its Loan Guarantee Program, which is administered by its Loan Programs Office. GAO was asked to review DOE's funding for advanced fossil energy projects. This report describes DOE's funding for advanced fossil energy R&D projects started from fiscal years 2010 through 2017 and the types of projects and recipients that received funding, among other objectives. For purposes of this report, GAO used the term funding to mean obligations. GAO analyzed relevant laws, regulations, and guidance; DOE data on R&D funding for fiscal years 2010 through 2017; and DOE documents. GAO also interviewed DOE officials in the Office of Fossil Energy, the National Energy Technology Laboratory, and the Loan Programs Office.

What GAO Found

The Department of Energy (DOE) provided $2.66 billion in funding, or obligations, for 794 research and development (R&D) projects started from fiscal years 2010 through 2017 to develop advanced fossil energy technologies. Such technologies include processes for converting coal into synthesis gas composed primarily of carbon monoxide and hydrogen, and recovering methane from gas hydrates. Of the $2.66 billion, DOE provided $1.12 billion in funding for 9 later-stage, large demonstration projects, which were to assess the readiness for commercial viability of carbon capture and storage (CCS) technologies. CCS involves capturing man-made carbon dioxide at its source and storing it permanently underground. DOE provided the remaining $1.54 billion in funding for 785 other projects in amounts that were relatively small—over half were for less than $1 million. Six demonstration projects researched CCS technologies using coal, while three used other fuels. The nine demonstration projects received funding ranging from $13 million to $284 million. As shown in the figure, three projects implementing CCS technologies were active as of the end of fiscal year 2017. Also, DOE withdrew its support for four projects, and two projects were withdrawn by the recipients—all before completion. These projects did not reach completion due to several factors, such as a lack of technical progress or changes in the relative prices of coal and natural gas that made the projects economically unviable. Of the 785 other projects, about 89 percent involved R&D of coal technologies, such as coal gasification—the conversion of carbon-containing material into synthesis gas. The other 11 percent of the 785 projects involved R&D of oil and gas technologies, such as the development of technologies to find, characterize, and recover methane from gas hydrates.

What GAO Recommends

GAO is not making any recommendations.
Background

Forms and Distribution of Arsenic

Arsenic is a naturally occurring element that is widely distributed in the earth's crust in two general forms—organic and inorganic. It commonly enters the body through ingestion of food or water. Most data reported for arsenic in food describe the levels of total arsenic because analyses that provide information about the forms of arsenic present are more difficult to perform, and relatively few laboratories are able to perform these analyses. Data on the levels of specific forms of arsenic, however, are becoming increasingly important because, according to the Agency for Toxic Substances and Disease Registry, the two forms have different toxicities, with inorganic arsenic being considered the more toxic form. Further, foods may have different proportions of organic and inorganic arsenic as well as different levels of total arsenic. According to the European Food Safety Authority, plants generally contain low levels of both total and inorganic arsenic, but rice may contain significant levels of total arsenic and inorganic arsenic. Levels of arsenic in groundwater, a major source of drinking water in many parts of the world, may be high in some areas; essentially all the arsenic in drinking water is inorganic arsenic.

The form and level of arsenic in rice may vary depending on the geographic region where rice is grown, conditions under which rice is grown, variety of the rice, and rice milling practices. In the United States—where, according to USDA, approximately 80 percent of the rice consumed domestically is grown—rice is primarily grown in six states: Arkansas, California, Louisiana, Mississippi, Missouri, and Texas. In 2016, the latest year for which USDA data were available, about 47 percent of the rice grown in the United States was grown in Arkansas, and about 21 percent was grown in California. The amount of arsenic rice absorbs varies by geographic region because of differing levels of arsenic in the soil and other factors. Arsenic levels in the soil vary both naturally and as a result of human activity. Natural processes that contribute to arsenic levels in the soil may include bedrock weathering, because arsenic is present in many rock-forming minerals. Human activities that contribute to arsenic levels in the soil may include the use of arsenic-based pesticides and animal drugs, the mining and smelting of metal, and coal combustion. Figure 1 shows the results of a 2013 U.S. Geological Survey sampling of soils to measure the levels of arsenic in the contiguous United States. In addition, the figure shows the outlines of rice-growing counties based on 2016 data from USDA.

Compared to other plants, rice absorbs more arsenic from the environment, in part because of the physiology of rice. For example, rice may readily absorb certain compounds of arsenic because, among other reasons, these compounds are similar in size to compounds containing silicon, an essential nutrient for rice. The conditions under which rice is grown may also cause it to absorb more arsenic than other plants. For instance, rice is often grown in flooded fields to control pests, grasses, and diseases, among other reasons. However, flooded conditions may promote the formation of arsenic compounds that may be easily absorbed by the rice plant. Even under the same growing conditions, some varieties of rice tend to have higher levels of arsenic in their grain, on average, than others, owing to a need for longer growing periods, among other factors.
In addition, the concentrations of the two forms of arsenic may vary within the rice grain. While organic arsenic may be distributed throughout the rice grain, most of the inorganic arsenic is found in the bran layer. As seen in figure 2, the process of milling rice removes the bran layer; thus, levels of inorganic arsenic in white, or milled, rice may be lower than those in brown, or whole grain, rice.

Federal Agencies' Responsibilities for Rice

A number of federal agencies are responsible for ensuring the safety and quality of rice and for assessing the human health effects of ingestion of arsenic in rice. Within HHS, FDA has overall responsibility for implementing provisions of the Federal Food, Drug, and Cosmetic Act, as amended. Specifically, FDA is responsible for determining whether food, including rice, is deemed to be adulterated (i.e., whether it bears or contains any poisonous or deleterious substance that may render it injurious to health). Under its regulations, FDA may issue guidance to establish a level of a contaminant that a food should not exceed. FDA would consider case-by-case whether a food that contains the contaminant is adulterated. For example, in 2013, FDA issued draft guidance for arsenic in apple juice, on the basis of its risk assessment that estimated the long-term cancer risk posed by inorganic arsenic. According to FDA, its Center for Food Safety and Applied Nutrition is responsible for regulatory and research programs that address the health risks associated with foodborne contaminants and is aided in this role by the Office of Regulatory Affairs, which is responsible for field-based activities such as inspections, sampling, and testing of regulated products. The Center for Food Safety and Applied Nutrition also conducts industry outreach and educates consumers, among other things.

Other agencies within HHS may also conduct research, collect data, and provide information on the health effects of arsenic. For example, the National Institutes of Health (NIH) sponsor research on the health effects of ingestion of arsenic. The Centers for Disease Control and Prevention (CDC) administer the National Health and Nutrition Examination Survey, which, among other things, collects data about diet and exposure to certain substances, such as arsenic. Under the Superfund Amendments and Reauthorization Act of 1986, the Agency for Toxic Substances and Disease Registry prepares toxicological profiles for certain hazardous substances, including arsenic.

Agencies within USDA conduct and sponsor research to advance food safety and to help farmers market rice and manage the risk of growing it. Within USDA, ARS and NIFA conduct and sponsor research to, among other things, maintain an adequate, nutritious, and safe supply of food to meet human nutritional needs and requirements. NIFA also distributes capacity grants that support research and extension programs at land-grant universities, which provide science-based information to farmers. The Agricultural Marketing Act of 1946 authorizes the Federal Grain Inspection Service (FGIS) to establish quality standards, including standards for rice. FGIS also offers inspection services for rice farmers and processors upon request. The Risk Management Agency manages the Federal Crop Insurance Corporation, which offers crop insurance to farmers for over 100 different crops, including rice.
For the 2018 crop year, the rice crop insurance provisions generally require that the rice be flood-irrigated (i.e., intentionally covered with water at a uniform and shallow depth throughout the growing season).

Other agencies play a role in managing the risk of arsenic. EPA regulates the presence of certain substances, such as arsenic, in drinking water under the Safe Drinking Water Act and conducts toxicological assessments. In 2001, EPA issued a rule limiting the level of arsenic in drinking water to 10 parts per billion (ppb) to protect consumers from the health effects of long-term exposure. Under its Integrated Risk Information System program, EPA conducts assessments that provide toxicity values—such as for increased cancer risk due to lifetime ingestion of a specified quantity of a substance. In accordance with congressional direction, EPA submitted a plan for developing a draft assessment and preliminary assessment materials for inorganic arsenic to NRC for review. In 2013, NRC released an interim report, which provided guidance to EPA and included a preliminary survey of the scientific literature. In addition, in accordance with Executive Order 13272, the Small Business Administration's Office of Advocacy helps agencies assess the potential impacts of draft rules on small businesses—which could include members of the rice industry—small governmental jurisdictions, and small organizations.

Entities outside of the federal government have recently proposed or established limits or guidance for arsenic in rice. For example, in 2017, the Codex Alimentarius, an international standard-setting body, published a code of practice that provides guidance for preventing and reducing arsenic contamination in rice, as well as communicating the risk to stakeholders. In 2014 and 2016, the Codex Alimentarius established standards for inorganic arsenic of 200 ppb for white rice and 350 ppb for brown rice. In 2015, the European Commission issued a regulation limiting inorganic arsenic in various rice-based foods, including limits of 200 ppb in white rice, 250 ppb in brown rice, and 100 ppb in rice destined for food for infants and young children.

Enterprise Risk Management

Enterprise risk management allows agencies to assess threats and opportunities that could affect the achievement of their goals. In a 2016 report, we updated our 2005 risk management framework to (1) reflect changes to OMB's Circular A-123, which requires agencies to implement enterprise risk management; (2) incorporate recent federal experience; and (3) identify essential elements of federal enterprise risk management. Beyond traditional internal controls, enterprise risk management promotes risk management by considering its effect across the entire organization and how it may interact with other identified risks. It also addresses other topics, such as setting strategy, governance, communicating with stakeholders, and measuring performance, and its principles apply at all levels of the organization and across all functions—such as those related to managing the risk of arsenic in rice. The six essential elements of enterprise risk management that we identified in December 2016 are as follows:

Align risk management process with goals and objectives. Ensure the process maximizes the achievement of agency mission and results.

Identify risks. Assemble a comprehensive list of risks, both threats and opportunities, that could affect the agency's ability to achieve its goals and objectives.

Assess risks.
Examine risks, considering both the likelihood of the risk and the impact of the risk to help prioritize risk response.

Respond to the risks. Select risk treatment response (based on risk appetite), including acceptance, avoidance, reduction, sharing, or transfer.

Monitor risks. Monitor how risks are changing and whether responses are successful.

Communicate and report on risks. Communicate risks with stakeholders and report on the status of addressing the risks.

NRC and Recent Key Scientific Reviews Reported Evidence of Associations between Ingestion of Arsenic and Adverse Human Health Effects

NRC, in its 2013 report, and recent key scientific reviews reported evidence of associations between long-term ingestion of arsenic and adverse human health effects. NRC identified stronger evidence of these associations at higher arsenic levels—defined by NRC as 100 ppb or higher in drinking water—than at lower levels, which are more common in the United States. NRC reported greater uncertainty regarding the associations with some health effects at lower levels of arsenic and noted that research on the health effects of ingestion of lower levels of arsenic is ongoing. Many of the studies on which NRC based its conclusions were focused on the ingestion of arsenic from drinking water, but other studies were based on arsenic from all sources, including dietary sources such as rice. Further, NRC reported that evidence from CDC dietary surveys and related academic studies suggests that food, particularly rice, may be a significant source of inorganic arsenic, especially when arsenic levels in drinking water are lower; however, consumption of rice and levels of arsenic in rice vary widely, making it difficult to estimate arsenic intake from rice.

NRC reported strong evidence of causal associations—that is, a potential cause and effect—between the long-term ingestion of arsenic from water or dietary sources, such as rice, and the following five health effects:

Skin diseases:

Skin lesions. Skin lesions due to arsenic ingestion predispose a person to some skin cancers and may indicate increased susceptibility to other cancer and noncancer diseases. Skin lesions have a well-established dose-response relationship with arsenic in drinking water.

Skin cancer. Arsenic is an established skin carcinogen, according to NRC. NRC stated that almost all published studies found evidence of an association between arsenic ingestion and nonmelanoma skin cancers.

Lung cancer. Arsenic from drinking water is an established lung carcinogen in humans, according to NRC. NRC cited studies conducted in Argentina, Chile, Japan, Taiwan, and the United States that reported associations between high levels of arsenic ingestion and lung cancer. NRC reviewed several studies that examined ingestion of lower levels of arsenic, some of which found evidence of an association, while others did not.

Cardiovascular disease. NRC stated that many studies found a causal association between the ingestion of arsenic and cardiovascular disease and mortality. Studies suggest that the ingestion of lower levels of arsenic in drinking water and possibly in food is associated with cardiovascular disease, but additional evidence is needed to fully understand the relationship.

Bladder cancer. Arsenic is an established bladder carcinogen in humans, according to NRC.
NRC cited a 2012 assessment by the International Agency for Research on Cancer that indicated higher mortality from bladder cancer in populations that are exposed to high levels of arsenic compared to those that are not, based on studies in Argentina, Chile, and Taiwan.

NRC reported that there was moderate evidence of association between the long-term ingestion of various levels of arsenic from water or dietary sources, such as rice, and adverse health effects, although some studies found evidence of an association and others did not. Adverse health effects include, for example, neurodevelopmental toxicity and pregnancy outcomes related to infant illness, disease, or injury. NRC also reported that there was limited evidence of an association between the long-term ingestion of arsenic from water and dietary sources and adverse health effects, such as liver and pancreatic cancer and renal disease.

We analyzed 14 scientific reviews published since NRC's 2013 report, from January 2015 through early June 2017, that generally supported NRC's conclusions that long-term ingestion of arsenic is associated with the above-mentioned health effects. Two reviews reporting additional evidence related to cardiovascular disease suggested that there may be a threshold—an arsenic level below which there is no significant occurrence of cardiovascular disease. However, one of these reviews noted that the number of studies it examined was small, among other limitations. Regarding lung cancer, another recent review proposed a dose-response relationship, which NRC identified as a gap in the understanding of this adverse health effect. However, this review noted that the studies it included did not distinguish between the risk of lung cancer in smokers and non-smokers, which NRC reported may be a key confounding factor. The review also cited other limitations, including the small number of studies it used to model this relationship. See appendix II for additional information about the reviews we identified.

FDA and USDA Have Taken Actions to Manage the Risk to Human Health from Arsenic in Rice

FDA and USDA have taken actions to manage the risk to human health from arsenic in rice, including assessing the type and prevalence of health effects that may result from long-term ingestion. These efforts were generally consistent with the six essential elements for managing risk, which we have found could help agencies assess threats that could affect the achievement of their goals. Specifically, FDA has taken actions that were consistent with five of the six essential elements: (1) aligning risk management process with goals and objectives, (2) identifying risks, (3) assessing risks, (4) responding to the risks, and (5) monitoring risks. However, FDA has not fully taken action on the sixth element of communicating and reporting on risks. FDA issued a risk assessment in 2016 for public comment and a draft guidance limiting the levels of arsenic in infant rice cereal, but it has not updated or finalized these key documents. USDA has taken actions consistent with five of the six essential elements but has not taken actions to monitor the risk because of its more limited, nonregulatory role.

Aligning Risk Management Process with Goals and Objectives

FDA and USDA have aligned their actions to manage the risk to human health from arsenic in rice to goals in their strategic plans.
According to FDA officials, FDA's actions align with three of the six goals identified in the 2015–2018 research strategic plan for FDA's Center for Food Safety and Applied Nutrition, including advancing diet and health research that contributes to the development of science-based policies and communication strategies. Regarding USDA's actions, ARS officials stated that their research on arsenic in rice aligned with four goals in ARS's fiscal year 2012–2017 strategic plan, such as protecting food from pathogens, toxins, and chemical contamination during production, processing, and preparation. NIFA officials stated that the research they sponsored on arsenic in rice aligned with one of the sub-goals in NIFA's fiscal year 2014–2018 strategic plan: to reduce the incidence of foodborne illness and provide a safer food supply. FGIS officials provided documentation showing that their actions aligned with one of the goals in their fiscal year 2016–2020 strategic plan: provide the environment for fair and competitive market practices between agricultural producers and buyers. FDA's and USDA's actions were consistent with the essential element of aligning risk management actions to their strategic plans.

Identifying Risks

Total Diet Study

The Food and Drug Administration's (FDA) Total Diet Study, which began testing for arsenic in 1991, is an ongoing program that monitors the levels of about 800 contaminants and nutrients in the average U.S. diet. To conduct the study, FDA buys, prepares, and analyzes about 280 kinds of foods and beverages from representative areas of the country and estimates the average amounts of contaminants and nutrients the entire U.S. population, some subpopulations, and each person consumes annually. The sampling plan calls for purchasing each type of food four times a year, each time in a different region. Within each region, FDA purchases each food product from three different stores and combines them into a composite sample, for a total of four estimates each year. FDA makes results of the study, from 1991 through 2015, available to the public in electronic form on its website.

FDA and USDA have taken actions to identify the risk of arsenic in rice. FDA has identified the risk of arsenic in rice through the Total Diet Study—an annual testing of contaminants and nutrients in food. As part of conducting the Total Diet Study, FDA collects samples of certain foods, including rice, and tests them for a variety of toxic chemicals, including total arsenic. From 2014 through 2015, the most recent years for which data are available, FDA tested six different categories of rice-based foods for arsenic. FDA officials told us that they identified arsenic in rice as a priority based, in part, on the results of the Total Diet Study, which indicated that rice had higher levels of arsenic compared to other foods. Some university researchers we interviewed stated that the Total Diet Study would be more helpful if it measured inorganic arsenic or had a more robust methodology. For example, one university researcher noted that the number of samples in the Total Diet Study is not big enough to be nationally representative. FDA officials told us that starting with the fiscal year 2018 Total Diet Study, they plan to begin testing rice-based foods for inorganic arsenic, increase the number of samples they collect, and make other improvements to the sampling methodology.

USDA officials have taken actions to identify the risk of arsenic in rice through a variety of research programs.
ARS officials told us that they have conducted research on arsenic in rice under four national programs: (1) plant genetic resources, genomics, and genetic improvement; (2) water availability and watershed management; (3) human nutrition; and (4) food safety. For example, ARS researchers are examining whether changes in soil chemistry as a result of organic or conventional management practices affect arsenic levels in rice. NIFA officials stated that NIFA sponsors research on arsenic in rice through formula-based grants to universities and through competitive grants, such as those offered through the Agriculture and Food Research Initiative. To identify what research to undertake, ARS officials told us that they typically meet with industry to identify its highest priorities. For example, ARS officials from the Delta Water Management Research Unit in Arkansas stated that they started researching arsenic in rice after participating in a joint ARS-USA Rice Federation conference in 2012. FGIS officials told us that contaminants such as arsenic may affect the quality of a grain, such as rice, and hence its value. They stated that they work closely with the grain industry to develop new standards and tests to meet industry's needs.

Assessing Risks

FDA and USDA have taken actions to assess the risk of arsenic in rice. In 2012, FDA published its current method to detect inorganic arsenic in rice. FDA officials told us that this method, though useful, is time-consuming and expensive, and the agency continues to develop other methods to reduce cost and time. For example, in 2017, FDA developed another method to detect inorganic arsenic in wine and rice that takes less time than its current method. FDA officials told us they have an ongoing research project on a field-deployable detection method based on the Arsenator, a commercially available digital test kit for detecting arsenic in drinking water. In addition, FDA has been using laser ablation—the process of removing material from a solid using a laser beam so that it can be measured—as a way to study arsenic distribution in rice.

From 2011 through 2014, FDA conducted targeted sampling of more than 1,400 rice-based foods—including rice, rice beverages, cereals, and snacks—for inorganic arsenic. This targeted sampling and a literature review of articles published before February 2015 informed a risk assessment of arsenic in rice that FDA issued for public comment in April 2016. Specifically, the risk assessment used the results of the targeted sampling to identify levels of inorganic arsenic in rice and examined available scientific information to provide quantitative estimates of lung and bladder cancer risk—that is, the number of expected lung and bladder cancer cases per million people that may be attributable to long-term ingestion of inorganic arsenic in rice—and a qualitative assessment of other adverse health effects. The risk assessment also analyzed alternative approaches to reducing the risk of arsenic in rice, such as instituting limits on the allowable level of arsenic in various rice-based foods, limiting the amount and frequency of consumption of rice, and cooking practices. FDA's actions have helped assess the risk of arsenic in rice, although some stakeholders we interviewed have identified limitations to FDA's actions.
For example, one rice producer noted that because FDA's current detection method is time-consuming and expensive, it is not widely used—companies only use it when tests for total arsenic reveal that the levels exceed the limit for inorganic arsenic that their customers request. Some stakeholders noted that the evidence FDA used to assess the risk of the ingestion of low levels of arsenic, which may be more relevant for rice consumption, is more uncertain.

USDA agencies have also taken actions to assess the risk by conducting research to develop faster and less expensive methods to detect inorganic arsenic in rice. In 2016, ARS developed a method using hydride generation, which uses an acid to convert the inorganic arsenic into a gas that can be detected by an instrument. ARS officials stated that they have conducted research on the hydride generation method for more than 5 years and were able to further refine the method with funding from the Rice Foundation. Stakeholders from the rice industry and a university researcher we interviewed noted that, while the hydride generation method is faster and cheaper than FDA's current detection method, it is still too time-consuming and expensive for commercial purposes. For example, rice mills could not keep pace with trucks lining up to unload rice if they used the hydride generation method. However, ARS officials stated that researchers may use it if they need to analyze thousands of samples and are willing to trade off some accuracy for speed and cost. In addition, FGIS conducted some of its own development work on the Arsenator. Agency officials said that they began research on the Arsenator to help provide a rapid and inexpensive method of detecting inorganic arsenic at FGIS official testing locations, which could include rice mills, but have suspended their efforts because representatives of the rice industry have told them that these tests are not necessary.

Responding to the Risks

FDA and USDA have taken actions to respond to the risk of arsenic in rice. In 2016, FDA issued draft guidance, which proposed an action level, recommending that the rice industry not exceed a level of 100 ppb inorganic arsenic in infant rice cereal, and FDA has conducted research on cooking methods that may reduce arsenic. In its draft guidance, FDA stated that it used its risk assessment, among other considerations, to identify the level of inorganic arsenic in infant rice cereal. FDA further noted that it selected 100 ppb because of the potential for human health risks associated with inorganic arsenic and because such a level is achievable with the use of current good manufacturing practices—specifically, selecting sources of rice or rice-derived ingredients with lower inorganic arsenic levels. FDA officials told us that they focused on infant rice cereal because infants are at a higher risk of experiencing some of the health effects of ingesting inorganic arsenic, such as neurodevelopmental effects, and because the diet of infants is less varied than that of adults. FDA officials noted that the proposed guidance sets a limit for infant rice cereal that is generally consistent with the limit set by the European Commission and that other types of rice sold in the United States also generally meet the Codex Alimentarius standards.
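The 100 ppb draft action level, together with the Codex Alimentarius and European Commission limits cited earlier in this report, can be read as a small lookup table. The following is a minimal sketch, not an FDA tool, of how a measured inorganic arsenic level might be screened against those limits; the margin-of-error allowance is an illustrative assumption rather than FDA policy.

```python
# Minimal sketch, not an FDA tool. Limits (in ppb of inorganic arsenic)
# are those cited in this report; the margin-of-error handling is an
# illustrative assumption.
INORGANIC_ARSENIC_LIMITS_PPB = {
    ("fda_draft_guidance", "infant rice cereal"): 100,
    ("codex_alimentarius", "white rice"): 200,
    ("codex_alimentarius", "brown rice"): 350,
    ("european_commission", "white rice"): 200,
    ("european_commission", "brown rice"): 250,
    ("european_commission", "infant/young child rice food"): 100,
}

def exceeds_limit(measured_ppb: float, standard: str, product: str,
                  method_margin_ppb: float = 0.0) -> bool:
    """Return True if the measurement, less the detection method's
    margin of error, is above the applicable limit."""
    limit = INORGANIC_ARSENIC_LIMITS_PPB[(standard, product)]
    return measured_ppb - method_margin_ppb > limit

# A result near the 100 ppb draft action level that falls within the
# method's margin of error would not be flagged.
print(exceeds_limit(104, "fda_draft_guidance", "infant rice cereal",
                    method_margin_ppb=10))  # False
print(exceeds_limit(130, "fda_draft_guidance", "infant rice cereal",
                    method_margin_ppb=10))  # True
```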
University researchers and a group representing consumers we interviewed stated that FDA's draft guidance is a good first step but that FDA should establish limits for arsenic in other rice products, such as rice crackers and other foods that children eat. FDA officials noted that the next most susceptible group would likely be toddlers and young children, but because their diet is more diverse than that of infants, rice-based foods make up a smaller portion of their diet. FDA requested public comments on certain aspects of the draft guidance, such as its feasibility, and noted that when it is finalized, it will represent FDA's current thinking on this topic. The public comments were due to FDA in July 2016, although FDA noted that the public may comment on its guidance at any time. University researchers and stakeholders from the rice industry we interviewed stated that FDA's draft guidance has become a de facto industry standard for infant rice cereal.

In 2016, FDA also published research on the effect that cooking methods, such as cooking rice in excess water, may have on reducing the level of arsenic in rice. FDA officials told us that they provided advice to consumers on cooking methods that could reduce arsenic in rice on the FDA website but said FDA will not direct manufacturers to change the cooking instructions for rice because the alternative methods may reduce the nutritional value of the rice.

Within USDA, ARS and NIFA have sponsored published and ongoing research that can help respond to the risk, such as research on ways to reduce the uptake of arsenic by rice through new rice varieties, water management practices, and soil additives, as well as research on the genetic mechanisms underlying the uptake and transport of arsenic in the rice plant. For example, ARS has been conducting research on rice varieties that can improve yield and grain quality, including lower levels of arsenic, at the Dale Bumpers National Rice Research Center in Arkansas for more than 30 years. In 2016, university and ARS researchers published a study showing that growing rice using a water management practice called alternate wetting and drying could decrease the levels of arsenic. Under this practice of growing rice, shown in figure 3 below, fields are periodically drained and re-flooded during the growing season. ARS officials stated that the alternate wetting and drying water management practice has been adopted to a limited extent in Arkansas but pointed out that other benefits, such as reducing water use, may have been more influential in its adoption than the lowering of arsenic levels. They noted that there are a number of challenges that may preclude widespread use, including inadequate water-pumping capacity and the lack of crop insurance coverage for the practice. In addition, in 2015, university researchers and an ARS researcher, with a grant from NIFA, published a study on the effects of adding iron oxide to the soil on the levels of arsenic in rice; they found that iron oxide resulted in a significant reduction of arsenic for the two varieties of rice that the study examined.

Monitoring Risks

FDA, which is responsible for ensuring the safety of rice and rice-based foods, has taken actions to monitor the risk of arsenic in rice. USDA has not done so because of its more limited, nonregulatory role.
FDA has a compliance program designed to monitor over 1,400 products annually, including foods that are most likely to contribute to the dietary intake of toxic elements, among other contaminants. In fiscal years 2015 and 2016, FDA monitored the risk of arsenic by assessing the levels in rice and rice-based foods under this compliance program, and FDA officials told us that they plan to continue to do so in fiscal years 2017 and 2018. FDA officials told us that they generally test the rice for total arsenic but have recently analyzed some samples for inorganic arsenic based on factors such as the level of total arsenic found. FDA considers whether to conduct follow-up actions, including enforcement actions, on a case-by-case basis. As a result of its monitoring in 2016 and 2017, FDA considered, but did not take, two enforcement actions for arsenic in infant rice cereal. FDA officials stated that the inorganic arsenic level in one case was close to the 100 ppb limit and within the margin of error of the detection method, and in the second case, FDA determined during its follow-up to the initial sample that the manufacturer had destroyed the remaining product.

USDA agencies have not monitored arsenic in rice. The Food Safety and Inspection Service is USDA's regulatory agency for food safety, but officials told us they have not taken actions in this area because rice is not under the agency's jurisdiction. ARS maintains a food composition database, but it does not monitor rice for contaminants such as arsenic because, according to ARS officials, that is not the database's purpose. FGIS officials stated that they do not have an arsenic testing program for rice at this time. They told us that they considered establishing a testing program for rice intended for export at the request of the rice industry. However, FGIS officials stated that they suspended their efforts when industry determined that it did not need a testing program.

Communicating and Reporting on Risks

FDA and USDA have taken actions to communicate and report on the risk of arsenic in rice to the public. FDA has issued a risk assessment and draft guidance on arsenic in infant rice cereal, but it has not updated or finalized these documents. FDA's 2016 risk assessment report provides information about the risk from long-term ingestion of arsenic in rice, and its draft guidance on arsenic in infant rice cereal includes a link to an FDA website with information for consumers, including pregnant women and parents. FDA has requested comments and received 22 public comments from 17 individuals and organizations on both documents. The comments have addressed a range of issues, including the methodology FDA used in its risk assessment; the 100 ppb limit and scope of the agency's draft guidance; and the effectiveness of the agency's communication to the public. However, FDA has not publicly issued versions of the guidance or the risk assessment that address these comments. In our prior work, we have found that sharing risk information and incorporating feedback from internal and external stakeholders can help organizations identify and better manage risks, as well as increase transparency and accountability to Congress and taxpayers. In the risk assessment, FDA stated that it will provide an update after considering public comments and any newly available information.
For example, FDA officials told us that they plan to consider newly available information, such as any updates to EPA's Integrated Risk Information System assessment for inorganic arsenic, and may update the risk assessment as a result. With regard to public comments, FDA officials told us that they do not intend to make any changes to the approach or findings of the risk assessment and that they are still considering whether to make changes to the draft guidance as a result of public comments. FDA officials stated that they are still reviewing comments and that, before publication, the guidance would have to undergo interagency review. FDA officials also stated that the agency is not required to provide a response to comments in the final guidance. Further, FDA officials stated that the agency does not need to finalize the guidance in order to sample foods for a contaminant or to take enforcement action when contamination may pose a health hazard.

Stakeholders we interviewed stated that updating the risk assessment and finalizing the draft guidance would improve FDA's communication of the risk. For example, some stakeholders we interviewed told us that the information used in the risk assessment—both regarding the health effects of arsenic and the levels of arsenic in rice—may need to be updated to incorporate the results of more recent research. Further, two stakeholders we interviewed—one representing the rice industry and the other representing consumers—noted that it is not clear to them what actions FDA can take based on the draft guidance. However, FDA officials could not give us a timeline for when they plan to update the risk assessment or finalize the guidance. By developing a timeline for updating the risk assessment on arsenic in rice to incorporate any newly available information, FDA could help clarify when it will take action. Developing a timeline for finalizing the draft guidance on arsenic in infant rice cereal could also help FDA improve the transparency of its decisions—such as by clarifying the effectiveness of the draft guidance.

USDA has taken actions that can help communicate and report on the risk of arsenic in rice. ARS officials told us that they have communicated the results of their research on arsenic in rice in a number of ways, such as through presentations at conferences and through outreach to farmers, including in cooperation with extension programs at universities. For example, USDA researchers demonstrated automated irrigation systems that can be used for the alternate wetting and drying water management practice. In 2017, ARS researchers contributed to the development of a bulletin, in conjunction with University of Arkansas researchers, that contains recommended practices about irrigation methods that can reduce the levels of arsenic in rice. ARS officials told us that their communication efforts could help increase farmers' interest in and adoption of the methods they have researched. They also stated that they work with extension programs because these programs have good access to farmers.

FDA Coordinated Several Risk Management Actions with USDA and Other Federal Agencies to Varying Extents

FDA coordinated with USDA and other federal agencies on the actions to manage the risk of arsenic in rice for which coordination would be expected, to varying extents.
FDA coordinated with USDA and several other federal agencies, including CDC, EPA, and NIH, on the development of the risk assessment and draft guidance on arsenic in infant rice cereal, but USDA raised concerns about the extent of the coordination. FDA and USDA coordinated to a limited extent to develop faster and less expensive methods to detect arsenic in rice.

FDA Coordinated Its Risk Assessment and Draft Guidance with Several Federal Agencies, but USDA Raised Concerns about the Extent of Coordination

FDA coordinated with several federal agencies on the development of the risk assessment and draft guidance on arsenic in infant rice cereal. According to FDA officials, in developing the risk assessment, FDA initially coordinated with EPA on two noncancer health effects—adverse pregnancy outcomes and developmental neurotoxicology effects in young children—to ensure consistency with the work EPA was doing to update its Integrated Risk Information System assessment for arsenic. When FDA completed the draft of the noncancer section of its risk assessment, the agency provided it to EPA and NIH's National Institute of Environmental Health Sciences for review. FDA incorporated comments from EPA and NIH in the risk assessment document, which EPA, CDC, and NIH subsequently reviewed. From December 2014 through June 2015, the risk assessment and draft guidance underwent HHS's clearance process. Through this process, CDC and NIH, along with HHS's Assistant Secretary for Legislation and its Office of the Assistant Secretary for Planning and Evaluation, reviewed the documents, and FDA revised the risk assessment and draft guidance to address their comments. CDC, EPA, and NIH officials told us that they were generally satisfied with FDA's coordination efforts and the extent to which FDA addressed their comments. For example, CDC officials said that the agency provided FDA several rounds of comments, and by the end of the process, all of its comments had been considered.

OMB also chose to review FDA's risk assessment and draft guidance on arsenic in infant rice cereal through its interagency review process. According to FDA officials, as part of this process, which occurred from May 2015 through March 2016, FDA coordinated with EPA again, as well as with OMB's Office of Information and Regulatory Affairs and the U.S. Trade Representative within the Executive Office of the President, the Small Business Administration's Office of Advocacy, and USDA. Officials from the Small Business Administration's Office of Advocacy said that they were generally satisfied with the review process and characterized the outcome as typical in that some, but not all, of their suggested changes were accepted. However, USDA officials raised concerns about FDA involving them too late in the coordination process and about the extent to which FDA addressed their comments. From May 2015 through July 2015, USDA conducted its first review of these documents and provided FDA with comments. USDA had offered to provide FDA with feedback on versions of the risk assessment on several occasions earlier in the process, but FDA did not accept USDA's offers, according to a USDA official. As discussed below, FDA chose to engage USDA later in the process. In their comments, USDA officials expressed concerns regarding uncertainties and data limitations in the risk assessment and draft guidance on arsenic in infant rice cereal.
USDA also raised questions about whether sufficient data on the link to adverse health effects existed to warrant the draft guidance. Furthermore, USDA stated that because the documents focus solely on rice, instead of addressing risks to the diet as a whole, FDA needs to share clear, consistent, and understandable messages with the public to alleviate fear and misunderstanding related to the risk posed by arsenic in rice. According to USDA officials, FDA did not adequately address their comments in the revised documents, including FDA's communication strategy. However, according to a senior USDA official, in its response to USDA's comments, FDA maintained that, overall, the comments it received from its external peer reviewers—five university researchers—were supportive of the risk assessment and that, based on the peer review, FDA did not change its findings or conclusions. According to this USDA official, FDA also noted that there are insufficient data to accurately quantify the risk from arsenic in rice to pregnant women or children but that it decided moving forward with the draft guidance on arsenic in infant rice cereal would be prudent.

FDA and USDA did not agree on USDA's role in developing the risk assessment and the point at which they should begin coordinating on the risk assessment. FDA officials told us that FDA generally considers agencies' expertise in determining whether and when to include them in the development of risk assessments and related documents. FDA did not see USDA as having a role in developing the risk assessment; rather, FDA officials told us that they reached out to USDA after the risk assessment was drafted, when the agency began to consider how to reduce the levels of arsenic in rice during the growing process and the feasibility of industry meeting its draft guidance on arsenic in infant rice cereal. The officials said that FDA met with USDA officials on numerous occasions and invited them to attend additional meetings with various stakeholders. However, according to a senior USDA official, USDA has relevant scientific and technical expertise that should have played a role in developing the risk assessment. According to this official, if FDA had involved USDA earlier in the development process, FDA may have addressed USDA's comments to a greater extent.

We have shown in prior work that agencies can facilitate their collaborative efforts by developing a mechanism for interagency coordination, and a key issue to consider when developing such a mechanism is whether participating agencies have clarified their roles and responsibilities. FDA officials stated that they were not aware of the existence of any mechanism for coordinating risk assessments of contaminants in food, including arsenic in rice, which, among other things, could clarify the roles and responsibilities of participating agencies. FDA officials told us that they followed a 2002 report listing guiding principles when developing the risk assessment, but this report, which broadly applies to all foodborne contaminants, did not specify the process FDA should follow to coordinate its risk assessment. However, our review of this 2002 report shows that it recommends that FDA encourage active participation and communication with other agencies and stakeholders, and collaboration when appropriate, as part of its risk assessment development process. Although FDA did reach out to USDA, those meetings were after the completion of the risk assessment.
By developing a mechanism for working with relevant agencies to identify their roles and responsibilities for coordinating risk assessments of contaminants in food, including arsenic in rice, FDA could have better assurance that it fully utilizes the expertise of all participating federal agencies.

FDA and USDA Coordinated on Developing Methods to Detect Arsenic in Rice to a Limited Extent

FDA and USDA's FGIS and ARS coordinated on the development of detection methods to a limited extent. Officials from FDA and FGIS told us that they began to coordinate in March 2016, when they discovered, in the course of ongoing coordination in another area, that they were each working independently on developing a faster and less expensive detection method using the Arsenator. According to FDA officials, FDA became aware of FGIS's interest in developing methods to detect arsenic in rice during a Codex Alimentarius meeting that researchers from both agencies attended. Therefore, the avoidance of potentially duplicative effort resulted from an informal discussion during this meeting. With regard to ARS, FDA officials told us that FDA did not coordinate with ARS on the development of the hydride generation method but that FDA used its own validated method to provide ARS with actual arsenic concentrations of samples to help ARS test its method. According to ARS officials, ARS did not coordinate with FDA or FGIS when developing its hydride generation method. According to an FDA official, FDA did not coordinate the development of its current method to detect inorganic arsenic in rice, the faster method for wine and rice, or the laser ablation method with FGIS, ARS, or any other federal agency.

We have shown in prior work that many of the meaningful results that the federal government seeks to achieve, such as those related to protecting food and agriculture, require the coordinated efforts of more than one federal agency. ARS officials told us that from their perspective, there was no reason to coordinate because ARS, FDA, and FGIS are trying to meet different needs with their research. Further, ARS officials told us that coordinating with FDA would blur the distinction between ARS's scientific role and FDA's regulatory role and may imply that ARS has regulatory responsibilities or expertise. However, all three agencies share a crosscutting strategic interest in developing methods for detecting foodborne contaminants, including arsenic in rice. The strategic plans for ARS and FDA's Center for Food Safety and Applied Nutrition include outcomes and strategies related to the development of detection methods for chemical contaminants or residues. Further, FGIS's strategic plan includes a strategy of developing innovative tests to measure grain quality, and according to FGIS officials, they have considered testing for inorganic arsenic as part of measuring grain quality. According to FGIS officials, once they began coordinating with FDA on the Arsenator, they saw value in coordinating and did so for about 9 months before suspending work on the detection method. We have noted in prior work that interagency mechanisms to coordinate programs that address crosscutting issues may reduce potentially duplicative efforts. However, neither FDA nor USDA has such a mechanism to coordinate the development of methods to detect arsenic in rice or other methods to detect contaminants in food.
FDA officials told us that the agency works with USDA research agencies on food safety in an informal manner, and USDA officials told us that they are not aware of any mechanism for coordination and that coordination with FDA generally occurs at the secretarial level because it cuts across a number of USDA agencies. Recently, we also found another example in which FDA and USDA did not coordinate in developing detection methods for other contaminants in foods. FDA and another USDA agency—the Food Safety and Inspection Service—did not coordinate in developing detection methods for drug residues in seafood. By developing a mechanism to coordinate their crosscutting efforts to develop faster and less expensive methods for detecting contaminants in food, including arsenic in rice, FDA and USDA could enhance their ability to use their resources efficiently and avoid engaging in unnecessary and potentially duplicative efforts. Conclusions NRC and key recent scientific reviews have indicated that long-term ingestion of arsenic may pose a significant risk to human health, and FDA and USDA have taken various actions to manage the risk to human health of arsenic in rice. Their actions are generally consistent with the essential elements we have identified for managing risk, which can help agencies assess threats that could affect the achievement of their goals. For example, both agencies have conducted research on arsenic detection methods, and FDA has issued for public comment a risk assessment on the human health effects from the long-term ingestion of arsenic in rice. In addition, according to FDA officials, because infants are at a higher risk of experiencing some of the health effects of ingesting arsenic, such as neurodevelopmental effects, and the diets of infants are less varied than those of adults, FDA issued a draft guidance regarding arsenic in infant rice cereal. However, FDA officials have not provided a specific timeline for updating the risk assessment in response to newly available information or for finalizing the draft guidance for infant rice cereal in response to public comments. Both of these documents could help communicate to the public the risk of arsenic in rice, and updating or finalizing them could also help FDA demonstrate its commitment to increasing transparency and accountability by addressing public comments and clarifying its enforcement authority, among other things. FDA coordinated the development and review of these key documents with several federal agencies, and these agencies were generally satisfied with FDA's coordination efforts. However, USDA raised concerns about being involved too late in the process and the extent to which its comments were addressed. By developing a mechanism for working with relevant agencies to identify their roles and responsibilities for coordinating risk assessments of contaminants in food, including arsenic in rice, FDA could better ensure that it fully utilizes their expertise. Furthermore, FDA and USDA coordinated on the development of arsenic detection methods to a limited extent. Developing a mechanism to coordinate their crosscutting efforts to develop methods to detect contaminants in food, including arsenic in rice, could help FDA and USDA manage their resources and avoid engaging in unnecessary and potentially duplicative efforts. Recommendations for Executive Action We are making a total of five recommendations, including four to FDA and one to USDA.
Specifically:

The Commissioner of FDA should develop a timeline for updating the risk assessment on arsenic in rice. (Recommendation 1)

The Commissioner of FDA should develop a timeline for finalizing the draft guidance on arsenic in infant rice cereal. (Recommendation 2)

The Commissioner of FDA should develop a mechanism for working with relevant agencies to identify their roles and responsibilities for coordinating risk assessments of contaminants in food, including arsenic in rice. (Recommendation 3)

The Commissioner of FDA should work with USDA to develop a mechanism to coordinate the development of methods to detect contaminants in food, including arsenic in rice. (Recommendation 4)

The Secretary of Agriculture should work with FDA to develop a mechanism to coordinate the development of methods to detect contaminants in food, including arsenic in rice. (Recommendation 5)

Agency Comments and Our Evaluation We provided a draft of this report to EPA, HHS, OMB, and USDA for their review and comment. HHS and USDA provided written comments, which are summarized below and reproduced in appendix III and appendix IV, respectively. In addition, EPA, HHS, and USDA provided technical comments, which we incorporated as appropriate. OMB did not comment. In its comments, HHS generally agreed with our findings and three of the four recommendations directed to it and partially agreed with the other recommendation. Specifically, HHS partially agreed with our first recommendation for FDA to develop a timeline for updating the risk assessment on arsenic in rice, noting that the evolving nature of science precludes it from committing to a specific timeline. We recognize that new scientific studies continue to add to the understanding of the risk of arsenic. However, we continue to believe that FDA should demonstrate its commitment to increasing transparency and accountability by developing a timeline to update the risk assessment, potentially in conjunction with finalizing the draft guidance on arsenic in infant rice cereal. Such an update may state that recent scientific studies or public comments have not resulted in a change to FDA's assessment of the risk. HHS generally agreed with our findings about the actions it has taken to manage the risk from arsenic in rice and the extent of its coordination with USDA and other agencies. HHS noted that it anticipates developing a final guidance establishing an action level of 100 ppb of inorganic arsenic in infant rice cereal by the end of 2018, which will be consistent with our recommendation. HHS also noted that it will consider ways to enhance mechanisms—such as the Interagency Risk Assessment Consortium—to collaborate and coordinate in the development of risk assessments with agencies that have regulatory responsibility or specific expertise. Further, HHS stated that FDA agrees that a mechanism for better coordinating with USDA on the development of methods to detect contaminants in foods would be worthwhile. FDA will consider whether and how existing mechanisms, such as the Interagency Residue Control Group and the annual meeting with USDA's ARS and the Food Safety and Inspection Service on food safety research, could be used to improve collaboration with USDA on method development. HHS's plans to enhance or use existing interagency mechanisms may be responsive to our recommendations if they focus on enhancing coordination with other agencies that have expertise or similar goals in the areas of risk assessments and methods to detect foodborne contaminants.
In its comments, USDA generally agreed with our findings and the one recommendation we directed to it. Specifically, USDA generally agreed with our findings about the extent to which FDA coordinated with USDA on the development of methods to detect contaminants in food, including arsenic in rice. It also generally agreed with our recommendation that USDA work with FDA to develop a mechanism to do so and stated that the USDA Office of the Chief Scientist will facilitate this effort. Further, USDA noted that the Interagency Risk Assessment Consortium may be an appropriate mechanism for addressing GAO's recommendations. USDA's proposal has the potential to be responsive to our recommendation if it focuses on enhancing coordination with FDA regarding the development of detection methods for foodborne contaminants. As agreed with your office, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Agriculture and Health and Human Services; the Administrator of EPA; the Director of OMB; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or morriss@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report examines (1) what the National Research Council (NRC) and recent key scientific reviews have reported about the effects of ingestion of arsenic on human health, (2) the extent to which the Food and Drug Administration (FDA) and U.S. Department of Agriculture (USDA) have managed the risk to human health from arsenic in rice, and (3) the extent to which FDA has coordinated with USDA and other federal agencies on actions to manage the risk. In this report, we use the term arsenic to refer to either total arsenic or inorganic arsenic. We use the term rice to encompass rice grain and products made with rice, such as infant rice cereal. To determine what NRC and recent key scientific reviews have reported about the effects of ingestion of arsenic on human health, we analyzed NRC's 2013 report on inorganic arsenic and 14 reviews of the scientific literature published from January 2015 through early June 2017 on the human health effects of ingestion of arsenic. We conducted a literature search of several research databases, such as PubMed and Toxline, to identify reviews that (1) were focused on the effects of ingestion of arsenic on human health; (2) were peer-reviewed; (3) relied on human, rather than animal, studies; (4) provided conclusions or summary statements related to more than one study, rather than just listing individual study findings; (5) included an abstract; and (6) were written in English. We assessed the scientific and statistical credibility, reliability, and methodological soundness of the reviews. We also contacted some of the authors for additional methodological information. Methodological information included, for example, the criteria for selecting the studies used in a review and any meta-analysis or meta-regression approach.
It also included limitations that the authors cited for the studies they reviewed or for any analyses they conducted. We excluded articles for which we could not clearly determine the methodology. We also reviewed the authors' statements regarding conflicts of interest and determined that none of the articles should be excluded for this reason. We did not examine the references cited by these reviews as part of our analysis. We also did not examine the studies cited by the NRC. The studies we reviewed are listed in appendix II. To determine the extent to which FDA and USDA have managed the risk to human health from arsenic in rice, we examined relevant provisions in the Federal Food, Drug, and Cosmetic Act, as amended; the Federal Agriculture Improvement and Reform Act of 1996; and other relevant laws, regulations, and policies. We also used the essential elements for managing risk as identified in our prior work on enterprise risk management. These include: (1) align the risk management process with goals and objectives, (2) identify risks, (3) assess risks, (4) respond to the risks, (5) monitor the risks, and (6) communicate and report on the risks. We identified information on agency actions for managing the risk from arsenic in rice by collecting documentation and interviewing officials from FDA and USDA, and we reviewed the information in light of the requirements, policies, and elements. We assessed FDA's and USDA's reported actions to determine the extent to which each agency's actions aligned with these elements. In assessing FDA's and USDA's actions against these essential elements, we used the terms "consistent" and "partially consistent" to reflect the extent to which each agency's actions aligned with an essential element. A determination of "consistent" meant that the agency provided evidence that it had taken major actions in alignment with that essential element. A determination of "partially consistent" meant that the agency provided evidence that it had taken some actions in alignment with that essential element. We also interviewed 17 stakeholders to obtain their views on the extent to which FDA's and USDA's actions managed the risk, including university researchers (academics) specializing in relevant fields such as epidemiology and soil chemistry, representatives of a consumer organization, and representatives of the rice industry, including rice mills and farms. We identified stakeholders based on suggestions from agency officials and other stakeholders; through our site visit to rice research and production areas and rice mills in Arkansas; and based on the stakeholders' unique perspective or qualifications, such as membership in the NRC Committee on Inorganic Arsenic. The views we obtained from these interviews are not generalizable to all university researchers or consumer or rice industry organizations, but they provide illustrative examples of the views of such stakeholders. Table 1 lists information about the 17 stakeholders we interviewed. To determine the extent to which FDA has coordinated with USDA and other federal agencies on actions to manage the risk to human health from arsenic in rice, we identified relevant actions and examined whether FDA developed interagency collaborative mechanisms, which we have previously reported could help to facilitate coordination between agencies.
To identify actions for which the agencies shared similar goals in their strategic plans or relevant expertise and for which FDA would be expected to coordinate with USDA and other federal agencies, we reviewed relevant provisions in the Federal Food, Drug, and Cosmetic Act, as amended; the Federal Agriculture Improvement and Reform Act of 1996; other relevant laws, regulations, and policies; the current science and research strategic plan for FDA's Center for Food Safety and Applied Nutrition and current strategic plans for USDA's Agricultural Research Service (ARS) and Federal Grain Inspection Service (FGIS); and information about the agencies' missions from their websites. These actions were the development of FDA's risk assessment and draft guidance on arsenic in rice and FDA's and USDA's efforts to develop detection methods for arsenic in rice. We interviewed FDA officials and reviewed documentation they provided to identify the other federal agencies and offices with which FDA coordinated the development and review of its risk assessment and draft guidance on arsenic in infant rice cereal and the development of methods for detecting arsenic in rice. These agencies and offices included ARS, the Centers for Disease Control and Prevention, the Environmental Protection Agency (EPA), FGIS, the National Institutes of Health's National Institute of Environmental Health Sciences, the Department of Health and Human Services' Assistant Secretary for Legislation and Office of the Assistant Secretary for Planning and Evaluation, the Office of Management and Budget's (OMB) Office of Information and Regulatory Affairs, the Small Business Administration's Office of Advocacy, and the U.S. Trade Representative. To determine the extent to which FDA coordinated its risk assessment and draft guidance on arsenic in rice with USDA and other federal agencies, we obtained and reviewed FDA's framework for conducting risk assessments; reviewed agencies' comments on these documents; interviewed FDA officials regarding FDA's efforts to coordinate with other agencies; and interviewed officials from the Centers for Disease Control and Prevention; EPA; the National Institutes of Health; OMB; the Small Business Administration's Office of Advocacy; and USDA regarding the nature of their comments, their experiences coordinating with FDA, and the extent to which FDA addressed their comments. To examine the extent to which FDA and USDA coordinated the development of arsenic detection methods, we obtained and reviewed documents, including those describing the detection methods that FDA, ARS, and FGIS have developed or have under development, and we interviewed officials from these agencies regarding their efforts to develop these methods and coordinate their development efforts. We also interviewed officials from these agencies to gather their views on the effectiveness of these coordination efforts. We then examined whether FDA had interagency collaborative mechanisms for the development of its risk assessment and draft guidance, and its efforts with USDA to develop arsenic detection methods. We also examined whether participating agencies clarified their roles and responsibilities. Our prior work identified this as a key issue for agencies to consider when implementing coordination mechanisms. We selected this practice because it was relevant to the challenges the agencies faced. We conducted this performance audit from December 2016 to March 2018 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Recent Reviews of the Health Effects of Ingestion of Arsenic

The following list identifies recent key reviews of the health effects of ingestion of arsenic that we analyzed.

Amadi, C.N., Z.N. Igweze, and O.E. Orisakwe. "Heavy Metals in Miscarriages and Stillbirths in Developing Nations." Middle East Fertility Society Journal, vol. 22, no. 2 (2017): 91-100.

Bardach, A.E., A. Ciapponi, N. Soto, M.R. Chaparro, M. Calderon, A. Briatore, N. Cadoppi, R. Tassara, and M.I. Litter. "Epidemiology of Chronic Disease Related to Arsenic in Argentina: A Systematic Review." The Science of the Total Environment, vol. 538 (2015): 802-16.

Karagas, M.R., A. Gossai, B. Pierce, and H. Ahsan. "Drinking Water Arsenic Contamination, Skin Lesions, and Malignancies: A Systematic Review of the Global Evidence." Current Environmental Health Reports, vol. 2, no. 1 (2015): 52-68.

Khanjani, N., A. Jafarnejad, and L. Tavakkoli. "Arsenic and Breast Cancer: A Systematic Review of Epidemiologic Studies." Reviews on Environmental Health (2017).

Lamm, S.H., H. Ferdosi, E.K. Dissen, J. Li, and J. Ahn. "A Systematic Review and Meta-Regression Analysis of Lung Cancer Risk and Inorganic Arsenic in Drinking Water." International Journal of Environmental Research and Public Health, vol. 12, no. 12 (2015): 15498-15515.

Mayer, J.E. and R.H. Goldman. "Arsenic and Skin Cancer in the USA: The Current Evidence regarding Arsenic-Contaminated Drinking Water." International Journal of Dermatology, vol. 55, no. 11 (2016): e585-e591.

Milton, A.H., S. Hussain, S. Akter, M. Rahman, T.A. Mouly, and K. Mitchell. "A Review of the Effects of Chronic Arsenic Exposure on Adverse Pregnancy Outcomes." International Journal of Environmental Research and Public Health, vol. 14, no. 6 (2017).

Phung, D., D. Connell, S. Rutherford, and C. Chu. "Cardiovascular Risk from Water Arsenic Exposure in Vietnam: Application of Systematic Review and Meta-Regression Analysis in Chemical Health Risk Assessment." Chemosphere, vol. 177 (2017): 167-175.

Quansah, R., F.A. Armah, D.K. Essumang, I. Luginaah, E. Clarke, K. Marfoh, S.J. Cobbina, et al. "Association of Arsenic with Adverse Pregnancy Outcomes/Infant Mortality: A Systematic Review and Meta-Analysis." Environmental Health Perspectives, vol. 123, no. 5 (2015): 412-21.

Robles-Osorio, M.L., E. Sabath-Silva, and E. Sabath. "Arsenic-Mediated Nephrotoxicity." Renal Failure, vol. 37, no. 4 (2015): 542-7.

Sidhu, M.S., K.P. Desai, H.N. Lynch, L.R. Rhomberg, B.D. Beck, and F.J. Venditti. "Mechanisms of Action for Arsenic in Cardiovascular Toxicity and Implications for Risk Assessment." Toxicology, vol. 331 (2015): 78-99.

Sung, T., J. Huang, and H. Guo. "Association between Arsenic Exposure and Diabetes: A Meta-Analysis." BioMed Research International (2015).

Tsuji, J.S., M.R. Garry, V. Perez, and E.T. Chang. "Low-Level Arsenic Exposure and Developmental Neurotoxicity in Children: A Systematic Review and Risk Assessment." Toxicology, vol. 337 (2015): 91-107.

Von Stackelberg, K., E. Guzy, T. Chu, and B.C. Henn. "Exposure to Mixtures of Metals and Neurodevelopmental Outcomes: A Review." Risk Analysis, vol. 35, no. 6 (2015): 971-1016.
Appendix III: Comments from the Department of Health and Human Services

Appendix IV: Comments from the U.S. Department of Agriculture

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Anne K. Johnson (Assistant Director), Ruth Solomon (Analyst in Charge), Kevin Bray, Stephen Cleary, Ellen Fried, Juan Garay, Rebecca Parkhurst, Beverly Peterson, Anne Rhodes-Kline, Sara Sullivan, Kiki Theodoropoulos, Sarah Veale, and Khristi Wilkins made key contributions to this report.
Why GAO Did This Study Arsenic, an element in the earth's crust, can be harmful to human health and may be present in water and certain foods, such as rice. Rice may be more susceptible to arsenic contamination than other crops due to the flooded conditions in which it is typically grown. FDA and USDA work to address food safety risks. FDA's responsibilities for rice include regulatory and research programs; USDA's include research programs. GAO was asked to review issues related to arsenic and rice. GAO examined (1) what NRC and recent key scientific reviews have reported about the effects of ingestion of arsenic on human health, (2) the extent to which FDA and USDA have managed the risk to human health from arsenic in rice, and (3) the extent to which FDA has coordinated with USDA and other federal agencies on actions to manage the risk. GAO analyzed a 2013 NRC report on inorganic arsenic, 14 reviews of scientific studies on the human health effects of ingesting arsenic published from January 2015 to June 2017, and agency documents; interviewed agency officials; and compared good practices with actions FDA and USDA took to manage risk and that FDA took to coordinate. What GAO Found The National Research Council (NRC) of the National Academy of Sciences, in 2013, and more recent key scientific reviews reported evidence of associations between long-term ingestion of arsenic and adverse human health effects, such as cardiovascular disease. Many of the studies NRC reviewed as part of its survey of the scientific literature examined the ingestion of arsenic in drinking water, but others looked at arsenic from all sources, including dietary sources such as rice. NRC stated that evidence suggests that food, particularly rice, may be a significant source of inorganic arsenic, the more toxic of the two forms of arsenic; however, consumption of rice and levels of arsenic in rice vary widely, making it difficult to estimate arsenic intake from rice. NRC identified stronger evidence for some health effects at higher levels of arsenic—defined by NRC as 100 parts per billion or higher in drinking water—than at lower levels, which are more common in the United States, and noted that research on the health effects of ingesting lower levels of arsenic is ongoing. The Food and Drug Administration (FDA) and the U.S. Department of Agriculture (USDA) have taken actions to manage the risk of arsenic in rice to human health, including assessing the type and prevalence of health effects that may result from long-term ingestion of arsenic in rice. FDA also has taken action to publicly communicate and report on the risk. In 2016, FDA issued a risk assessment about the human health effects from long-term ingestion of arsenic in rice and draft guidance recommending industry not exceed a level of 100 parts per billion of inorganic arsenic in infant rice cereal. FDA noted it issued this guidance because infants face a higher risk owing to their less-varied diets. However, FDA has not updated the risk assessment, which was informed by a review of scientific studies published before February 2015, or finalized the draft guidance. In prior work, GAO has found that sharing risk information and incorporating stakeholder feedback can help organizations identify and better manage risks, as well as increase transparency and accountability to Congress and taxpayers. 
FDA officials stated that they may update the risk assessment based on newly available information and consider public comments before finalizing the draft guidance. However, FDA officials could not provide a specific timeline for either. By developing such a timeline, FDA could help clarify when it will take action and improve the transparency of its decisions. FDA coordinated with USDA and other federal agencies on actions to manage the risk of arsenic in rice to varying extents. For example, FDA and USDA coordinated on developing arsenic detection methods for rice to a limited extent, although both agencies have crosscutting strategic goals for developing detection methods for foodborne contaminants, including arsenic. GAO has noted in prior work that developing interagency mechanisms to coordinate crosscutting issues may reduce potentially duplicative efforts. FDA and USDA officials stated that they coordinated on an informal basis but have no mechanism for coordinating more formally. By developing a coordination mechanism, FDA and USDA could enhance their ability to use their resources efficiently and avoid potentially duplicative efforts. What GAO Recommends GAO is making five recommendations, including that FDA develop a timeline for updating its risk assessment and finalizing its draft guidance and that FDA and USDA develop a coordination mechanism for developing methods to detect foodborne contaminants, including arsenic. FDA and USDA generally agreed with the recommendations.
Background OPA amended the Clean Water Act and established provisions expanding and consolidating the federal government's authority to prevent and respond to oil spills. This includes providing the federal government with the authority to perform cleanup immediately after a spill using federal resources, monitor the response efforts of the spiller, or direct the spiller's cleanup activities. OPA also established a "polluter pays" system, placing the primary burden of liability and costs of oil spills on the responsible party for the vessel or facility from which oil is discharged. Under this system, the responsible party assumes, up to a specified limit, the burden of paying for spill costs, including both removal costs (for cleaning up the spill) and damage claims (for restoring the environment and paying compensation to parties economically harmed by the spill). OPA authorized the use of the Oil Spill Liability Trust Fund to fund up to $1 billion per spill incident for pollution removal costs and damages resulting from oil spills and mitigation of a substantial threat of an oil spill in navigable U.S. waters when a responsible party cannot or does not pay for the cleanup. After the Deepwater Horizon oil spill, the Resources and Ecosystems Sustainability, Tourist Opportunities, and Revived Economies of the Gulf Coast States Act of 2012 (RESTORE Act) established a new trust fund for programs, projects, and activities that restore and protect the environment and economy of the Gulf Coast region as well as the RESTORE Council, which is to summarize its activities for each calendar year in an annual report to Congress. In addition, NOAA finalized regulations in 1996 for assessing natural resource damages resulting from a discharge or substantial threat of a discharge of oil. The NRDA regulations recognize that OPA provides for designating federal, state, and tribal officials as natural resource trustees and authorizes them to make claims against the parties responsible for the injuries. (The NRDA regulations define injury as an observable or measurable adverse change in a natural resource or impairment of a natural resource service. 15 C.F.R. 990.11.) Under NRDA regulations, a trustee council's work usually occurs in three steps: (1) a pre-assessment phase, (2) the restoration planning phase, and (3) the restoration implementation phase. During the pre-assessment phase, the trustees are to determine whether they have jurisdiction to pursue restoration. In the restoration planning phase, the trustees are to evaluate information on potential injuries and use that information to determine the need for, type of, and scale of restoration. Finally, the restoration implementation phase describes the process for implementing restoration. For both the Exxon Valdez and Deepwater Horizon spills, federal and state trustees entered into legal settlements with responsible parties to resolve certain claims. The Exxon Valdez Trustee Council is in the restoration implementation phase, while the Deepwater Horizon Trustee Council is in both the restoration planning and implementation phases. The National Oil and Hazardous Substances Pollution Contingency Plan, commonly known as the National Contingency Plan, contains the federal government's framework and operative requirements for preparing and responding to discharges of oil and releases of hazardous substances, pollutants, and contaminants. It establishes that federal oil spill response authority is determined by the location of the spill: the Coast Guard has response authority in the U.S.
coastal zone, and EPA covers the inland zone. In addition, NOAA is to provide scientific analysis and consultation during oil spill response activities in the coastal zones. Exxon Valdez Oil Spill The Exxon Valdez oil spill in Alaska’s Prince William Sound in 1989 contaminated portions of national wildlife refuges, national and state parks, a national forest, and a state game sanctuary—killing or injuring thousands of sea birds, marine mammals, and fish and disrupting the ecosystem in its path. In October 1991, the U.S. District Court for the District of Alaska approved a civil settlement and criminal plea agreement among Exxon, the federal government, and the state of Alaska for recovery of natural resource damages resulting from the oil spill. Exxon agreed to pay $900 million in civil claims in 11 annual payments and $125 million to resolve various criminal charges. In August 1991, the federal government and the state of Alaska signed a memorandum of agreement and consent decree to act as co-trustees in collecting and using natural resource damage payments from the spill. The 1991 memorandum states that all decisions related to injury assessment, restoration activities, or other use of the natural resource damage payments are to be made by unanimous agreement of the trustees. According to the memorandum, the trustees are to use the natural resource damage payments to restore, replace, rehabilitate, enhance, or acquire the equivalent of the natural resources injured as a result of the oil spill and the reduced or lost services provided by such resources. The memorandum also recognized that EPA was designated to coordinate restoration activities on behalf of the federal government. In 1992, the trustees established the Exxon Valdez Trustee Council to ensure coordination and cooperation in restoring the natural resources injured, lost, or destroyed by the spill. In 1994, the Exxon Valdez Trustee Council prepared a restoration plan for use of the funds, which consisted of five categories: (1) general restoration; (2) habitat protection and acquisition; (3) monitoring and research; (4) restoration reserve; and (5) public information, science management, and administration. The restoration plan noted that in addition to restoring natural resources, funds may be used to restore reduced or lost services (including human uses) from injured natural resources, which includes subsistence, commercial fishing, recreation, and tourism services. The Exxon Valdez Trustee Council is advised by members of the public and a panel of scientists, and its Executive Director manages the day-to-day administrative functions. The Exxon Valdez Trustee Council has published documents that are on the council’s public website, such as the Injured Resources and Services list (current as of 2014), lingering oil updates (current as of 2016), annual reports (current as of 2018), and annual project work plans (current as of 2018). Deepwater Horizon Oil Spill The Deepwater Horizon oil spill in the Gulf of Mexico in 2010 resulted in the tragic loss of 11 lives and a devastating environmental impact and affected the livelihoods of thousands of Gulf Coast citizens and businesses. In April 2016, BP, the federal government, and the five Gulf Coast states agreed to a settlement resolving multiple claims for federal civil penalties and natural resource damages related to the spill totaling up to $14.9 billion. 
Under the terms of the consent decree for the settlement, BP must pay up to $8.8 billion in natural resource damages under OPA, which includes $1 billion BP previously committed to pay for early restoration projects, and up to $700 million to address injuries that were unknown to the trustees as of July 2, 2015, including for any associated natural resource damage assessment and planning activities, or to adapt, enhance, supplement, or replace restoration projects or approaches that the trustees initially selected. BP is to make these payments into the Deepwater Horizon Oil Spill Natural Resource Damages Fund managed by the Department of the Interior (Interior), to be used jointly by the federal and state trustees of the Deepwater Horizon Trustee Council for restoration of injured or lost natural resources. Two additional, separate restoration funds are to receive money from the BP civil and criminal penalties: (1) the Gulf Coast Restoration Trust Fund established under the RESTORE Act is to receive 80 percent of the $5.5 billion Clean Water Act civil penalty paid by BP to support environmental restoration and economic recovery projects in the Gulf Coast region, and (2) the Gulf Environmental Benefit Fund managed by the nonprofit National Fish and Wildlife Foundation is to receive $2.394 billion in criminal penalties. For more information on the amount and distribution of the BP civil and criminal payments, see figure 1. Prior to reaching the settlement in 2016, BP signed an agreement in April 2011 to provide $1 billion toward early restoration projects in the Gulf of Mexico to address injuries to natural resources caused by the spill. Early restoration projects may be developed prior to the completion of the injury assessment, which can take months or years to complete. Payments by BP for early restoration projects are counted towards its liability for the $8.8 billion in natural resource damages resulting from the spill. The designated trustees are to administer these payments for natural resources, according to OPA. The designated trustees include federal officials from Interior, NOAA, the U.S. Department of Agriculture, and EPA, as well as state officials from the five Gulf States that were affected by the spill—Alabama, Florida, Louisiana, Mississippi, and Texas. In February 2016, the Deepwater Horizon Trustee Council finalized the Programmatic Damage Assessment and Restoration Plan (programmatic restoration plan) that provided the council's injury assessment and proposed a framework for identifying and developing project-specific restoration plans. The five goals of the programmatic restoration plan are to (1) restore and conserve habitat; (2) restore water quality; (3) replenish and protect living coastal and marine resources; (4) provide and enhance recreational opportunities; and (5) provide for monitoring, adaptive management, and administrative oversight to support restoration implementation. According to the 2016 programmatic restoration plan, the Deepwater Horizon Trustee Council is to coordinate with other Deepwater Horizon restoration programs, such as those funded by the RESTORE Act, the National Fish and Wildlife Foundation, and other entities. The 2016 programmatic restoration plan established Trustee Implementation Groups for each of the seven designated restoration areas—one for each of the five Gulf States, the Region-Wide implementation group, and the Open Ocean implementation group.
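As a rough cross-check of the distribution described above and shown in figure 1, the short Python sketch below recomputes the RESTORE Act share of the Clean Water Act civil penalty from the amounts cited in this section. The sketch and its variable names are ours, for illustration only, and figure 1 remains the authoritative breakdown.

```python
# Illustrative arithmetic for the fund amounts described above; see figure 1
# for the authoritative breakdown. Variable names are our own shorthand.
cwa_civil_penalty = 5.5e9   # BP's Clean Water Act civil penalty
restore_share = 0.80        # portion directed to the Gulf Coast Restoration Trust Fund

restore_fund = restore_share * cwa_civil_penalty
print(f"Gulf Coast Restoration Trust Fund: ${restore_fund / 1e9:.1f} billion")  # $4.4 billion
```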
Each trustee implementation group is to plan, decide on, and implement restoration activities, including monitoring and adaptive management, for the funding that the consent decree allocated to its restoration area. Federal trustees serve in all the trustee implementation groups, and state trustees serve on the Region-Wide implementation group and the trustee implementation groups for their states; decisions are to be made by consensus. The Deepwater Horizon Trustee Council is to coordinate the work of the trustee implementation groups by establishing standard procedures and practices to ensure consistency in developing and implementing restoration activities. Interagency Coordinating Committee on Oil Pollution Research OPA created the interagency committee to provide a comprehensive, coordinated federal oil pollution research program and promote cooperation with industry, universities, research institutions, state governments, and other nations through information sharing, coordinated planning, and joint funding of projects. It also designated member agencies and authorized the President to designate other federal agencies as members of the interagency committee. As of November 2018, the interagency committee consisted of 15 federal members representing independent agencies, departments, and department components. OPA directs that a representative from the Coast Guard serve as the chair, and the interagency committee charter designates that a representative from NOAA, EPA, or the Bureau of Safety and Environmental Enforcement (BSEE) serve as the vice-chair and that the committee's Executive Director provide staff support. The interagency committee's charter notes that it shall meet at least semi-annually or at the decision of the chair. According to OPA, the chair's duties include reporting biennially to Congress on the interagency committee's activities related to oil pollution research, development, and demonstration programs. OPA also required the interagency committee to prepare and submit a research and technology plan, which has been updated periodically. In September 2015, the interagency committee released the research and technology plan for fiscal years 2015 through 2021. This research and technology plan updates the interagency committee's 1992 plan, revised in 1997, and provides a new baseline of the nation's oil pollution research needs. The plan is primarily directed at federal agencies with responsibilities for conducting or funding such research, but it can also serve as a research planning guide for nonfederal stakeholders such as industry, academia, state governments, research institutions, and other nations, according to interagency committee documents. The 2015 research and technology plan established a common language and planning framework to enable researchers and interested parties to identify and track research in four classes or categories that represent general groupings of oil spill research: Prevention: Research that supports developing practices and technologies designed to predict, reduce, or eliminate the likelihood of discharges or minimize the volume of oil discharges into the environment. Preparedness: Research that supports the activities, programs, and systems developed prior to an oil spill to improve the planning, decision-making, and management processes needed for responding to and recovering from oil spills.
Response: Research that supports techniques and technologies that address the immediate and short-term effects of an oil spill and encompasses all activities involved in containing, cleaning up, treating, and disposing of oil to (1) maintain the safety of human life, (2) stabilize a situation to preclude further damage, and (3) minimize adverse environmental and socioeconomic effects. Injury assessment and restoration: Research that involves collecting and analyzing information to (1) evaluate the nature and extent of environmental, human health, and socioeconomic injuries resulting from an incident; (2) determine the actions needed to restore natural resources and their services to pre-spill conditions; and (3) make the environment and public whole after interim losses. Trustee Councils Use Restoration Trust Funds for Approved Activities, Which Are Largely Completed for Exxon Valdez and in the Early Stages for Deepwater Horizon In response to the Exxon Valdez and Deepwater Horizon oil spills, federal and state trustees formed trustee councils and have used the restoration trust funds to authorize money for activities in accordance with approved restoration plans. The Exxon Valdez Trustee Council has largely completed restoration work and authorized approximately $985 million, roughly 86 percent of the restoration trust fund, primarily for habitat protection and for general restoration, research, and monitoring activities. As a result of these restoration activities and natural recovery, the majority of the injured natural resources and human services in the spill area has recovered or is recovering, according to the council's assessment. However, the Exxon Valdez Trustee Council continues to monitor the lack of recovery of Pacific herring and the presence of lingering oil in the spill area. The Deepwater Horizon Trustee Council is completing early restoration work and initial post-settlement restoration planning. It has authorized approximately $1.1 billion for restoration activities, roughly 13 percent of the restoration trust fund, and spent $368 million, roughly 5 percent of the restoration trust fund, primarily on habitat protection and enhancing recreation, such as building boat ramps and other recreational facilities. The Exxon Valdez Trustee Council Has Used 86 Percent of the Restoration Trust Fund, and Most Injured Natural Resources Have Recovered Exxon's payments to the restoration trust fund totaled approximately $900 million, and the interest earnings, as of January 2016, totaled $247 million. From 1992 to 2018, the Exxon Valdez Trustee Council authorized the expenditure of approximately $985 million, or 86 percent of the roughly $1.15 billion in principal funds plus interest from the restoration trust fund, primarily on habitat protection ($445 million) and general restoration, research, and monitoring of injured natural resources ($234 million). The remaining unspent restoration trust fund balance as of January 2018 was $210 million, split evenly between the habitat investment subaccount for future habitat protection activities and the research investment subaccount for future general restoration activities (see fig. 2). According to the Exxon Valdez Trustee Council, as of January 2018, it had spent approximately $445 million to protect and enhance habitat, including acquiring 628,000 acres of lands and interests in lands.
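The fund-level percentages in this section can be reproduced with simple arithmetic, as the minimal Python sketch below shows. It uses only the dollar figures cited above; the depletion projection at the end rests on purely hypothetical assumptions of ours (a 4 percent annual return and a $20 million annual draw), not on the trustee council's actual spending scenario.

```python
# Exxon Valdez restoration trust fund arithmetic, using the figures cited above.
principal = 900e6    # Exxon's natural resource damage payments
interest = 247e6     # interest earnings as of January 2016
authorized = 985e6   # expenditures authorized, 1992-2018

fund_total = principal + interest
print(f"Fund total: ${fund_total / 1e9:.2f} billion")      # roughly $1.15 billion
print(f"Share authorized: {authorized / fund_total:.0%}")  # about 86 percent

# Hypothetical projection of the remaining $210 million balance. The 4% annual
# return and $20 million annual draw are illustrative assumptions only; actual
# depletion depends on market performance and the council's work plans.
balance, year = 210e6, 2018
while balance > 0:
    balance = balance * 1.04 - 20e6
    year += 1
print(f"Balance exhausted around {year} under these assumptions")  # early 2030s
```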
As outlined in the trustee council's 1994 restoration plan, the habitat program is intended to minimize further injury to resources and services and allow recovery to continue with the least interference by authorizing funds for federal and state resource agencies to acquire title or conservation easements on ecologically valuable lands. For example, in 2017 the Exxon Valdez Trustee Council authorized about $5.5 million to acquire a conservation easement on 1,060 acres at the northeastern end of Kodiak Island in the Gulf of Alaska, known as Termination Point. The trustee council authorized funds for this acquisition to (1) protect the property from timber logging and development and (2) provide habitat and feeding areas for marine birds injured by the spill, such as marbled murrelets and pigeon guillemots. According to the Exxon Valdez Trustee Council, habitat acquisitions prevent additional injury to species during recovery, promote restoration of spill-affected resources and services, and are the primary tool for acquiring equivalent resources harmed by the spill. The habitat program also supports habitat enhancement projects, which, according to the Exxon Valdez Trustee Council, aim to repair human-caused harm to natural resources, their habitats, and the services they provide to humans. For example, the trustee council authorized $2.2 million to the Alaska Department of Natural Resources to stabilize stream bank vegetation and install elevated steel walkways to provide less-damaging access to the Kenai River, a popular fishing destination. The Exxon Valdez Trustee Council has spent roughly $234 million from October 1992 to January 2018 on hundreds of general restoration, monitoring, and research activities. As outlined in the 1994 restoration plan, general restoration includes activities that manipulate the environment, manage human use, and reduce marine pollution. Research and monitoring activities also provide information on the status and condition of resources and services, including (1) whether they are recovering, (2) whether restoration activities are successful, and (3) factors that may be constraining recovery, according to the 1994 plan. For example, since 2012, the trustee council has authorized money for a program called Gulf Watch Alaska that provides long-term monitoring data on the status of environmental conditions—such as water temperature and salinity—and the marine and nearshore ecosystems. Gulf Watch Alaska provides data to federal, state, and tribal agencies, as well as the public, that inform resource conservation programs and aid in the management of species injured by the spill. According to the trustee council, its expenditures for research projects have resulted in hundreds of peer-reviewed scientific studies and increased knowledge about the marine environment that benefits the injured resources. The Exxon Valdez Trustee Council has spent roughly $89 million from October 1992 to January 2018 on administration, science management, and public information. According to the 1994 restoration plan, expenditures under this category cover the cost to (1) prepare work plans, (2) negotiate habitat purchases, (3) provide independent scientific review, (4) involve the public, and (5) operate the restoration program.
Although the Exxon Valdez Trustee Council set a target of 5 percent for administrative costs in the 1994 restoration plan, administrative costs averaged around 6 percent from 1994 through 2001, according to a written statement that the trustee council provided. The trustees and council staff we interviewed told us that in hindsight the 5 percent target was unrealistic, as it did not reflect the actual administrative costs at that time, which were instead included in project budgets or absorbed by federal and state agencies. Therefore, in 2012, the Exxon Valdez Trustee Council changed the way it accounted for administrative costs and has included these costs in the administrative budget. According to the trustee council, under the new accounting policy, administrative costs were recalculated and estimated at around 19 percent for the period 2002 through 2018. The remaining $210 million Exxon Valdez restoration trust fund balance is held by the Alaska Department of Revenue in two interest-bearing subaccounts. As of January 2018, the research subaccount and the habitat subaccount each held approximately $105 million. In the 1994 restoration plan, the Exxon Valdez Trustee Council established the need for a restoration reserve to ensure that restoration activities could continue to be supported after the final annual payments from the Exxon Corporation were received in September 2001. According to the 1994 restoration plan, the trustee council planned to set aside $12 million per year for a period of 9 years into the restoration reserve, totaling $108 million plus interest. In 1999, the Exxon Valdez Trustee Council resolved to transfer the estimated remaining balance of $170 million to the restoration reserve and split the money into two subaccounts. Since 2002, the trustee council has made allocations for its annual work plans and ongoing habitat acquisition from these accounts. In 2010, the trustee council established a 20-year strategic plan to spend the remaining trust funds using four 5-year incremental work plans. In November 2010, the trustee council issued a call for project proposals for the first 5-year work plan, for fiscal years 2012 through 2016. Although the Exxon Valdez Trustee Council solicits proposals on a 5-year cycle, it has authorized money for each project annually. In a written statement, the trustee council also stated that it continues to pursue and acquire from willing sellers remaining parcels of land that prior studies have identified as high-priority habitat. According to the Exxon Valdez Trustee Council's long-term spending scenario, both of the subaccounts are expected to be depleted by 2032, or earlier depending on market performance. The Status of Restoration Efforts According to the Exxon Valdez Trustee Council's 2014 restoration plan update—its most recent assessment of injured resources and services—all but 5 of the 32 natural resources and human services identified as injured by the spill have recovered, are recovering, or are very likely recovered. In the 1994 restoration plan, the trustee council established a list of resources and services that suffered injuries from the spill and developed specific, measurable recovery objectives for each injured resource and service. The Exxon Valdez Trustee Council has periodically assessed the status of those resources, most recently in 2014.
As of the 2014 assessment, the following 4 resources were listed as not recovering: (1) marbled murrelets, (2) Pacific herring, (3) pigeon guillemots, and (4) one group of killer whales. In addition, the recovery of Kittlitz's murrelets was listed as unknown. According to the Exxon Valdez Trustee Council, the status of these resources in 2018 is largely similar to their status in 2014, except that one population of pigeon guillemots has likely increased as a result of a predator-control project that the council supported. However, the overall status of this species has not been determined. In a written statement, the trustees stated that the trustee council plans to initiate its next assessment of injured resources in late 2018. The Exxon Valdez Trustee Council remains particularly concerned about the health of the Pacific herring population and the presence of lingering oil. According to the trustee council's 2014 restoration plan update, Pacific herring are considered an ecologically and commercially important species that, in addition to being fished for human consumption, is a source of food for various marine species. The assessment noted that a combination of factors, including disease, predation, and poor recruitment of additional fish to the stock through growth or migration, appears to have contributed to the continued suppression of herring populations. As a result, the herring fishery has been closed for 23 of the 29 years since the oil spill, and the herring population has not met the trustee council's recovery objective. To address concerns regarding the Pacific herring, the trustee council plans to authorize additional money for ongoing Pacific herring research and monitoring through the anticipated end date for the fund in fiscal year 2032, for an estimated total cost of roughly $23 million over 20 years. The Exxon Valdez Trustee Council also has concerns regarding the presence of lingering oil in the spill area. According to a March 2016 report for the trustee council, approximately 27,000 gallons of lightly weathered oil from the Exxon Valdez spill remains, located along almost 22 miles of shoreline at a small number of subsurface sites, where oxygen and nutrients are at levels too low to support microbial degradation. In May 2018, we accompanied researchers working with the trustee council to the spill area and observed the excavation of three pits that revealed lingering oil roughly 6 inches below the surface of the beach, as captured in figure 3. According to the researchers, oil previously recovered from this location was identified as belonging to the Exxon Valdez oil spill. Evidence of exposure to lingering oil was observed as recently as 2009 in a variety of marine species, including sea otters and harlequin ducks, according to the 2016 lingering oil report. The report also noted that the most recent studies show that the sea otter and harlequin duck populations have recovered and that lingering oil is no longer causing ecological damage. Further, studies demonstrated that minimally intrusive remediation of the oil would only be effective at a small number of sites, according to the 2016 report. Therefore, although the trustee council has decided not to pursue remediation of the oil, it stated that it has authorized money for projects to study the effects of oil and lingering oil totaling over $16 million and will continue to monitor the oil to document its physical and chemical changes over time.
The Exxon Valdez Trustee Council expects that lingering oil will persist for decades; however, its representatives said that the evidence indicates that there are no current biological effects of the oil. The Exxon Valdez Trustee Council's priorities for future spending are outlined in the 2014 restoration plan update, and in addition to long-term herring research and lingering oil, the priorities include long-term monitoring of marine conditions and injured resources, shorter-term harbor restoration projects, and habitat protection. The Deepwater Horizon Trustee Council Has Used 13 Percent of the Restoration Trust Fund, and Most Restoration Activities Are in the Initial Planning Phase Since the federal and state governments reached a final settlement with BP in 2016 and the Deepwater Horizon Trustee Council finalized a programmatic restoration plan, four trustee implementation groups have issued initial independent restoration plans. Specifically, the Alabama, Louisiana, Mississippi, and Texas trustee implementation groups have issued initial restoration plans. According to the Deepwater Horizon Trustee Council, the trustee implementation groups covering Florida, Open Ocean, and Region-Wide restoration are in the midst of a multiyear planning effort and anticipate issuing initial restoration plans in 2019 or later. The trustee implementation groups are responsible for developing and approving restoration plans and resolutions, which, when approved, authorize money to be spent on restoration projects. This process includes soliciting project ideas, submitting proposed plans for public comment, and ensuring compliance with applicable laws and regulations, such as the National Environmental Policy Act. According to the trustee council, there is no specific timetable for approving future restoration plans, as plans are approved on an ongoing basis—typically for several projects at a time. The four completed restoration plans, together with early restoration spending and other activities, including planning and administrative efforts, account for all authorizations made by the Deepwater Horizon Trustee Council as of December 31, 2017, according to NOAA—the agency that manages the system the trustee council uses for financial reporting. As shown in figure 4, these authorizations total approximately $1.1 billion, or 13 percent of the $8.1 billion restoration trust fund, across five goals. The Deepwater Horizon Trustee Council has authorized roughly $460 million for habitat protection—about 10 percent of the almost $4.7 billion ordered for this use by the settlement. According to the 2016 programmatic restoration plan, habitat protection includes both conservation acquisition and habitat enhancement, such as creating, restoring, or enhancing coastal wetlands. For example, during the first phase of early restoration in 2012, the trustee council authorized $14.4 million to the Louisiana Coastal Protection and Restoration Authority to create 104 acres of new brackish marsh at Lake Hermitage in Barataria Bay, Louisiana. The project involved dredging sediment and planting native marsh vegetation to restore marsh habitat damaged by the spill. The project is currently in the monitoring phase. As of the end of 2017, the Deepwater Horizon Trustee Council had approved 34 habitat protection projects, many of which were still in progress as of December 2017.
The initial results of these projects include the restoration of over 4,000 acres of habitat and the creation of over 40 artificial reefs, according to a written statement by the federal trustees.

The trustee council has authorized roughly $349 million to enhance recreational use—about 83 percent of the almost $420 million ordered for this use by the settlement. According to the 2016 programmatic restoration plan, enhancing recreational use includes acquiring land along the coast, building improved or new infrastructure, and improving navigation for on-water recreation. For example, during the first phase of early restoration in 2012, the Deepwater Horizon Trustee Council authorized approximately $5.3 million to the Florida Department of Environmental Protection to repair and construct boat ramps in Pensacola Bay and Perdido Bay, Florida. Construction was completed in 2016, and the project is currently in the monitoring and operations and maintenance phase. As of the end of 2017, the Deepwater Horizon Trustee Council had approved 43 projects to enhance recreational use, many of which were still in progress as of December 2017. These projects have provided new or enhanced facilities, such as pavilions, picnic areas, and boat ramps, according to a written statement by the federal trustees.

The Deepwater Horizon Trustee Council has authorized roughly $218 million to restore coastal and marine wildlife—about 12 percent of the almost $1.8 billion ordered for this use by the settlement, primarily for birds ($108 million), sea turtles ($50 million), oysters ($38 million), and fish ($20 million). According to the 2016 programmatic restoration plan, restoring coastal and marine wildlife includes activities that restore the resources, such as fish, sea turtles, and deep coral communities, which contribute to a productive, biologically diverse, and resilient ecosystem. For example, during the first phase of early restoration in 2012, the trustee council authorized $11 million to the Mississippi Department of Environmental Quality to deploy a mixture of oyster shells, limestone, and concrete on 1,430 acres in waters off Hancock and Harrison Counties in Mississippi. This material, when placed in oyster spawning areas, provides a surface for free-swimming oyster larvae to attach and grow into oysters. The project is currently in the monitoring and operations and maintenance phase. As of the end of 2017, the Deepwater Horizon Trustee Council had approved 32 projects to restore coastal and marine wildlife.

Although the trustee council authorized millions of dollars to restore coastal and marine wildlife, it authorized 1 percent or less of the funds ordered by the settlement for sturgeon, marine mammals, submerged aquatic vegetation, and other seafloor species—such as corals. According to the 2016 consent decree, the Open Ocean implementation group is responsible for authorizing the majority of the restoration funds for these types of wildlife, but that trustee implementation group has not yet completed its initial restoration plan. According to NOAA, the complexity of restoring several of these resources necessitated additional preplanning and restoration technique development prior to considering specific restoration projects for these types of wildlife. The trustee implementation group is developing two restoration plans that will include projects for birds and sturgeon, as well as for sea turtles, fish, marine mammals, and corals, according to a Deepwater Horizon Trustee Council press release.
The trustee council released the first draft plan for public comment in October 2018 and plans to release the second plan in early 2019. In August 2017, the Deepwater Horizon Trustee Council announced that the Louisiana implementation group was soliciting project ideas to fund the restoration of submerged aquatic vegetation, among other types, to include in a future restoration plan but has not yet submitted such a plan for public review.

Roughly $27 million has been authorized for administrative oversight and monitoring activities, or about 3 percent of the almost $810 million that the settlement ordered for this use. The majority of the funding ($25 million) was for administrative oversight activities, and the balance was for monitoring. According to the 2016 programmatic restoration plan, administrative oversight includes the costs for trustees to guide project selection, implementation, and adaptive management. For the state trustees, all administrative costs are covered by their respective trustee implementation groups, and for federal trustees, all administrative costs are covered by the Open Ocean implementation group. For example, during the postsettlement phase, the trustee council authorized approximately $6.6 million to Interior for (1) participation on the trustee council; (2) restoration planning, plan development, and coordination with other trustees; (3) environmental compliance reviews; (4) technical assistance; and (5) financial management, among other uses. As of the end of 2017, the Deepwater Horizon Trustee Council had approved nine administrative oversight and monitoring projects, which remained ongoing as of December 31, 2017. The results of the trustee council's activities in this area so far include the completion of a monitoring and adaptive management manual and its standard operating procedures.

The Deepwater Horizon Trustee Council has authorized $4 million to restore water quality—about 1 percent of the $410 million that the settlement ordered for this use. According to the 2016 programmatic restoration plan, restoring water quality includes both reducing nonpoint nutrient pollution to coastal watersheds and improving water quality in Florida through efforts such as stormwater control and erosion control. As of the end of 2017, the Deepwater Horizon Trustee Council had approved two nonpoint nutrient reduction projects to address excessive nutrient loads in Gulf waters but no water quality projects in Florida. For example, in 2017, the Deepwater Horizon Trustee Council authorized approximately $224,000 to conduct restoration planning to develop, draft, and finalize a restoration plan addressing nonpoint nutrient reduction, among other goals. The trustee council has authorized few funds to date for this restoration goal because, in part, the Florida implementation group has not yet completed its first postsettlement restoration plan. In September 2017, the trustee council announced that the Florida implementation group was reviewing water quality project ideas for its initial restoration plan, and it released a draft of the plan for public comment in September 2018. According to the Deepwater Horizon Trustee Council, the final plan will be released in January 2019.
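The goal-by-goal shares cited above can be recomputed directly from the approximate dollar amounts in this section. The following minimal Python sketch is an illustration only, not the trustee council's financial reporting system; the amounts (in millions of dollars) are the rounded figures reported above.

    # Deepwater Horizon trust fund: amounts authorized versus amounts ordered
    # by the settlement, in millions of dollars, per the approximate figures
    # cited in this section (as of December 31, 2017).
    goals = {
        "Habitat protection": (460, 4700),
        "Recreational use": (349, 420),
        "Coastal and marine wildlife": (218, 1800),
        "Administrative oversight and monitoring": (27, 810),
        "Water quality": (4, 410),
    }

    for goal, (authorized, ordered) in goals.items():
        share = 100 * authorized / ordered
        print(f"{goal}: {share:.0f} percent of ordered funds authorized")

    # Overall share of the $8.1 billion restoration trust fund.
    total_authorized = sum(authorized for authorized, _ in goals.values())
    print(f"Total: ${total_authorized:,}M, about "
          f"{100 * total_authorized / 8100:.0f} percent of $8,100M")

Run as written, the sketch returns the rounded shares reported in this section (about 10, 83, 12, 3, and 1 percent, and about 13 percent overall), providing a simple consistency check on the figures.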
Interagency Committee Members Funded Oil Spill Research Projects from Fiscal Years 2011 through 2017, but the Committee Did Not Coordinate with All Relevant Entities

Nine of the interagency committee member agencies funded over 100 oil spill research projects per year from fiscal years 2011 through 2017, for a total cost of about $200 million; however, we found that the interagency committee did not coordinate its research with some key entities. More specifically, approximately half of the interagency committee members said internal coordination on such research improved during this time, but the committee may not have included all relevant agencies, and we found that the committee did not coordinate with relevant trustee councils.

Nine Member Agencies Funded over 100 Oil Spill Research Projects per Year for a Cost of About $200 Million from Fiscal Years 2011 through 2017

During fiscal years 2011 through 2017, 9 of the 15 interagency committee member agencies funded oil spill research projects, spending about $200 million on this research, based on our review of agency data from the member agencies. These nine agencies were the Bureau of Ocean Energy Management, BSEE, the Coast Guard, the Department of Energy, EPA, NASA, NOAA, the Pipeline and Hazardous Materials Safety Administration, and the U.S. Arctic Research Commission. One of these agencies—BSEE—spent about $84 million, or about 40 percent of the total amount spent by all nine agencies (see table 1). In March 2011, we reported that during fiscal years 2000 through 2010, seven interagency committee member agencies spent about $163 million on oil pollution research, according to officials from those agencies. Since we last reported on the interagency committee, three additional agencies told us that they also fund oil spill research—the Department of Energy, BSEE, and the U.S. Arctic Research Commission—while the U.S. Navy told us that it no longer funds oil spill research projects.

According to agency officials, the nine interagency committee member agencies funded from 100 to 200 research projects annually from fiscal years 2011 through 2017. These nine agencies reported funding research projects in one or more of the interagency committee's four oil spill research categories: prevention, preparedness, response, and injury assessment and restoration (see table 2).

The Interagency Committee Improved Internal Research Coordination Efforts but May Not Have Included All Relevant Agencies and Did Not Include the NRDA Trustee Councils

We reported in March 2011 that federal agencies conducted oil pollution research but that the interagency committee had taken limited actions to foster the communication and coordination of this research among member agencies and nonfederal stakeholders. More specifically, we noted that member agencies were not consistently represented on the interagency committee and interested nonfederal stakeholders reported limited contact with the interagency committee. We recommended, among other things, that the Commandant of the Coast Guard direct the chair of the interagency committee, in coordination with member agencies, to establish a more systematic process to identify and consult with key nonfederal stakeholders. Officials from 8 of the 15 member agencies said they believe that the interagency committee's coordination efforts have improved since the Deepwater Horizon oil spill in 2010.
In response to our recommendation on coordination with nonfederal stakeholders, the interagency committee has taken steps to improve its outreach; for example, we found that members consistently attend major oil spill conferences and workshops. In addition, we observed that the interagency committee invites outside speakers and researchers to its meetings to update the membership on ongoing research activities in academia, industry, and the government. The committee charter calls for meetings at least semiannually, but since fiscal year 2011 the interagency committee has held quarterly meetings with member agencies as well as meetings with outside groups of knowledgeable stakeholders. At the meetings, member agencies have the opportunity to present information on oil spill research they are conducting, share information about upcoming research conferences, and listen to presentations by outside groups.

According to member agency officials, some of the benefits of the interagency committee's improved coordination efforts include a reduction in research redundancies, increased understanding of the broader oil spill research community, the facilitation of relationships, the identification of research gaps, and the ability to leverage resources. U.S. Navy officials said that the interagency committee facilitated communication between member agencies that use the Navy's equipment for research purposes. As a result of discussions that took place at an interagency meeting, the Navy offered the use of a hydraulic power unit to the Coast Guard for hydraulic testing in Arctic conditions in Alaska. Officials from a few of the member agencies, including the Coast Guard, BSEE, EPA, and NOAA, told us that they collaborate on oil spill-related research efforts with other member agencies of the interagency committee.

In addition, the release of the 2015-2021 research and technology plan provides a new baseline for research, including 150 priority oil pollution research needs within 25 research areas. According to the research and technology plan, future updates will reflect advancements in oil pollution technology and changing research needs by capitalizing on the unique roles and responsibilities of each member agency. According to officials from one member agency, the revised research and technology plan has helped member agencies coordinate with other member agencies to leverage funding and expertise. Member agencies also cooperate with nonfederal research entities on research needs and activities.

The interagency committee has demonstrated key practices that strengthen coordination, such as agreeing on common terminology and priorities for oil spill research in its revised research and technology plan. However, the committee could enhance coordination by ensuring that relevant participants have been included—another key practice. Under OPA, certain federal agencies are members of the interagency committee, but member agencies may choose which office or official represents them at meetings and coordinates with other members on committee-related work. Officials from 6 of the 15 member agencies told us that their particular research efforts are not the focus of ICCOPR meetings, and therefore ICCOPR's ability to coordinate their research efforts is less valuable. For example, NASA officials said the office representing their agency at meetings is not involved in oil spill research, but other offices within their agency fund or conduct relevant research.
In addition, 7 of the 15 officials we interviewed from member agencies suggested that other federal agencies could be relevant to the committee's research efforts. For example, officials we interviewed from several member agencies suggested including the U.S. Geological Survey (USGS) as a full member because of its relevant research and mapping expertise. According to committee documents, the interagency committee considered adding USGS in 2015 but has not made a decision on USGS's membership. The Commandant of the Coast Guard, in his or her capacity as chair of the interagency committee, has been delegated authority to appoint additional agencies to the committee as appropriate.

A leading practice for collaboration calls for interagency groups to ensure that all relevant participants have been included in collaborative efforts. According to this leading practice, participants should have the appropriate knowledge, skills, and abilities to contribute to the outcomes of the collaborative effort. However, interagency committee member agency officials said the committee has not systematically reviewed its membership to determine which offices within current member agencies are the most relevant to its mission and whether adding other federal agencies as members would be beneficial. By systematically reviewing its membership to determine whether any additional agencies should be involved in coordinating oil spill research and whether the most appropriate offices within member agencies are represented, the interagency committee could improve its ability to coordinate research among federal agencies.

In addition, agency officials knowledgeable about the work of the NRDA trustee councils are not the same officials representing their agencies as members on the interagency committee. The research and technology plan notes that the interagency committee's injury assessment and restoration research is intended to support the NRDA process. However, the NRDA trustees who manage the restoration funds for the Exxon Valdez and Deepwater Horizon oil spills told us that they have not coordinated or communicated on oil spill research or restoration efforts with the interagency committee; therefore, they would not have been involved with developing the research and technology plan. In addition, some trustee council members told us that they were not even aware that the interagency committee existed.

Under OPA, one of the interagency committee's responsibilities is to coordinate with federal agencies and external entities on an oil pollution research program that includes methods to restore and rehabilitate natural resources damaged by oil spills. As previously discussed, the NRDA trustee councils are charged with assessing natural resource damages for the natural resources under their trusteeship and developing and implementing plans for restoration efforts. The research that the interagency committee members fund includes research on restoration that could be pertinent to the work of the NRDA trustee councils. For example, following the oil spill in 2010, the Deepwater Horizon Trustee Council evaluated baseline conditions for several different representative species, such as sea turtles and Gulf sturgeon, to quantify the extent of injury as part of the restoration planning process that OPA regulations required. Some interagency committee member agencies, such as NOAA and BOEM, fund research on baseline data that could inform the NRDA trustee councils' injury assessment work.
In turn, the NRDA trustee councils' work could also inform the interagency committee's coordination of future oil spill research by, for example, identifying research gaps to be prioritized in updates to the research and technology plan. By coordinating with the NRDA trustee councils, the interagency committee could ensure that its research informs and supports the councils' damage assessment and restoration efforts and better leverages members' resources.

Literature Suggests the Effectiveness of Offshore Oil Spill Response Techniques Varies Based on Regional Environmental Differences and Other Factors

According to the literature we reviewed, environmental differences between the Gulf of Mexico and Arctic regions, as well as factors such as the type of oil, influence the potential effectiveness of various oil spill response techniques. In each region, environmental conditions, such as water and air temperature, water movement, and salinity, influence how effective oil spill response techniques can be. Further, according to the literature we reviewed, these conditions determine which response techniques are appropriate.

Environmental conditions, such as ocean water and air temperature, can influence the effectiveness of natural oil removal through evaporation or biodegradation. These processes may occur more quickly in warmer climates, such as in the Gulf of Mexico. In the event of an oil spill, communities of microbes can bloom to respond to the new supply of oil. According to a 2011 report from the American Academy of Microbiology, these microbes can biodegrade up to 90 percent of some light crude oil, but the largest and most complex molecules—such as the ones that make up road asphalt—are not significantly biodegradable. A 2016 study found that higher temperatures lead to increased biodegradation and that increased salinity had a small positive impact on crude oil removal. However, the American Academy of Microbiology report also states that while microbes can biodegrade oil over time, the process may not be fast enough to prevent ecological damage. Therefore, immediate containment or physical removal of the oil is an important first response.

The effectiveness of oil removal is also influenced by the condition of the water, determined by wind, waves, and currents. According to literature we reviewed, winds and currents can make it more difficult to remove the oil, increasing the likelihood of the oil spill affecting larger areas and additional plant and animal populations. Further, high seas and rough waters can make some response techniques less effective. According to a 2017 study that estimates the effect of environmental conditions on deploying oil spill response techniques in the Arctic Ocean, most response techniques are not suitable during Arctic winters, between November and June.

Literature we reviewed also shows that other factors influence the effectiveness of response techniques, including oil type, oil thickness, and the location and depth of oil spill events. Light crude oil typically evaporates and biodegrades more quickly than heavy crude oil, which is more viscous. However, if the oil slick is too thin, it becomes difficult to contain and limits response options. Oil spilled in a remote location, such as the place where the Exxon Valdez oil spill occurred, may complicate response efforts because equipment and personnel are far away and may not be able to respond within the window of opportunity before the oil spreads.
According to Coast Guard officials, during an oil spill response, various response techniques are used to minimize the negative effects on the water surface, water column, and shorelines, each with different applications, advantages, disadvantages, and risks. The response techniques we reviewed are the following:

Mechanical recovery in the marine environment uses a variety of containment booms, barriers, and skimmers, as well as natural and synthetic absorbent materials, to capture and store the spilled oil until it can be disposed of properly.

In-situ burning, meaning in-place burning, is the process of igniting and burning oil slicks in a controlled environment.

Dispersants are chemicals that can mitigate the immediate damage caused by oil at the surface and help accelerate the natural removal of the spilled oil. Dispersants work similarly to dish soap, breaking up the oil into small droplets that can more easily spread through the water.

Mechanical Recovery Safely Removes Spilled Oil but Has Limitations in Certain Conditions

The advantage of mechanical recovery is that it physically removes the oil from the water, minimizing the negative effects of the oil. Mechanical recovery can be used to safely remove oil where other methods might cause health risks or environmental damage, according to a 2013 report published by the National Academies Press. However, mechanical recovery has limitations in some conditions. If the oil slick is thin, it is difficult to achieve a significant rate of recovery, and a lot of equipment is required to concentrate the slick so it is thick enough to be collected. According to literature we reviewed, mechanical recovery is less effective during inclement weather or high seas because the oil spreads and can emulsify in these conditions and is difficult to contain. Low temperatures and the presence of ice also make it challenging to achieve high recovery rates, and mechanical recovery becomes increasingly ineffective as wave heights increase, according to literature we reviewed. Furthermore, the process of recovering the oil is labor- and cost-intensive, and recovery can be delayed if the equipment is not readily available.

Mechanical recovery is especially challenging to implement quickly when spills occur in remote areas, such as with Exxon Valdez, or where the oil is traveling quickly and broadly, such as with Deepwater Horizon. For example, according to a 1999 EPA report, skimmers were not readily available during the first 24 hours following the Exxon Valdez oil spill, repairs to damaged skimmers were time-consuming, and continued inclement weather slowed down the recovery efforts. In addition, a disadvantage of mechanical recovery is that temporary storage for large amounts of oil is frequently needed and recovered oil is generally brought back to the shore for disposal, according to Interior officials. Because of the resources required to physically remove the oil, it is difficult to recover a large percentage of the spilled oil through mechanical recovery in large oil spills.

In-Situ Burning Can Efficiently Eliminate Oil but Has Potential Side Effects

According to two studies and an agency document we reviewed, in-situ burning can be a highly effective technique for eliminating spilled oil from the sea surface. In response to the Deepwater Horizon oil spill, roughly 5 to 6 percent of all of the spilled oil was burned, about double the amount of oil removed with skimmers, according to a 2013 National Academies Press report.
The primary advantage of in-situ burning is its efficiency. In ideal conditions, this method can quickly eliminate spilled oil. According to several reports we reviewed, in optimal conditions, in-situ burning can eliminate up to 90 percent of the spilled oil contained for burning, with a relatively minimal investment of equipment or manpower. Literature we reviewed suggests that it is especially suited for response in Arctic conditions, particularly in ice-covered water where logistics and environmental conditions may preclude other options and where the ice can act as a natural barrier to help keep the oil slick thick enough to burn.

However, in-situ burning also has its disadvantages. Burning has a narrow window of opportunity, and if the approval process takes longer than it takes to prepare for the burn, the opportunity for using in-situ burning may be lost, according to a NOAA document. Similar to mechanical recovery, burning can only be used if the oil slick is a certain thickness and when waves, wind, and currents are not too strong. In-situ burning becomes increasingly difficult in strong winds or with waves over 3 feet tall. A second disadvantage is that the burn residue caused by in-situ burning may have negative effects on ocean life, though studies we reviewed differed on this matter. According to a 2014 National Academies Press report about oil spills in the U.S. Arctic environment, a series of studies in the 1990s found that burn residues have little to no impact on oceanic organisms. However, a 2015 review on burn residues from in-situ burning in Arctic waters concluded that not enough research has been done on the side effects of burn residue from in-situ burning. According to NOAA officials, another disadvantage of in-situ burning is that the soot from inefficient combustion can result in unsightly and unhealthy particulates that may affect any downwind populations before the smoke dissipates.

Use of Dispersants Is Versatile but Its Effectiveness Depends on Several Factors

According to Coast Guard officials, chemical dispersants are typically used in conjunction with mechanical means and are considered when offshore mechanical methods are recognized as inadequate because of the spill volume, the geographical extent of the slicks, or specific on-scene environmental conditions. According to the literature we reviewed, an advantage of dispersants is their versatility. Dispersants are not as limited by environmental conditions as other response techniques, and they can be applied in surface or underwater environments. Further, dispersants can be applied through a variety of mechanisms. For example, they can be applied on oil slicks at the water's surface by boats, planes, or helicopters. Dispersants can also be used below the surface, through subsea injection at the site of the spill, as was applied in response to the Deepwater Horizon oil spill.

However, the literature suggests that the effectiveness of dispersants depends on many factors, such as the type of oil, the type of dispersant used, and sea and weather conditions. According to Coast Guard officials, the decision to use dispersants is made after careful consideration of the location of the spill, type of oil spilled, seasonal resources at risk, and the environmental conditions at the time, as these factors influence the effectiveness and practicality of using dispersants, as well as the advisability of the tactic in the face of other options and risks.
These officials also noted that dispersants are rarely used in the United States, but in certain situations, where mechanical means such as booming and skimming may not be effective, dispersants may be considered. In addition to the uncertainty of their effectiveness, the potential environmental risks associated with dispersants are also uncertain. One 2014 study states that while dispersants were thought to undergo rapid degradation in the water column, there was evidence that the dispersants remained on Gulf of Mexico beaches almost 4 years after the Deepwater Horizon oil spill. During the Deepwater Horizon oil spill, responders applied over 1.8 million gallons of chemical dispersants to the spilled oil—an unprecedented volume in the United States. It was the first major oil spill to use dispersants on such a large scale, and approximately 42 percent of these dispersants were applied subsea in the first operational subsea application of this technique. According to Coast Guard officials, the toxicity and long-term effects of large-scale application of dispersants on the ecology of marine life are unknown. According to literature we reviewed, there is evidence that chemically dispersed oil and some dispersant compounds may be toxic to some marine life, especially those in early life stages. Coast Guard officials also said that continued monitoring and further review of scientific research should improve the understanding of the impact of dispersants on mitigating the effects of oil spills as well as their overall environmental impact.

Conclusions

Following initial response and cleanup efforts, restoration activities related to a significant offshore oil spill, such as those from Exxon Valdez or Deepwater Horizon, can endure for decades. Federal agencies of the interagency committee conduct and fund research projects related to preventing, preparing for, responding to, and restoring the environment after oil spills. The interagency committee has improved the coordination of federal oil spill research efforts since the Deepwater Horizon oil spill in 2010. However, the interagency committee has not systematically reviewed its membership to determine which offices within current member agencies are the most relevant to its mission and whether adding other federal agencies as members would be beneficial. By systematically reviewing its membership to determine whether any additional agencies should be involved in coordinating oil spill research and whether the most appropriate offices within member agencies are represented, the interagency committee could improve its ability to coordinate research among federal agencies.

In addition, the interagency committee does not coordinate with the NRDA trustee councils that manage the large restoration funds and monitor the restoration of damaged resources after a specific spill, such as the Exxon Valdez and Deepwater Horizon oil spills. Coordinating with the NRDA trustee councils could help ensure that the interagency committee's oil spill research program effectively supports the councils' damage assessment and restoration efforts, promotes better knowledge sharing between the groups, and leverages its members' oil spill research resources.

Recommendations for Executive Action

We are making the following two recommendations to the Commandant of the U.S. Coast Guard at the Department of Homeland Security:
The Commandant of the U.S. Coast Guard should direct the chair of the Interagency Coordinating Committee on Oil Pollution Research, in coordination with member agencies, to systematically review the committee's membership to determine whether any additional agencies should be involved in coordinating oil spill research and whether the most appropriate offices within member agencies are represented. (Recommendation 1)

The Commandant of the U.S. Coast Guard should direct the chair of the Interagency Coordinating Committee on Oil Pollution Research, in coordination with member agencies, to coordinate with the relevant Natural Resource Damage Assessment trustee councils to help ensure that the interagency committee's research informs and supports the councils' damage assessment and restoration efforts. (Recommendation 2)

Agency Comments

We provided our draft report to the Department of Agriculture, Department of Commerce, Department of Defense, Department of Energy, Department of Homeland Security, Department of the Interior, Department of Transportation, Environmental Protection Agency, National Aeronautics and Space Administration, and U.S. Arctic Research Commission for review and comment. In comments reprinted in appendix II, the Department of Homeland Security concurred with our recommendations. In addition, the Departments of Commerce, Homeland Security, and the Interior, as well as EPA, provided technical comments, which we incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of the report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Energy, Homeland Security, the Interior, and Transportation; the Administrators of EPA and NASA; the Executive Director of the U.S. Arctic Research Commission; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report examines (1) how the Natural Resource Damage Assessment (NRDA) trustee councils have used the restoration trust funds for the Exxon Valdez and Deepwater Horizon oil spills and the status of the restoration efforts; (2) the status of the Interagency Coordinating Committee on Oil Pollution Research's (interagency committee) oil spill research efforts and how coordination of such efforts has changed since we last reported on it in March 2011; and (3) what literature suggests about the effectiveness of various oil spill response techniques in the Arctic and the Gulf of Mexico.
To examine how the NRDA trustee councils used the restoration funds from the Exxon Valdez oil spill (from October 1992 to January 2018) and the Deepwater Horizon oil spill (from April 2012 to December 2017) for restoration and the status of the restoration efforts, we obtained data from each trustee council on the amount of funds (1) ordered by the settlement for each restoration type; (2) authorized by the trustees for, but not yet spent on, restoration activities (authorizations); (3) spent on restoration activities (expenditures); and (4) not yet authorized for restoration activities (remaining balance) through calendar year 2017 for Deepwater Horizon and through January 31, 2018, for Exxon Valdez. To assess the reliability of the financial data, we reviewed related budget documentation; interviewed knowledgeable council staff about how fund balances are recorded and reported; reviewed the totals for obvious errors and inconsistencies; and reviewed internal control documents, such as a database manual and standard operating procedures. We determined that the data were sufficiently reliable for the purposes of our report.

We examined the approved restoration plans (1994 restoration plan and 2014 restoration plan update for the Exxon Valdez oil spill, and the 2016 programmatic damage assessment and restoration plan for the Deepwater Horizon oil spill) and, when available, annual reports on restoration activities (1994 through 2018 annual reports for the Exxon Valdez Oil Spill Trustee Council (Exxon Valdez Trustee Council) and 2016 and 2017 annual financial reports for the Deepwater Horizon Natural Resource Damage Assessment Trustee Council (Deepwater Horizon Trustee Council)). We also reviewed project reports and scientific studies that the trustee councils funded to gain a better understanding of the status of restoration of injured natural resources, restoration priorities, activities, and progress made by the trustee councils. We reviewed laws and regulations that provide the legal authority for federal agencies to intervene and respond after an oil spill, such as the Oil Pollution Act of 1990 (OPA), the Clean Water Act, and NRDA regulations.

We met with officials from the Exxon Valdez Trustee Council to discuss the distribution of settlement money for restoration purposes after the Exxon Valdez oil spill, and with officials from the Deepwater Horizon Trustee Council, Gulf Coast Ecosystem Restoration Council (RESTORE Council), and the National Fish and Wildlife Foundation to discuss the distribution of settlement money for restoration purposes after the Deepwater Horizon oil spill. Additionally, in May 2018, we traveled to multiple locations in the former spill area in Alaska to observe the extent of restoration efforts and ongoing issues. Along with researchers sent by the Exxon Valdez Trustee Council, we excavated three pits that revealed lingering oil about 6 inches below the surface of the beach on Eleanor Island in Prince William Sound. These researchers told us that oil previously uncovered at this location had been linked to the Exxon Valdez oil spill. In addition to fieldwork in Alaska, in November 2017 and February 2018, we attended public meetings in Alabama and Louisiana to learn about restoration plans for the Gulf States.
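The four categories of fund data listed above are related by a simple accounting identity: for a given restoration type, the settlement-ordered amount is divided among expenditures, authorizations not yet spent, and the remaining balance. The minimal Python sketch below illustrates that decomposition. The dollar amounts are hypothetical, not actual trustee council data, and the sketch ignores any investment earnings a fund may accrue.

    # Hypothetical amounts (millions of dollars) for one restoration type,
    # illustrating the four fund categories obtained from each trustee council.
    ordered = 1000.0            # (1) ordered by the settlement
    authorized_unspent = 150.0  # (2) authorized by trustees but not yet spent
    expenditures = 250.0        # (3) spent on restoration activities

    # (4) remaining balance: not yet authorized for restoration activities.
    remaining = ordered - authorized_unspent - expenditures
    assert remaining >= 0, "authorized and spent amounts exceed the ordered amount"

    used_share = 100 * (authorized_unspent + expenditures) / ordered
    print(f"Remaining balance: ${remaining:,.0f}M; "
          f"authorized or spent: {used_share:.0f} percent")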
To examine the status of the interagency committee's federal oil spill research efforts and how coordination of such efforts has changed since we last reported on it in March 2011, we requested funding data and project information on oil spill research from all 15 member agencies of the interagency committee. We received data from the 9 member agencies that reported funding oil spill research projects from fiscal years 2011 through 2017. These 9 agencies provided data on agency expenditures on oil spill research and the research category of any projects funded. We assessed the reliability of the data by reviewing related documentation, interviewing knowledgeable agency officials, and reviewing agency internal controls for each of the 9 member agencies that provided us data about the steps they take to maintain this information. We determined that in most cases the data were sufficiently reliable for the purposes of our report. However, we chose not to provide the National Oceanic and Atmospheric Administration's (NOAA) agency expenditures for oil spill research because NOAA officials were unable to provide reliable data on the actual amount the agency spent on such research during the time period we requested. In addition, some agency officials we interviewed raised the concern that their agencies do not track oil spill research funding and that, therefore, the information they provided on expenditures for such research may not include all relevant efforts that could inform oil spill prevention, preparedness, response, and restoration.

We also interviewed officials from the 15 member agencies to learn about each agency's oil spill research efforts and participation in and coordination through the interagency committee, and we compared their coordination practices to one of our leading practices for federal interagency collaboration in order to evaluate the interagency committee's efforts to coordinate such research. We chose to focus on the collaboration practice pertaining to participants because it appeared to be the most challenging for the interagency committee based on the findings of our previous March 2011 report, the actions taken by the interagency committee to address our recommendations from that report, and our own findings from our research for this report. In addition, we reviewed the 2013 interagency committee charter, the committee's most recent biennial reports to Congress covering fiscal years 2008 through 2017, and the committee's third multiyear research and technology plan for fiscal years 2015 through 2021; attended two committee meetings; and reviewed minutes of eight past meetings. We also reviewed OPA's provisions that established and govern the interagency committee's coordination efforts and membership, as well as various related executive documents.

To examine what literature suggests about the effectiveness of various oil spill response techniques in the Arctic and the Gulf of Mexico, we conducted a literature search for studies and articles that analyzed and summarized the effectiveness of various oil spill response techniques in those regions. We identified existing literature from 1989 (the year of the Exxon Valdez oil spill) to March 2018 by searching various databases, such as Scopus and ProQuest. We chose to focus on three primary response techniques—mechanical recovery, in-situ burning, and the use of dispersants—used to clean up after offshore oil spills, according to knowledgeable stakeholders and the literature we reviewed.
The database search produced over 800 results. Our subject matter expert helped the team narrow this list to 50 results, of which we relied on 16 studies and articles that we determined were most relevant to our research objective of determining the effectiveness of various oil spill response techniques in the Arctic and the Gulf of Mexico. We excluded some literature that was too specific for the scope of our review, and we generally considered literature published recently, within the past 10 years, to be more relevant. We supplemented the list of studies from these databases with literature from the Congressional Research Service, the National Academies Press, the Environmental Protection Agency (EPA), NOAA, the American Academy of Microbiology, the Arctic Oil Spill Response Joint Industry Programme, and our previous report on oil dispersants. In total, we relied upon 22 literature results to inform the findings of our objective. For a complete list of the literature, see the bibliography. We shared our summary of the literature search findings with agency officials representing some of the interagency committee member agencies. The following agencies responded with comments, and we included their perspectives where relevant: the Department of the Interior, EPA, NOAA, and the U.S. Coast Guard. We did not independently evaluate the effectiveness of these response techniques.

We conducted this performance audit from July 2017 to January 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Christine Kehr (Assistant Director), Amy Ward-Meier (Analyst-in-Charge), Colleen Candrl, Nirmal Chaudhary, Juan Garay, Cindy Gilbert, Matt Hunter, Jessica Lewis, Joe Maher, Greg Marchand, Kimberly (Kim) McGatlin, Cynthia Norris, Travis Schwartz, Sheryl Stein, Sara Sullivan, Vasiliki (Kiki) Theodoropoulos, Matthew Valenta, Sarah Veale, and Dan Will made key contributions to this report.

Bibliography

We reviewed literature to examine what it suggests about the effectiveness of various oil spill response techniques in the Arctic and the Gulf of Mexico. This bibliography contains citations for the studies and articles that contributed to these findings.

American Academy of Microbiology. Microbes and Oil Spills FAQ. Washington, D.C.: 2011.

Arctic Oil Spill Response Technology Joint Industry Programme. Synthesis Report. D. Dickens, DF Dickens Associates, LLC, May 3, 2017.

Belore, Randy C., Ken Trudel, Joseph V. Mullin, and Alan Guarino. "Large-scale Cold Water Dispersant Effectiveness Experiments with Alaskan Crude Oils and Corexit 9500 and 9527 Dispersants." Marine Pollution Bulletin, vol. 58 (2009): 118-128.

Boufadel, Michel C., Xiaolong Geng, and Jeff Short. "Bioremediation of the Exxon Valdez Oil in Prince William Sound Beaches." Marine Pollution Bulletin, vol. 113 (2016): 156-164.

Brakstad, Odd G., Trond Nordtug, and Mimmi Throne-Holst. "Biodegradation of Dispersed Macondo Oil in Seawater at Low Temperature and Different Oil Droplet Sizes." Marine Pollution Bulletin, vol. 93 (2015): 144-152.
Committee on Responding to Oil Spills in the U.S. Arctic Marine Environment; Ocean Studies Board; Polar Research Board; Division on Earth and Life Studies; Marine Board; Transportation Research Board; National Research Council. Responding to Oil Spills in the U.S. Arctic Marine Environment. Washington, D.C.: National Academies Press, 2014.

Committee on the Effects of the Deepwater Horizon Mississippi Canyon-252 Oil Spill on Ecosystem Services in the Gulf of Mexico; Ocean Studies Board; Division on Earth and Life Studies; National Research Council. An Ecosystem Services Approach to Assessing the Impacts of the Deepwater Horizon Oil Spill in the Gulf of Mexico. Washington, D.C.: National Academies Press, December 20, 2013.

Corn, Lynne M., and Claudia Copeland. The Deepwater Horizon Oil Spill: Coastal Wetland and Wildlife Impacts and Response. Congressional Research Service, July 7, 2010.

Environmental Protection Agency, Office of Emergency and Remedial Response. Understanding Oil Spills and Oil Spill Response. EPA 540-K-99-007, December 1999.

Fletcher, Sierra, Tim Robertson, Bretwood Higman, and Elise DeCola. "Estimating Impact of Environmental Conditions on Deployment of Marine Oil Spill Response Tactics in the U.S. Arctic Ocean." Proceedings of the Fortieth AMOP Technical Seminar. Ottawa: Environment and Climate Change Canada, 2017, 246-264.

Fritt-Rasmussen, Janne, Susse Wegeberg, and Kim Gustavson. "Review on Burn Residues from In Situ Burning of Oil Spills in Relation to Arctic Waters." Water Air Soil Pollution, vol. 226 (2015).

GAO. Oil Dispersants: Additional Research Needed, Particularly on Subsurface and Arctic Applications. GAO-12-585. Washington, D.C.: May 30, 2012.

Naseri, M., and J. Barabady. "Performance of Skimmers in the Arctic Offshore Oil Spills." In Safety and Reliability: Methodology and Applications. London: Taylor & Francis Group, 2015, 607-614.

National Oceanic and Atmospheric Administration. Oil Spill Behavior, Response and Planning: Open-water Response Strategies: In-situ Burning. August 1997.

Nedwed, Tim, Tom Coolbaugh, and Amy Tidwell. "Subsea Dispersant Use during the Deepwater Horizon Incident." Proceedings of the Thirty-Fifth AMOP Technical Seminar on Environmental Contamination and Response. Vancouver, BC, Canada: ExxonMobil Upstream Research Company, 2012, 506-518.

Nyankson, Emmanuel, Dylan Rodene, and Ram B. Gupta. "Advancements in Crude Oil Spill Remediation Research After the Deepwater Horizon Oil Spill." Water Air Soil Pollution (2016).

Rahsepar, Shokouh, Martijn P.J. Smit, Albertinka J. Murk, Huub H.M. Rijnaarts, and Alette A.M. Langenhoff. "Chemical Dispersants: Oil Biodegradation Friend or Foe?" Marine Pollution Bulletin, vol. 108 (2016): 113-119.

Ramseur, Jonathan L. Oil Spills: Background and Governance. Congressional Research Service, September 15, 2017.

Sharma, Priyamvada, and Silke Schiewer. "Assessment of Crude Oil Biodegradation in Arctic Seashore Sediments: Effects of Temperature, Salinity, and Crude Oil Concentration." Environmental Science and Pollution Research (2016): 14881-14888.

Shi, X., P.W. Bellino, A. Simeoni, and A.S. Rangwala. "Experimental Study of Burning Behavior of Large-scale Crude Oil Fires in Ice Cavities." Fire Safety Journal, vol. 79 (2016): 91-99.

United States Coast Guard. On Scene Coordinator Report Deepwater Horizon Oil Spill. September 2011.

White, Helen K., Shelby L. Lyons, Sarah J. Harrison, David M. Findley, Yina Liu, and Elizabeth B. Kujawinski. "Long-Term Persistence of Dispersants Following the Deepwater Horizon Oil Spill." Environmental Science & Technology Letters (2014): 295-299.
Why GAO Did This Study

The Exxon Valdez and Deepwater Horizon oil spills are two of the largest offshore oil spills in U.S. history, causing long-lasting damage to marine and coastal resources. OPA includes provisions to prevent and respond to such oil spills by authorizing (1) federal-state trustee councils that manage billions of dollars from legal settlements and (2) an interagency committee to coordinate oil pollution research, among other things.

GAO was asked to review the federal government's response, restoration, and research efforts after the Exxon Valdez and Deepwater Horizon oil spills. This report examines, among other things, (1) how the trustee councils have used the restoration trust funds and the status of restoration and (2) the interagency committee's coordination of oil spill research efforts. GAO reviewed the councils' plans for the funds and how they were used, federal funding of oil spill research by member agencies, and key laws. Also, GAO evaluated the coordination of such efforts against a leading collaboration practice. GAO interviewed members of the trustee councils and the interagency committee.

What GAO Found

The trustee councils, composed of federal and state members, have used portions of the restoration trust funds from the Exxon Valdez and Deepwater Horizon oil spill settlements to restore natural resources. From October 1992 to January 2018, the Exxon Valdez Oil Spill Trustee Council used about 86 percent of the fund's roughly $1 billion, primarily on habitat protection and restoration of damaged natural resources. According to the council, all but 5 of the 32 natural resources and human services identified as damaged by the spill have recovered or are recovering. The health of Pacific herring is one example of a resource that has not yet recovered. Further, the presence of lingering oil remains a concern almost 30 years after the spill. In May 2018, GAO accompanied trustee council researchers to the spill area and observed the excavation of three pits that revealed lingering oil roughly 6 inches below the surface of the beach, as captured in the photo below.

The Deepwater Horizon Natural Resource Damage Assessment Trustee Council finalized a programmatic restoration plan in 2016; four trustee implementation groups have since issued initial restoration plans for designated restoration areas, and three anticipate issuing restoration plans in 2019 or later. From April 2012 to December 2017, the council used 13 percent of the at least $8.1 billion restoration trust fund, mostly on habitat protection, enhancing recreation, and marine wildlife and fishery restoration.

The Oil Pollution Act of 1990 (OPA), which was enacted after the Exxon Valdez spill in 1989, established the Interagency Coordinating Committee on Oil Pollution Research (interagency committee) to coordinate oil pollution research among federal agencies and with relevant external entities, among other things. However, according to the trustee council members who manage the restoration trust funds, the committee does not coordinate with the trustee councils, and some of them were not aware that the interagency committee existed. The research of the member agencies could be relevant to the trustee councils' work on restoration. By coordinating directly with the trustee councils, the interagency committee could ensure better knowledge sharing between groups and leverage its member agencies' resources to inform and support the work of the councils.
What GAO Recommends

GAO recommends, among other things, that the interagency committee coordinate with the trustee councils to support their work and research needs. The agency agreed with GAO's recommendations.
Background

Phased retirement arrangements are programs that allow older workers to reduce their working hours to transition into retirement, rather than stopping working abruptly at a given age. The option to transition into retirement through phased retirement encourages older workers who might otherwise retire immediately to continue working. Delayed retirement may help alleviate pressures on national pension systems and address labor shortages and shortages of skilled workers. Phased retirement programs exist in both the public and private sectors and are used by employers that cover workers through both defined benefit (DB) and defined contribution (DC) retirement plans. The programs sometimes include a partial draw-down of pension benefits for workers while they continue to work and may include a knowledge-transfer component. Phased retirement programs are often called "flexible," "partial," or "gradual" retirement programs.

Sources of Retirement Income

Similar to the United States, the retirement systems in other developed countries consist of three main pillars: a national pension, similar to the U.S. Social Security program; workplace employer-sponsored pensions or retirement savings plans; and individual savings. Retirement plans can be broadly classified as DB or DC. A DB plan promises a stream of payments at retirement for the life of the participant, based on a formula that typically takes into account the employee's salary, years of service, and age at retirement. A DC plan, such as a 401(k) plan in the U.S., allows individuals to accumulate tax-advantaged retirement savings in an individual account based on employee and/or employer contributions and the investment returns (gains and losses) earned on the account. With DC plans, certain risks and responsibilities shift from the plan sponsor (employer) to the plan participant (employee). For example, workers with a DC plan often must decide how much to contribute, how to invest those contributions, and how to spend down the savings in retirement. For DB plans, many of those decisions reside with the employer. Some retirement plans combine features of both DB and DC plans, often referred to as hybrid plans. A simplified numerical illustration of the two plan types follows the descriptions below.

National pensions: According to literature we reviewed, many countries have created retirement plans for their citizens and residents to provide income when they retire. These plans are typically earnings-based and require employer and employee contributions over a number of years, with pension benefits not accessible before a certain age. National pensions are generally DB plans, similar to the U.S. Social Security program.

Employer-sponsored pensions or retirement savings plans: Employer-based pensions or retirement savings plans are set up by employers to help ensure their workers have income during retirement. Employer-sponsored plans often require both the employer and employee to contribute money to a fund during employment so that the employee may receive benefits upon retirement. Employer-sponsored pensions typically refer to DB plans that promise a source of lifetime income at retirement, whereas retirement savings plans are typically DC plans, with retirement benefits that accrue based on contributions and the performance of the investments in the employees' individual accounts. Over the past several decades, there has been a significant shift in private sector employer-based retirement plans from traditional DB plans to DC plans.
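As a concrete illustration of the DB and DC mechanics described above, the following minimal Python sketch compares a formula-based DB benefit with a contribution-based DC account balance. All parameter values (the accrual rate, salary, contribution, and investment return) are hypothetical assumptions for illustration; they are not figures from this report or from any particular plan.

    # Hypothetical DB versus DC benefit mechanics; every parameter value below
    # is an illustrative assumption, not a figure from this report.
    years_of_service = 30
    final_salary = 60_000.0

    # DB plan: lifetime annuity from a formula such as
    # accrual rate x years of service x final salary.
    db_accrual_rate = 0.015  # assumed 1.5 percent per year of service
    db_annual_benefit = db_accrual_rate * years_of_service * final_salary

    # DC plan: account balance built from annual contributions plus
    # compounded investment returns (gains and losses).
    annual_contribution = 6_000.0  # assumed combined employee and employer amount
    annual_return = 0.05           # assumed average annual return
    dc_balance = 0.0
    for _ in range(years_of_service):
        dc_balance = (dc_balance + annual_contribution) * (1 + annual_return)

    print(f"DB: ${db_annual_benefit:,.0f} per year for life")
    print(f"DC: ${dc_balance:,.0f} account balance at retirement")

Under these assumptions, the DB worker receives $27,000 per year for life, while the DC worker retires with a balance of roughly $419,000 that he or she must then invest and draw down, which reflects the shift in risk and responsibility described above.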
In the U.S., this shift has been to 401(k)s as the primary employer-sponsored retirement plans.

Individual savings: Individuals can augment their retirement income from the national pension and employer-sponsored plans with their own savings, which would include any home equity, investments, personal retirement savings accounts like Individual Retirement Accounts (IRA) used in the United States, and other non-retirement savings.

Population Aging and Economic Productivity

Population aging, primarily due to declining fertility rates and increasing life expectancy, has raised concerns about the sustainability and adequacy of pensions, especially as many workers continue to exit the labor force before the statutory or full retirement age. Research indicates that while certain countries are aging more rapidly than others, population aging will affect most OECD countries, including the United States, over the coming decades. For example, the share of the population aged 65 and older is projected to increase significantly by 2030 (see fig. 1). According to a 2017 OECD report, since 1970, the average life expectancy at age 60 in OECD countries has risen from 18 years to 23.4 years and, by 2050, it is forecast to increase to 27.9 years. At that time, the average person is expected to live to nearly 90 years old. The increased life expectancy means that workers are spending more years in retirement.

In many instances, the aging population is placing additional pressure on public pension systems and has raised concerns about the solvency of national pension systems and the long-term adequacy of benefits. In response, countries have used strategies, including increasing the statutory retirement age of their national pension systems, to reduce that pressure. However, many workers continue to leave the workforce prior to reaching the statutory retirement age, according to OECD data. To address this development, retaining older workers in the labor market has been an objective in many countries.

Some researchers have suggested that, in the U.S., economic productivity could decline as baby boomers age and leave the labor force, thus reducing the rate of economic growth. For example, a 2016 study found that a 10 percent increase in the percentage of the population age 60 and older decreases the growth rate of per capita gross domestic product (per capita GDP) by 5.5 percent. According to this study, two-thirds of the reduction is due to slower growth in the labor productivity of workers of all ages, while one-third is due to slower labor force growth, suggesting that annual GDP growth in the U.S. could slow by 1.2 percentage points per year this decade, entirely for demographic reasons. Phased retirement has the potential to provide options that would be beneficial both to older workers and the overall economy by extending labor force participation.

We Identified 17 Countries with Aging Populations That Have Phased Retirement Options for Older Workers

Among the 44 countries that met our initial criteria as having a national pension system similar to Social Security and an aging population, we identified 17 with some kind of phased retirement program.
We Identified 17 Countries with Aging Populations That Have Phased Retirement Options for Older Workers Among the 44 countries that met our initial criteria of having a national pension system similar to Social Security and an aging population, we identified 17 with some kind of phased retirement program. Based on a review of relevant research, studies, and interviews, we determined that phased retirement programs in these countries were established in several ways: (1) through national policies, including legislative actions and specific programs that encourage phased retirement; (2) at the industry or sector level through collective bargaining agreements that cover specific occupations or sectors; and (3) by individual employers. Table 1 shows the three types of phased retirement arrangements found in the 17 countries we identified. Based on our research, we determined that a national policy on phased retirement may provide a voluntary framework within which employers may participate rather than a requirement that they offer such programs. For example, Canadian officials reported that Canada changed its regulations so that employers who provide defined benefit pension plans and also offer phased retirement must allow participating workers to receive partial pension benefits while continuing to accrue pension credits. However, according to the Canadian government, it is ultimately up to individual employers to make phased retirement available to their employees. In many countries, collective bargaining played a key role in the formation of phased retirement programs, particularly at the industry or sector level. Half of the 17 countries have "sectoral" phased or partial retirement arrangements established through collective bargaining agreements that cover a large number of workers from specific industrial sectors or occupations, such as local government workers in Sweden or metal and chemical sector workers in Germany. Such sectoral programs can include public and private employers, each providing a program or policy that applies only to its own workers. Sometimes, companies with sectoral programs have the flexibility to set their own program requirements, within the broad guidelines of arrangements established through collective bargaining agreements. Phased retirement programs can also be established by individual employers. Employers offering phased retirement are generally larger companies in the private sector with their own pension plans. Our research found examples of phased retirement programs offered by individual employers both within and outside of collective bargaining agreements. Selected Countries' National Policies Were Generally Designed to Encourage Phased Retirement, and Individual Program Design Aspects Vary Selected Countries Employ Various Strategies to Encourage Phased Retirement The national policies implemented in our four case study countries—Canada, Germany, Sweden, and the U.K.—are currently designed mainly to encourage older workers to remain in the labor force and continue to earn and contribute to their pensions and, often, to share their institutional knowledge with younger workers, according to the officials, experts, and employers we interviewed. For example, according to Canadian government officials, Canada amended its income tax regulations in 2007 to allow phased retirement under certain DB pension plans, both to retain older workers and to meet those workers' financial needs. Additionally, government officials in the U.K. reported that in 2014, the U.K.'s national flexible work policy was expanded to cover older workers who wanted to phase into retirement. They said that this was done, in part, to keep older workers—aged 50 and over—in the labor force. However, the reasons for instituting phased retirement have shifted over time.
Based on our research and interviews with foreign officials and other experts, we found that, in some cases, phased retirement was initially used as an incentive for older workers to retire early so employers could hire unemployed younger workers. For example, officials reported that in 1996, at a time of roughly 10 percent unemployment, Germany instituted a national part-time work program, the Altersteilzeitgesetz (ATZ), to encourage older workers to retire. Officials said this phased retirement program originally sought to get older workers out of the labor force and encourage employers to hire unemployed workers and trainees. Today, in response to an aging population, Germany is using phased retirement to encourage older workers to remain in the workforce and ensure knowledge and skills transfer, according to officials we interviewed. In addition, our research found that Sweden offered a national phased retirement program, or "partial pension" scheme, from 1976 to 2001, mainly as an option to allow workers to gradually withdraw from work 5 years before the statutory retirement age. According to our research, this program was implemented, in part, to make the transition from work to retirement more flexible. Swedish officials stated that the country abolished the program in 2001, mainly due to excessive costs, and implemented a new policy in 2010 that permits partial retirement and access to a partial pension to encourage workers to stay in the labor force longer. The four case study countries employed various efforts at the national level to encourage phased retirement options that seek to keep older workers in the labor force. From our interviews with government officials, unions, and other experts, we found that all four countries have national policies to help facilitate phased retirement. Examples include national programs that companies and sectors can offer to workers—such as the national program in Germany or the program in Sweden that ended in 2001—as well as policies that seek to give both employers and employees incentives to offer and participate in phased retirement programs. As shown in table 2, the four countries reported having made efforts at the national level to encourage phased retirement, including implementing national policies and programs that involve public subsidies, tax incentives, or changing pension rules to allow individuals to receive partial pension benefits while continuing to accrue benefits in the same pension plan. For additional information on the national efforts made by case study countries, see appendix II. Individual Programs in Case Study Countries Have Similar Aspects, but Vary in Design and Sources of Supplemental Income to Workers Employers in our case study countries have implemented various phased retirement programs that reflect the employers' goals for offering phased retirement and the preferences of participating employees. Based on our interviews with officials, employers, and representatives from employer associations and unions in the four selected countries, we found that the programs offered by employers in those countries had similarities and differences in how the programs were established, designed, implemented, and funded. Role of collective bargaining.
Based on our research and interviews with experts, we found that most of the phased retirement programs we reviewed in the four case study countries were established as part of collective bargaining agreements between employers and union-represented workers. This was often the case for sectoral programs in either the public or private sectors and for those covering specific occupations. The programs often covered a large number of workers. For example, in Sweden, representatives of an organization for public employers with approximately 1.2 million employees (23 percent of the Swedish workforce) told us that 90 percent of the workers in Sweden were covered by collective agreements, and that they have negotiated collective agreements that included phased retirement for many of their members. In Canada, one expert reported that phased retirement was most common in fields that are highly unionized, because Canadian unions wanted to increase flexibility for members to gradually decrease work while also receiving a pension payment. For example, the expert said that universities, which are highly unionized, were at the forefront of phased retirement implementation. While most of the programs we reviewed were based on collective bargaining agreements, we identified a few companies that initiated phased retirement for their workers outside of the collective bargaining process, when the employer determined a need for such a program. For example, one private sector employer in the financial industry we interviewed in the U.K. told us that offering phased retirement options addressed employees' need for flexibility. This employer commented that if employees are happy, they will stay with the company longer and continue to provide customers with superior service. As another example, a large German employer in the transportation industry offers a phased retirement program for managers who are not covered by a collective bargaining agreement. Defined benefit and defined contribution plans available. Many phased retirement programs we reviewed involve DB pension plans that provide a fixed stream of payments at retirement for the life of the participant. However, we also found some employers that were moving from such plans to DC or hybrid pension plans, and phased retirement is permitted under those plans as well. For example, a private sector employer in the U.K. that sponsors both DB and DC retirement plans told us that workers in both types of plan can participate in phased retirement and can draw from their employer-sponsored retirement accounts at age 55, although the drawdown rules are different for each type of retirement plan. As another example, the U.K.'s National Health Service workers are currently covered by two retirement plans, according to pension plan administrators we interviewed. Specifically, a pure DB plan initiated in 2008 is being phased out and replaced by a DB hybrid plan introduced in 2015. Both plans offer flexible retirement options, plan administrators said. Health care coverage. Each of the four countries we reviewed provided universal health care coverage. The broad availability of health care in these countries allows workers to reduce their work hours or responsibilities without concern about losing health coverage and without increasing employer costs. This also made it easier for employers in our case study countries to retain workers phasing into retirement part time and potentially hire another worker without the additional cost of providing health care to two workers. Program limits.
Other similarities among the phased retirement programs we reviewed in the four case study countries include (1) a maximum age up to which a worker can partially retire—sometimes phased retirement can be taken only before the statutory retirement age set by the country's national pension system—and (2) limits on phased retirement to specific groups of employees. As examples, one employer in Germany told us that it offers phased retirement only to employees working in "hardship" positions, such as those who work night or rotating shifts, while some employers in Sweden offer phased retirement to workers in particularly skilled occupations where workers cannot be easily replaced, such as certain health professionals, according to representatives from an employer association. Program terms and conditions. Based on our review of program documents and interviews with program administrators, we found that the phased retirement programs we reviewed in the four countries, regardless of type, had basic requirements, such as age of participation, years of service, eligible positions, period of phasing work, and time requirements; however, the specific terms differed from program to program. For example, a sectoral phased retirement program in Sweden allowed workers to apply for phased retirement at age 60 and draw down 50, 80, or 90 percent of their earned employer-sponsored retirement account while phasing. A public sector employee program in the U.K. provided a phased retirement option at age 55, and workers could draw down from 20 to 80 percent of their employer-sponsored pension while reducing their work hours. In contrast, a program in Germany allowed only workers aged 56 and older who had 20 years of service and rigorous work schedules (i.e., night or rotating shifts) to apply for phased retirement. Other aspects, such as the categories of workers eligible to participate, also differ. For example, one higher education employer in Canada only allows faculty and librarians to participate in phased retirement, while another employer in the U.K. allows all employees to apply for phased retirement. Sources of income. Workers participating in phased retirement typically forgo some amount of wages as a result of reduced working hours or reduced responsibilities, similar to the wage reduction in full retirement. In the programs we reviewed in our four countries, workers are able to offset forgone wages, at least partially, from multiple sources. According to program administrators and employers we interviewed, these sources include the national pension; employer-sponsored retirement accounts; an employer-provided benefit designated for this purpose; personal savings; or some combination of these sources. For example, German experts told us that workers participating in the national ATZ program can reduce their work hours by 50 percent. Experts told us that employers are required to pay a minimum of 70 percent of full-time wages for phasing employees and pay contributions toward the employee's pension as though the employee were working 90 percent. Among the employers we interviewed that continue to offer the national ATZ program, the 20 percent top-up was reported as generally financed by the employer. In the U.K., employees participating in a private-sector employer's phased retirement programs make up for the forgone wages by withdrawing funds from their own employer-sponsored retirement plan.
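A minimal sketch of the ATZ pay arithmetic the German experts described above: hours fall to 50 percent, pay is at least 70 percent of the full-time wage, and pension contributions are based on 90 percent. The wage figure and function name below are hypothetical illustrations, not program values.

```python
def atz_pay_sketch(full_time_wage: float) -> dict:
    """Sketch of the ATZ arithmetic described above (hypothetical wage).

    Hours fall to 50 percent of full time, the employer pays at least
    70 percent of the full-time wage, and pension contributions are made
    as though the employee were working 90 percent.
    """
    hours_share = 0.50
    base_pay = full_time_wage * hours_share      # pay for hours actually worked
    supplement = full_time_wage * 0.20           # employer top-up, reaching 70 percent
    pension_basis = full_time_wage * 0.90        # wage basis for pension contributions
    return {
        "take-home wage (70% of full time)": base_pay + supplement,
        "employer supplement": supplement,
        "pension contribution basis": pension_basis,
    }

# Example with a hypothetical monthly full-time wage of 4,000 (any currency).
for item, amount in atz_pay_sketch(4000).items():
    print(f"{item}: {amount:,.0f}")
```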
In Canada, one employer offers a lump-sum allowance to employees between 60 and 64 years of age who wish to reduce their hours as part of phased retirement. Participating employees are paid a salary proportional to their reduced hours and can use the lump-sum benefit to supplement their income, though their combined income may not exceed their full-time salary. This lump sum is funded solely by the employer. During the phased retirement period, employees can continue to contribute to their employer-sponsored retirement account as if working full time, and need not withdraw from their pension. In Sweden, one public sector phased retirement arrangement is financed by employers as part of collective bargaining agreements. This program allows workers to work 80 percent of a full-time job and receive 90 percent of a full-time salary. The employers continue to contribute to the employer-sponsored pension as if employees were working full time. Workers in Sweden can also supplement any reduced income with national pension benefits. Even with Unique Considerations for the United States, the Experiences of Other Countries with Phased Retirement Could Inform U.S. Efforts Differences in Institutional and Employer-Specific Factors May Affect How U.S. Efforts to Provide Phased Retirement Can Be Informed by Other Countries' Experiences Institutional and employer-specific factors in other countries, which shape the design of phased retirement programs, typically differ from the institutional environment experienced by many U.S. private sector employers, although they may be similar to those common in U.S. public sector employment. Some of these institutional factors include the extent to which employers and workers are supported by universal health insurance, whether the programs are structured around employer-sponsored traditional DB plans—particularly for workers who have worked at their firm long enough to qualify for phased retirement—and whether programs are the result of collective bargaining agreements. In many of the selected countries we reviewed, phased retirement programs designed to extend labor force participation are fairly recent. While the rate of employment among older workers in the case study countries and the U.S. increased in recent years, data have not been collected in the case study countries to gauge the effects of phased retirement, and participation in the programs is low. The case study countries' experiences suggest that phased retirement programs, whether implemented at the employer or national level, may be more effective if they are carefully designed around the employer's specific industry or production characteristics and if data are collected and analyzed to pinpoint the most successful strategies. A Unique Consideration for U.S. Companies Wishing to Offer Phased Retirement: Importance of Employer-Sponsored Benefits Unlike workers in our case study countries, most U.S. workers get their health insurance through their employer, which can be a costly benefit to provide. Employers with 50 or more employees must provide coverage or pay a fee; however, the requirement does not apply to those working fewer than 30 hours per week, on average. In June 2017, we found that employers offering phased retirement programs must decide whether they will include participants in their health care coverage and that all eight of the employers with phased retirement programs with whom we spoke had extended their employer-sponsored insurance to program participants. In addition, the benefit payments provided under U.S.
Social Security may not be as high as the national retirement benefits in some of our case study countries, and many U.S. workers rely on employer-based retirement benefits and personal savings for a secure retirement. Strategies such as allowing continued contributions during phased retirement and supplementing phased retirement income through partial retirement payouts or other sources may be helpful for worker satisfaction in phased retirement programs. In addition, DC plans that are not collectively bargained are more common in the U.S. than in most of our case study countries. (see sidebar) However, we found examples of phased retirement programs offered to workers covered under DC pension plans that are not collectively bargained in our case study countries. Some of the employers with DC pensions that we learned about were transitioning from traditional DB plans to DC plans. In these instances, newer workers are usually enrolled in the DC plan and, because the shift is recent, many of the workers covered under DC plans may not be old enough or have sufficient years of service to qualify for phased retirement, where such characteristics are criteria for participation. For example, a privately run transportation company in Germany reported offering phased retirement programs that reduce working hours by about 20 percent to workers who meet certain criteria. Workers hired after 1995 and workers from the former East Germany are covered under a DC plan and may qualify for the phased retirement program. These examples indicate that private sector employers in the U.S., where workers are increasingly covered by DC plans rather than DB plans and generally not covered by collective bargaining agreements, may also be able to implement and benefit from phased retirement programs. Most of the programs we reviewed are relatively recent and have reported small numbers of participants. Although OECD's data show that employment of 55- to 64-year-olds increased between 2006 and 2016 in Germany, Sweden, and the U.K., it is not clear what role phased retirement has played in that growth. (see fig. 2) Governments, employers, and unions have not systematically collected data to understand the effect of the programs on older workers' choices about when to retire or the effects of phased retirement on employers, workers, or national workforce participation. Some employers we spoke with provided information on the number of workers who had used or were currently using the programs, but there are not enough data to draw conclusions, possibly because the programs are relatively new. As previously mentioned, the goal for some phased retirement programs has shifted, and although employers and national governments now have greater incentives to retain older workers, the design of some phased retirement programs may encourage workers to use the program to leave the workforce earlier than they might in its absence. For example, experts at a high-skill employer in Canada said they believed the program may have encouraged older workers to reduce their hours when, in its absence, they might have worked full time. Competing Needs of Employers, Workers, and Countries Mean That Benefits for Some May Be Challenges for Others Employers, workers, and countries may have competing needs and goals in phased retirement programs, which must be considered in designing programs. Specifically, these groups may differ in their preferences about who may participate, the primary goals for the program, and how the program will be financed.
In previous work, we found that some U.S. employers are reluctant to offer phased retirement programs because they believe there is not sufficient interest among employees, and that employers in industries with technical and professional workforces were more likely to provide formal and informal phased retirement programs. Challenges identified by the programs in our case study countries can provide helpful insights into areas of concern in designing phased retirement programs in the U.S. A Unique Consideration for U.S. Companies Wishing to Offer Phased Retirement: Nondiscrimination Laws In June 2017, we found that U.S. industries with skilled workers or with labor shortages also have motivation to offer phased retirement programs, in part because their workers are hard to replace. However, U.S. companies must comply with laws intended to protect workers from discrimination. Experts and employers said programs that target highly skilled workers, who are often highly paid, could violate nondiscrimination rules, which generally prohibit qualified pension plans from favoring highly compensated employees. One study we reviewed for that work noted that regulatory complexities and ambiguities involving federal tax and age discrimination laws affect an organization's ability to offer a phased retirement program. Program scope: Certain experts noted that, particularly in the context of collective bargaining, workers typically want phased retirement programs to be broadly available; in contrast, certain employers may want narrowly scoped programs that are targeted to certain high-skilled or scarce workers. Phased retirement is also used by certain employers to target key employees with rare or sought-after knowledge, skills, and experience and provide opportunities for knowledge transfer prior to retirement. Representatives from two German companies with high-tech or high-skilled workforces noted that phased retirement was important to retain workers with experience and knowledge. Employers also reported setting criteria that limit the program to individuals with a specific length of service with the employer, with physically difficult jobs, or with challenging schedules, which may help employers to target the program to certain workers. We reported in June 2017 that U.S. employers noted that targeting specific workers might pose a challenge because of laws that prohibit special treatment of selected workers for certain U.S. pension plans. (see sidebar) The differences in the desired scope of phased retirement programs could potentially be resolved. For example, some experts we interviewed reported that employers may have caps that limit participation, such as limiting participation to a specific percentage of age-eligible employees. A union representative in Germany noted that employers there may set restrictions or caps on participation, such as 3 percent of the workforce, or an employer may effectively cap the extent of participation by restricting the program to a budgeted amount of funds. Employers in the U.S. could explore whether a similar approach to scoping a phased retirement program, taking into consideration any legal concerns or other practical challenges, could help them control the number of workers participating in phased retirement programs.
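To illustrate how such participation caps might interact in practice, the rough calculation below works through a purely hypothetical case; every figure and name in it is an assumption for illustration, not data from the programs we reviewed.

```python
# Hypothetical sketch of translating participation caps into a headcount.
# Every figure below is an illustrative assumption, not program data.

workforce = 10_000
age_eligible_share = 0.15          # assumed share of workers old enough to qualify
percentage_cap = 0.03              # e.g., a cap of 3 percent of the workforce

program_budget = 2_000_000         # assumed annual funds for salary supplements
supplement_per_worker = 8_000      # assumed annual top-up cost per participant

cap_by_percentage = int(workforce * percentage_cap)
cap_by_budget = program_budget // supplement_per_worker
eligible_workers = int(workforce * age_eligible_share)

# The binding limit is the smallest of the three constraints.
max_participants = min(cap_by_percentage, cap_by_budget, eligible_workers)
print(f"Percentage cap: {cap_by_percentage}, budget cap: {cap_by_budget}, "
      f"age-eligible: {eligible_workers} -> effective cap: {max_participants}")
```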
Knowledge sharing/succession planning: A representative at a German employer noted that the employer has integrated a knowledge-sharing component into its program so that workers are able to train younger workers and share their expertise. Retaining older workers may have an added benefit—according to a U.K. public plan administrator, their phased retirement program also brought more age diversity to the workforce. One expert said that phased retirement has the additional benefit of helping with succession planning since management has more information about the retirement decisions of those participating in the program. An official from a Canadian university stated that the university's phased retirement program, which includes a specified timeframe of 3 years, helps with planning because they know exactly when the worker will leave their job and can begin the sometimes lengthy process of recruiting replacement faculty. In our previous report, we noted that five of the nine employers we interviewed said that knowing when workers will retire allows employers to plan for the future. Work-life balance/program complexity. Union representatives in our case study countries described several benefits that phased retirement provides to workers. For example, one said that phased retirement provides more choice for workers, another noted that phased retirement allows workers to continue to work at reduced hours until they reach the statutory age to receive a national pension, and a third mentioned that such programs reduce the burden for workers who cannot or do not want to work full time. Similarly, other experts we interviewed said that phased retirement's part-time work schedule provides workers the opportunity to continue working when they might otherwise retire. The experts each cited specific reasons workers might retire, including health concerns, the physical demands of their work, or the responsibility of caring for a loved one. U.K. government officials stated that phased retirement for older workers in their country originated from a 2002 policy to facilitate flexible work for caregivers of dependent adults and young and disabled children. According to the U.K. government's website, flexible work can be part-time work, job sharing, annualized hours, or telework, among other arrangements. It also states that employers can decline a request for flexible employment if they can demonstrate that granting such a request would have a detrimental effect on the firm, but, according to a 2013 U.K. government survey, 97 percent of employers offer some kind of flexible work. Experts in several of our case study countries noted that the rate of participation in phased retirement programs is low, which they attributed to different factors, including that workers may have insufficient knowledge or understanding of the programs; employers may have restrictions on program participation, such as eligibility requirements or caps on participation; or there may be insufficient interest or incentives for workers. For example, a German academic noted that his country's Teilrente program, which combines partial national pension benefits and reduced work hours for workers age 63 and older, is confusing and has not been well marketed, leading to low uptake. In our previous report, we noted that according to 2014 Health and Retirement Study data, an estimated 29 percent of 61- to 66-year-olds in the U.S. planned to reduce their work hours; however, only an estimated 11 percent actually reduced their hours gradually.
Extending labor force participation: Countries may want to encourage older workers to delay retirement to increase labor force participation, broadly or in certain sectors, especially in times of low unemployment. In the past, phased retirement in some nations had been used as a tool to downsize workforces and encourage workers to retire early. However, the rising costs of national pensions and an aging workforce have now encouraged nations to view phased retirement as a tool or mechanism to extend labor force participation. Indeed, according to the European Commission, increased labor force participation of older workers is a goal of the Eurozone. According to an academic expert we interviewed, although increasing the use of phased retirement is not a specific strategy to achieve that goal, some countries are now using such programs to help achieve it. For example, a Swedish official commented that the availability of phased retirement can help older workers stay in the workforce longer. In addition, an association of employers in Germany stated that raising the age of eligibility for national pension benefits and eliminating incentives for early retirement were likely to induce older workers to work longer. Delayed retirement also gives workers longer working lives and greater earning potential, which may help make pension systems sustainable. A German academic noted that continued work keeps older individuals out of poverty and that increasing retiree income could reduce their reliance on national "safety net" benefits. He said that retired people are interested in Germany's program allowing work after retirement age because they may have insufficient savings, and "mini jobs" provide opportunities for earning more. Certain sectors of national economies may particularly benefit from extending workers' time in the workforce. For example, an expert at a U.K. consulting firm noted that, due to Britain's expected departure from the European Union, the country may face labor shortages in certain sectors, such as health care and hospitality, because of the loss of foreign workers. He also suggested that flexible work arrangements may help to avoid potential shortages by retaining older workers who are citizens in those sectors. We also found, in our previous report, that phased retirement could benefit the U.S. economy by helping to extend participation in the workforce. A Unique Consideration for U.S. Companies Wishing to Offer Phased Retirement: In-service Distributions and ERISA Requirements Related to Plan Design We previously reported that defined benefit (DB) plans may provide in-service distributions to workers aged 62 and older, which would allow phased retirement participants to draw a portion of their retirement benefit during their participation in phased retirement. Defined contribution (DC) plan participants generally may not receive distributions from a DC plan until they reach age 59½, and distributions before that age may be subject to an additional tax. Our previous work also found that in-service distributions may be important to supplement salaries for participants in phased retirement. An expert we spoke to stated that the Employee Retirement Income Security Act of 1974 (ERISA) requirements pertaining to plan design reduce plan flexibility, since changes to plan structure to allow for phased retirement have to be honored even if the economy changes and employers want to shed rather than retain older workers.
He stated that this requirement reduces the appeal of phased retirement for employers sponsoring DB plans. Program design. Experts in certain case study countries reported that employers must design their programs carefully to ensure that they meet sometimes complex statutory requirements and to ensure that workers are eligible for and benefit from phased retirement. However, some also mentioned that designing a program that incentivizes continued work and avoids penalties for workers can be a challenge. For example, an expert we interviewed stated that, in Germany, early retirees can receive their full pension benefit after 45 years of work, but they are subject to salary caps until they reach the full retirement age, which may be a disincentive to combining continued work with a pension drawdown. Conversely, there is an incentive for continued work in Germany without claiming a pension: should the worker continue to work, contribute to the public pension, and delay claiming, their benefit increases by 0.5 percent for each additional month worked (see the sketch following this discussion). In our previous report, U.S. employers also cited concerns in designing programs to meet statutory requirements. (see sidebar) According to a Eurofound report, the flexibility of phased retirement can come with administrative costs, particularly if frequent changes are allowed. For example, a Canadian employer noted that managing a workforce of part-time employees was a challenge because it was unfamiliar. They also said that, in some circumstances, their program allowed participants to renege on their retirement date and that it was administratively cumbersome. We also reported in our previous work that employers using phased retirement in the U.S. had experienced administrative concerns that included challenges with part-time workforces. Potential costs of phased retirement programs. Several of the experts we spoke with said that making programs sufficiently financially beneficial to encourage worker participation can be costly. In addition, some employers reported that, where available, tax incentives, government subsidies, or financing salary supplements directly from the workers' retirement benefits were used, which may have helped to minimize their costs in providing the programs. In contrast, some government experts from the case study countries noted in interviews that certain government supports had been cut, suggesting that those governments prefer employers to finance more of the benefit. Other experts we spoke to explained that some employers in our case study countries paid for most of the cost of the programs themselves, although some employers also benefit from tax incentives. For example, according to experts, the current provisions of the German ATZ program require that employers provide salary supplements of at least 20 percent of full-time wages above the pay for partial (50 percent) employment. According to an OECD report, initially the supplement was paid through government subsidies to employers, but now, if employers wish to retain the program, they must pay the salary supplement themselves, adding to employers' costs. German government officials noted that the salary supplement paid during phased retirement is tax-advantaged. Such incentives might also encourage employers in the U.S. to offer phased retirement programs.
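A minimal sketch of the German delayed-claiming arithmetic described above, assuming the 0.5 percent monthly increment accumulates linearly; the benefit amount and function name are hypothetical illustrations.

```python
def deferred_benefit(monthly_benefit: float, months_deferred: int) -> float:
    """Delayed-claiming sketch: the German national pension benefit grows by
    0.5 percent for each additional month worked without claiming (assumed
    here to accumulate linearly). The benefit amount is hypothetical."""
    return monthly_benefit * (1 + 0.005 * months_deferred)

# Example: deferring a hypothetical 1,500-per-month benefit by 24 months
# raises it by 12 percent, to 1,680 per month.
print(f"{deferred_benefit(1500, 24):,.0f}")
```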
Potential reductions in future benefits: Some experts noted that certain phased retirement programs allow workers to reduce their hours without a proportional reduction in wages or in the benefits they receive when they enter full retirement. Phased retirement may also provide workers more options in how to draw down benefits. However, some programs we reviewed involve pay that is less than what is received during full employment and may involve reduced benefits after retirement, which is a factor for workers considering participation. For example, German experts explained that ATZ requires a salary supplement of at least 20 percent of salary, effectively resulting in workers receiving 70 percent of their wage for 50 percent of hours worked. In our previous report, we noted that according to 2014 HRS data, an estimated 22 percent of U.S. workers aged 61 to 66 surveyed would be interested in reducing their hours even if it meant their pay would be reduced proportionally. We also found in our previous report that low savings and concerns about eligibility for health benefits may create barriers that affect workers' ability or interest in participating in phased retirement programs. Even when they receive employer-provided subsidies, as in Germany, workers' salaries in phased retirement programs are less than what is earned for full-time work. A recent OECD report noted that removing obstacles, such as limits on earnings while working and receiving pension payouts and limits on the accumulation of benefits, is important to make combining work and pensions more attractive. A Canadian employer had similar concerns and noted that workers may be reluctant to reduce their hours without having some way to supplement their income, for example through a partial drawdown of their retirement savings or private or public pension. In some cases, workers may work and draw a benefit from their national or employer-sponsored pension plan. Some experts reported that certain programs allow workers to continue to contribute to their pension plans or earn pension credits. Union representatives in the U.K. and Germany noted the importance of workers remaining in the labor force longer for the purpose of increasing their income after full retirement. For example, according to a U.K. government website, the U.K. has no mandatory retirement age for the national pension system and allows individuals who have reached the retirement age to work and draw a benefit. The website also states that if a worker continues to work after the full retirement age and delays claiming the national pension benefit, their weekly payments could be larger when they do choose to retire and take the benefit. Experts at a privately run German transportation company noted that workers earn 100 percent of their pension credits during the period that they are participating in the company's phased retirement program. In addition, the U.K. allows workers to draw a portion of their plan benefits—with 25 percent being tax-free—and one U.K. employer we spoke to allows continued contributions to those plans. Participants may also see reductions in their retirement benefits after full retirement. Workers with DC plans may reduce their retirement savings through early withdrawals during phased retirement. Similarly, depending on program design, workers may have limitations on their contributions to their employer-sponsored DB plan or public pension during phased retirement, yielding lower pension benefits at retirement.
An OECD report notes that national pension payments made during participation in phased retirement programs, and any change in the age at which a worker retires, such as retiring prior to or after the full retirement age, should result in pension adjustments that are actuarially neutral—in other words, workers taking early pension payments will have reduced benefits for the duration of their retirement, while those who delay payment receive increased benefits. One expert at a German university noted that participants do not always realize the effect the program will have on their pensions. Agency Comments We provided a draft of this report to the Commissioner of the Social Security Administration, the Secretary of State, the Secretary of Labor, the Secretary of the Treasury, the Commissioner of the Internal Revenue Service, and the Acting Director of the Office of Personnel Management. The Social Security Administration provided a technical comment, which was incorporated as appropriate. The remaining agencies had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Commissioner of the Social Security Administration, the Secretary of State, the Secretary of Labor, the Secretary of the Treasury, the Commissioner of the Internal Revenue Service, the Acting Director of the Office of Personnel Management, and other interested parties. This report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report examines (1) the extent to which phased retirement exists in other countries with aging populations, (2) the key aspects of phased retirement programs in selected countries, and (3) the experiences that other countries have had in providing phased retirement and how those experiences can inform U.S. efforts. To determine the extent to which phased retirement exists in other countries with aging populations, we used data from the Social Security Administration's publication Social Security Programs throughout the World and United Nations population data to first identify countries with aging populations. Social Security Programs throughout the World contains comprehensive data on the social security programs in different countries around the world, including the statutory retirement age, early retirement age, and GDP per capita. We used the Social Security Administration's publication to gather a list of 179 countries that have some kind of social security program. For these countries, we used United Nations population data to find the proportion of the population aged 50 and over, where available. We then limited our research to those countries whose proportion of the population aged 50 and over is more than one standard deviation above the average. This group represents countries where the proportion of the population aged 50 and over is above 33 percent and includes a total of 44 countries.
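The screening rule described above can be expressed as a short calculation; the country names and population shares below are placeholders, not the United Nations figures we used.

```python
import statistics

# Placeholder data: share of the population aged 50 and over, by country.
# (Our analysis used United Nations data for the 179 countries identified.)
share_aged_50_plus = {
    "Country A": 0.36, "Country B": 0.30, "Country C": 0.25,
    "Country D": 0.34, "Country E": 0.20, "Country F": 0.38,
}

mean = statistics.mean(share_aged_50_plus.values())
stdev = statistics.stdev(share_aged_50_plus.values())
threshold = mean + stdev  # more than one standard deviation above the average

aging_countries = [country for country, share in share_aged_50_plus.items()
                   if share > threshold]
print(f"Threshold: {threshold:.2%}; countries above it: {aging_countries}")
```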
To determine whether the 44 countries that met our initial criteria of having (1) a national pension program similar to Social Security and (2) an aging population have adopted phased retirement programs, we reviewed Organisation for Economic Co-operation and Development (OECD) and European Union reports and data that focus on older workers and extending work life in other countries. We focused on OECD and European Union countries because their advanced economies are most similar to that of the United States. In addition, we conducted literature searches and reviews to identify countries with phased retirement programs aimed at extending the working lives of older workers as well as assisting with knowledge transfer from older workers to younger workers. The literature searches comprised terms related to phased retirement, such as gradual retirement, partial retirement, labor force participation of older workers, and transitional retirement. We limited our searches to literature released during the 10-year period from 2007 to 2017. Additionally, we spoke with subject matter experts to gain their perspective on which countries offer phased retirement programs or have a policy aimed at extending the working lives of older workers. We identified these experts through our review of relevant literature and expert referrals. In total, we identified 17 countries with some form of phased or gradual retirement options for older workers. We examined these 17 countries to identify the types of phased retirement programs within each country. For example, we researched whether the country had (1) national phased retirement policies or programs; (2) sectoral programs established through collective bargaining agreements that cover specific industries, occupations, or sectors; and (3) individual employer programs. To obtain a more complete understanding of the key aspects, benefits, and challenges of phased retirement programs in selected countries, as well as potential lessons learned for the U.S., we reviewed the group of aging countries with relevant programs identified in the first objective to select a sample of four countries for case studies. These countries are Canada, Germany, Sweden, and the United Kingdom (U.K.). The criteria for selecting case study countries included whether a country was described in the literature or by experts as having a national policy related to phased retirement or as having taken legislative action, in part, to facilitate or encourage phased retirement; whether it had a variety of sectoral and individual employer programs (public and private sector); when the programs were implemented; and expert or industry recommendations. We also considered the various countries' economic and social frameworks and whether they are similar to those of the U.S. Specifically, we selected Canada, Germany, Sweden, and the U.K. because they had national phased retirement policies, which in Germany and Sweden include national programs, and a wide variety of phased retirement programs in both the private and public sectors. For the case studies, we conducted interviews with government officials, program administrators, employer associations, unions, and employers to obtain in-depth program information and to learn about their experiences with phased retirement. We identified appropriate officials and organizations to contact primarily through review of relevant literature, subject matter expert recommendations, and referrals from the U.S. Embassy in each country.
We reached out to a variety of labor unions and employers in selected countries in an effort to obtain multiple perspectives on issues related to phased retirement and met with those available to speak with us. We did not conduct an independent legal analysis to verify the information provided about the laws, regulations, or policies of the foreign countries selected for this study. Rather, as described above, we relied on appropriate secondary sources, interviews, and other sources to support our work. We submitted key report excerpts to government officials in each country, as appropriate, for their review and verification, and we incorporated their technical corrections as necessary. To determine whether experiences with phased retirement in other countries could inform efforts in the U.S., we relied on testimonial evidence from interviews and a review of relevant research. The applicability of lessons learned was shaped by the differences in the national pension and social systems in the selected countries, such as the availability of health care and other retirement benefits.

Appendix II: Key Features of Phased Retirement Systems

To compile the information in this appendix, we interviewed officials and program administrators from selected phased retirement programs in Canada, Germany, Sweden, and the United Kingdom (U.K.), as well as employer associations, unions, and retirement experts. We also reviewed documentation and obtained statistics from country agencies. We identified employers offering phased retirement programs primarily through reviews of relevant literature, referrals from subject matter experts, and referrals from the U.S. Embassy in each country. We reached out to a variety of labor unions and employers in selected countries and met with those available to speak with us. We did not conduct an independent legal analysis to verify the information provided about the laws, regulations, or policies of the countries selected for this study. Rather, we relied on appropriate secondary sources, such as plan documents, interviews, and other sources. We submitted key report excerpts to government officials in each country, as appropriate, for their review and verification, and we incorporated their technical corrections as necessary.

Canada

At a glance
• Population: 37 million (2018)
• GDP: $1.65 trillion (2017)
• Statutory retirement age: starting at age 65 with full benefits
• Early retirement age: 60, with reduced benefits

Sources of retirement income
National pension: The earnings-related Canada Pension Plan targets a replacement rate of 25 percent of average lifetime earnings, up to a maximum earnings limit each year. Starting in 2019, this plan will replace one-third of average earnings, and the earnings range used to determine average earnings will also gradually increase. Employees in the province of Quebec have their own Quebec Pension Plan, broadly similar to the Canada Pension Plan.
Employer-sponsored pensions: Registered Pension Plans established by employers or unions to provide pensions for employees. In general, the plans can be defined benefit (DB), defined contribution (DC), or a combination of DB and DC plans.
Individual savings: Individuals can use tax-assisted arrangements that foster personal savings, including Registered Retirement Savings Plans, which are similar to traditional IRAs in the United States, and the Tax Free Savings Account—a general purpose savings plan that provides tax treatment similar to Roth IRAs in the United States.

National efforts to encourage phased retirement
In 2007, Canada introduced changes to the Income Tax Regulations to allow more flexible phased retirement arrangements under defined benefit (DB) registered pension plans. Under the pension tax rules, phased retirement allows an individual to receive a portion of his or her pension benefit from a DB pension plan while continuing to accrue pension benefits in the same plan. The changes permitted qualifying employees to receive up to 60 percent of their accrued benefits in their employer-sponsored DB pension while continuing to accrue further pension benefits based on either full-time or part-time work, subject to employer agreement. Qualifying employees must be at least 60 years of age, or aged 55 or older and eligible for an unreduced pension under the terms of the DB plan.

Highlights of individual phased retirement programs
Sectoral Collectively Bargained Programs
Employer group 1: Certain provincial government hospital employees of this public sector employer, those aged 55 or older with at least 5 years of service, can reduce their work schedule to between 50 and 60 percent of full-time work and receive pay proportional to hours worked plus an annual pension pre-payment from their employer-sponsored retirement plan, which changed from a DB to a target benefit or shared-risk plan. Combined, the payments equal 85 percent of full-time earnings. Workers can choose to phase for a period of 1 to 5 years. Participants continue to accrue pension service benefits based on full-time work.
Another employer: Employees between the ages of 60 and 64 can reduce their workload by working fewer hours. They are paid a salary proportional to their reduced hours plus a lump-sum retirement allowance, paid by the employer, that can be used to supplement their income, not to exceed their full-time salary. Participants can continue to contribute to the employer-sponsored DB plan as if working full time.
Another employer: Eligible employees can participate in phased retirement up to 3 years prior to age 71. Participants can work 50 percent of full-time hours each year over a 3-year period and are paid a salary proportional to their reduced hours. Participants cannot draw from their employer-sponsored DB plan, but can contribute to it and the national pension as if working full time.
Employer 5: An employer with two phased retirement programs. One program was established through a collective bargaining agreement and allows unionized faculty aged 60 or older with at least 10 continuous years of service to slowly reduce their work time and receive proportionate pay. Participants can contribute to their employer-sponsored DC pension as if working full time. Participants in this program cannot draw from their pension until fully retired. The second phased retirement program was established in-house by the employer (outside of collective bargaining agreements) for non-faculty staff: all non-faculty staff over age 55 with at least 15 years of full-time work can reduce hours for up to 3 years.

Source of supplemental income
In Canada, employees participating in phased retirement programs we reviewed were compensated for forgone wages due to reduced hours primarily by withdrawing funds from their own employer-sponsored pension plan, a lump-sum benefit funded by the employer, or their savings, as necessary.
Germany

At a glance
• Population: 82.3 million (2018)
• GDP: $3.68 trillion (2017)
• Statutory retirement age: 65 and a few months, gradually increasing to 67 by 2029 (those with 45 years of contributions can get a full pension at 63, gradually increasing to 65)
• Early retirement age: 63 with 35 years of contributions, with reduced benefits, gradually increasing to 67

Sources of retirement income
National pension: An earnings-related pension, requiring at least 5 years of contributions. In 2018, the combined employer and employee contribution rate was 18.6 percent of covered earnings.
Employer-sponsored pensions: While most occupational pension plans are DB plans, they vary by how they are funded, such as through book reserves, autonomous pension funds, or direct insurance. Employer-sponsored pensions are generally voluntary and cover about 60 percent of the workforce. Pension reforms implemented in January 2018 aim at increasing coverage by making it less onerous for employers to sponsor DC pensions. The reforms removed the guaranteed minimum benefit previously required for DC plans, which had made it difficult for smaller employers especially to offer pensions to their workers.
Individual savings: Private retirement savings include products such as Riester pensions, first introduced in 2002. Riester pensions benefit from tax incentives on contributions but also from additional direct public subsidies for low-income households and households with children. The self-employed are generally not eligible for Riester pensions but can benefit from Rürup pensions, another instrument for private retirement savings.

National efforts to encourage phased retirement
The most common national phased retirement program, the ATZ, was established in 1996. Broad program guidelines specify that the program is available to those 55 and older and allows part-time work up to 6 years prior to the statutory retirement age. Workers can participate in the ATZ under two basic models: one in which an employee works part time the entire period (reducing hours up to 50 percent of full-time work) and a second "block" model with 100 percent work the first half of the period and 0 percent the second half. The second model was the most popular among workers as a way to retire early. Employers pay a minimum of 70 percent of the full-time wage for workers in the phasing period. In general, 20 percent of the income forgone due to a reduction in hours worked is paid by the employer, who would also pay contributions toward the national pension as though the employee were working 90 percent of the time. ATZ provides tax benefits to both employers and employees on the 20 percent supplemented wages and the national pension contributions. The ATZ program provides the general framework, but employers and employees can set specific parameters through collective bargaining agreements. In 2009, when public subsidies were discontinued, the program reached its peak with 680,000 participants. Public sector employees have access to a phased retirement program similar to ATZ, with minor differences such as a starting age of 60 instead of 55 and a maximum duration of 5 years.
Teilrente: This national phased retirement program, established in 1992, allows eligible workers to work reduced hours and draw partial benefits from the national pension at the same time, with a ceiling on allowable earnings for those below the statutory retirement age. The program is used very little because it is perceived as complicated, though program reforms in 2017 simplified some of the features and added flexibility, such as raising the earnings limit and replacing the 3-tier partial benefits with smoother withdrawal options between 10 percent and 99 percent of pensions. In general, eligibility for Teilrente starts at age 63, and there are no rules on additional earnings past the full retirement age. With the reforms, policymakers hope more people will consider the program and not stop working completely at 63 when they reach early retirement age.

Highlights of individual phased retirement programs
Employer 1: This employer offers the ATZ program to its workers. Currently, almost 14 percent of this employer's eligible workers aged over 55 and covered by collective bargaining agreements participate in the ATZ phased retirement program. Of those in the program, about half are in the active phase of ATZ, working 100 percent (first years of the block model), while the other half are in the second phase with 0 percent work (last years, or second half, of the block model). Participants in the ATZ receive 85 percent of full-time wages for an average of 50 percent of full-time hours during the phasing period, which lasts up to 6 years. The employer also makes contributions based on 100 percent of full-time wages to the employer-sponsored hybrid contribution plan and the national pension plan during the entire phasing period.
Employer 2: This employer has workers covered by collective bargaining agreements participating in the ATZ phased retirement program. Accordingly, employees 55 and older can reduce their hours to 50 percent for up to 6 years prior to the statutory retirement age, subject to approval. However, the employer reports it is phasing out ATZ, as it has negotiated its own company phased retirement program. The new program targets workers in hardship positions, such as those who work night or rotating shifts. Specifically, workers aged 56 and older with at least 20 years of service with this employer, including at least 10 years of service in a hardship position, can phase into retirement for a maximum of 6 years and then must retire. Eligible workers can work 80 percent of full-time hours, receive 90 percent of their full-time wage, and receive 100 percent of their employer-sponsored pension credits as well as 90 percent of national pension credits. There is no cap on the number of workers who may participate, though eligibility requirements effectively limit the number of workers who can enroll. Currently, 2,400 workers are participating in the program.
Employer 1 (same employer 1 above): This employer offers a phased retirement program to certain retired executives for the purpose of retaining experience and knowledge, with a temporary contract (18 months maximum). The program is relatively new and currently includes about 80 senior experts, about 85 percent of whom are aged 65 or older.
Employer 2 (same employer 2 above): This employer offers a phased retirement program for managers that allows them to work an 80 percent schedule and receive 80 percent of their pay and 100 percent of their pension credits.

Source of supplemental income
In Germany, employees participating in phased retirement programs we reviewed were compensated for the forgone wages due to reduced hours primarily by their employer, together with their own savings schemes.
Individuals can draw the earnings-related part of their national pension and continue to earn new pension entitlements. There is no penalty for working and earning while drawing from the national pension. The decision to draw a pension has a lifelong effect but is not irrevocable: the pensioner can instruct pension payments to cease, and subsequently resume, at any time. The two components of the national pension, the income pension and the premium pension, are drawn independently of each other.

• Early retirement age: None

Sources of retirement income
National pension: The earnings-related national pension has two components, a notional income pension and a smaller DC premium pension. Employers and employees contribute 16 percent of salary toward the income pension and 2.5 percent toward the premium pension, for a total contribution rate of 18.5 percent.

Sweden had a national partial pension program that was in effect from 1976 to 2001, when it was abolished. The program allowed workers to gradually withdraw from work 5 years before the statutory retirement age, which was lowered from 67 to 65 at the time. Partial retirement was publicly funded, replacing 65 percent of the loss of income resulting from the reduction in hours worked (made less generous in 1981, when the replacement rate was reduced to 50 percent). Upon reaching the statutory pension age of 65, program participants still received a full old-age pension.

Highlights of individual phased retirement programs
Sectoral Collectively Bargained Programs
Local authorities and regions employers: Public sector workers covered by a multiemployer collective bargaining agreement can work 80 percent of full-time hours, receive 90 percent of full-time salary, and receive an employer-sponsored pension as if working full-time.

Employers of graduate engineers: Engineers aged 60 and older covered by a multiemployer collective bargaining agreement may apply for the right to part-time retirement. Once approved, the employees can ask to reduce their hours and receive 50, 80, or 90 percent of the earned employer-sponsored pension.

Employers of professional employees: White-collar union members working in all parts of the labor market, including schools, healthcare, trades, media, police, sports, and telecom, among others, are covered by a multiemployer collective bargaining agreement that allows phased retirement. This program allows workers aged 62 and older to shorten their working hours and begin to take withdrawals from their employer-sponsored pension.

Sweden (cont.)
Sources of retirement income (cont.)
Employer-sponsored pensions: Workplace pension plans are generally established through collective bargaining agreements and cover about 90 percent of workers in the public and private sectors. Employers and unions negotiate the details of workplace pensions in four sectoral collective bargaining agreements: blue-collar private sector, white-collar private sector, state employees, and municipal employees. Most workplace pensions are DC plans. In general, workers can withdraw from pensions at age 55.

Source of supplemental income
In Sweden, employees participating in the phased retirement programs we reviewed were generally compensated for wages foregone due to reduced hours primarily by withdrawing funds from their own employer-sponsored pension plan or their own savings, as necessary. Workers also have the option to withdraw benefits from the national pension after age 61.
Individual savings: Until 2016, it was possible to make tax deductions for private pension saving, up to a maximum. The tax-deductibility of private voluntary pension savings was abolished in 2016 for all but the self-employed, who do not qualify for occupational pension plan deductions.

United Kingdom
• Population: 66 million (2017)
• GDP: $2.62 trillion (2017)
• Statutory retirement age (state pension age): 65, gradually rising to age 66 from 2018 to 2020, to age 67 from 2026 to 2028, and to age 68 between 2037 and 2039
• Early retirement age: None (for the state pension)

National efforts to encourage phased retirement
Since 2014, the UK has had a flexible work policy under which any employee who has worked for their employer continuously for at least 26 weeks has the statutory right to request flexible work. There are several types of flexible working, including job sharing, working from home, working compressed hours, or working annualized hours, among other things. The policy covers workers who want to phase into retirement.

Sources of retirement income
National pension: A flat-rate single-tier national pension was introduced in April 2016. This new pension plan replaces the previous two-tier system and provides a regular payment of about £164 per week (increasing to £168.60 in April 2019), or £8,528 per year, unless the pension is deferred, in which case it increases by about 5.8 percent for each year of deferral (deferring for 1 year, for example, would raise the weekly payment from about £164 to roughly £173).

Employer-sponsored pension: Since the 2008 Pensions Act, employers have been required to automatically enroll eligible workers into a qualified workplace pension plan and make minimum contributions, with the option for workers to opt out. The qualified plans can be DB, DC, or hybrid plans. The National Employment Savings Trust (NEST), managed as an independent entity, was established by the government to help employers meet their obligation to automatically enroll eligible workers in a retirement plan, and thus functions as the default qualified workplace plan.

Highlights of individual phased retirement programs
Workers covered by this DB pension plan, aged 55 and older, can reduce their hours or move to a less senior position. Reduced income can be supplemented by the worker's workplace pension. Participants can draw some or all of their pension benefits while continuing to contribute into their pension and build up future pension benefits. According to plan documents, actuarial reductions on benefits paid before a worker reaches their statutory retirement age can be waived, in whole or in part, upon agreement with the employer.

Teacher's Pension: Since 2007, teachers aged 55 to 75 in England and Wales covered by this DB pension plan can reduce their earnings by at least 20 percent through part-time work or a reduction in responsibilities, for a minimum of 1 year. This reduction in income can be supplemented by the worker's workplace pension. The maximum amount that participants can withdraw from their pension is 75 percent of their total pension benefits. Remaining pension benefits continue to grow as participants continue to work and contribute on a reduced salary. According to plan documents, benefits taken before the statutory retirement age are subject to actuarial reductions.

United Kingdom (cont.)
Sources of retirement income (cont.)
Individual savings: Savings arranged by the individual—similar to traditional or Roth IRAs in the U.S. The U.K. has Individual Savings Accounts that allow an individual to save up to a designated amount per year tax-free. Workers can take money out of their Individual Savings Account at any time.
Highlights of individual phased retirement programs (cont.)
Civil service pension: Since 2008, civil service workers covered by the civil service pension, aged 55 and older, can reduce their earnings by at least 20 percent through reduced hours or reduced job responsibilities. Participants can take some or all of the pension and pension lump sum they have accrued while continuing to work, and can contribute to their pension until their normal pension age. Drawn-down benefits paid before a worker reaches normal pension age are actuarially reduced because they are being paid early.

A private sector employer in the financial industry offered phased retirement to employees under both a DB and a DC plan. Both plans allow workers aged 55 and older to reduce their hours and receive benefits from their DB and DC pension plans. Workers continue to contribute to their workplace pension and the national pension plan.

Source of supplemental income
In the U.K., employees participating in the phased retirement programs we reviewed were generally compensated for foregone wages by withdrawing funds from their own employer-sponsored workplace pension plan.

Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Michael Collins (Assistant Director), Susan Chin (Analyst-in-Charge), Laurel Beedon, Britney Tsao, Margaret J. Weber, and Seyda Wentworth made key contributions to this report. Also contributing to this report were Sharon Hermes, Amy MacDonald, Sheila R. McCoy, Kelly Snow, and Adam Wendel.
Why GAO Did This Study
In response to an aging workforce, countries around the world have developed policies to encourage older workers to work longer, to improve the financial sustainability of national pension systems and address shortages of skilled workers. Phased retirement is one option that can be used to encourage older workers to stay in the workforce. GAO was asked to look at phased retirement programs in the United States and other countries. In June 2017, GAO issued a report (GAO-17-536) that looked at phased retirement in the United States, where formal phased retirement programs are as yet uncommon. This report looks at phased retirement in other countries. Specifically, GAO examined (1) the extent to which phased retirement exists in other countries with aging populations, (2) the key aspects of phased retirement programs in selected countries, and (3) the experiences of other countries in providing phased retirement and how their experiences can inform policies in the United States. GAO analyzed relevant data, reviewed academic research, and conducted interviews to identify countries with phased retirement, and selected four countries with national policies permitting phased retirement programs with broad coverage for case studies. GAO also conducted interviews with government officials, unions, employer associations, and other experts.

What GAO Found
GAO's review of studies and interviews with employment and retirement experts identified 17 countries with aging populations and national pension systems similar to the Social Security program in the United States. These countries also have arrangements that allow workers to reduce their working hours as they transition into retirement, referred to as "phased retirement." Phased retirement arrangements encourage older workers who might otherwise retire immediately to continue working, which could help alleviate pressures on national pension systems as well as address labor shortages of skilled workers. The 17 countries had established phased retirement programs in different ways: at the national level, via broad policy that sets a framework for employers; at the industry or sector level; or by single employers, often through the collective bargaining process. GAO's four case study countries—Canada, Germany, Sweden, and the United Kingdom (UK)—were described as employing various strategies at the national level to encourage phased retirement, and specific programs differed with respect to design and sources of supplemental income for participants. Canada and the U.K. were described as having national policies that make it easier for workers to reduce their hours and receive a portion of their pension benefits from employer-sponsored pension plans while continuing to accrue pension benefits in the same plan. Experts described two national programs available to employers and workers in Germany, one of which uses tax preferences. Experts also said Sweden implemented a policy in 2010 that allows partial retirement and access to partial pension benefits to encourage workers to stay in the labor force longer. Even with unique considerations in the United States, other countries' experiences with phased retirement could inform U.S. efforts.
Some employer-specific conditions, such as employers offering employee-directed retirement plans and not being covered by collective bargaining, are more common in the United States, but the case study countries included examples of phased retirement program designs in such settings. Certain programs allow access to employer-sponsored or national pension benefits while working part-time. For example, experts said the U.K. allows workers to draw a portion of their account-based pension tax-free, and one U.K. employer GAO spoke to also allows concurrent contributions to those plans. In addition, experts said that certain program design elements help determine the success of some programs. Such elements could inform the U.S. experience. For instance, U.S. employers told GAO that offering phased retirement to specific groups of workers may be challenging because of employment discrimination laws; a union representative in Germany, however, noted that they reached an agreement under which employers may set restrictions or caps on participation, such as 3 percent of the workforce, to manage the number of workers in the program. Employers in the U.S. could explore whether a similar approach, taking into consideration any legal concerns or other practical challenges, could help them control the number of workers participating in phased retirement programs.

What GAO Recommends
GAO is not making recommendations in this report.
Background The Social Security Administration’s (SSA) Disability Insurance (DI) and Supplemental Security Income (SSI) programs are the two largest federal programs providing cash assistance to people with disabilities. The DI program, established in 1956, provides monthly payments to working-age adults (and their dependents or survivors) who are unable to work due to a long-term disability. The SSI program, established in 1972, is a means-tested income assistance program that provides monthly payments to adults or children who are aged, blind, or have other disabilities and whose income and assets fall below a certain level. Individuals with low incomes and assets who also have a sufficient work history may qualify for the DI and SSI programs concurrently. In this case, the individual’s SSI payment is generally offset by the amount of the DI payment. In fiscal year 2016, according to SSA, about 10.8 million disabled workers and their family members received about $143 billion in DI benefits, and an estimated 8.2 million individuals received almost $59 billion in SSI benefits (of those, 2.6 million received SSI in addition to DI or Old-Age and Survivors benefits). Disability Criteria Although DI and SSI have different purposes and target populations, the disability criteria for adults are the same for both programs. To be considered eligible for either program as an adult, a person must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last for at least a continuous period of 1 year or result in death, and (2) prevents them from engaging in any substantial gainful activity (SGA). The disability decision-making process includes five sequential steps (see fig. 1). First, SSA determines if a claimant is working and screens out (denies) claimants who earn over a specified amount. Second, SSA determines whether the claimant has an impairment severe enough to significantly limit his or her ability to do basic work activities and expected to last more than 12 months or result in death, and denies claimants who do not meet these criteria. At the third step, SSA determines whether a claimant’s impairment meets or is equivalent to an impairment listed in SSA’s Listings of Impairments. If a claimant “meets” or “equals” one of the listed impairments, they are allowed benefits. If not, SSA proceeds to the last two steps and assesses whether a claimant, given their impairment, can do their past work (step four) or other work that exists in significant numbers in the national economy (step five). Over time, more of SSA’s disability decisions have been made at the last two steps in the process, which require more judicial discretion than decisions made at steps 1 through 3, according to SSA. In 2000, 29 percent of decisions were made at steps 4 and 5, according to an SSA report. By 2014, nearly half—49 percent—of all decisions were made at these steps. Disability Application and Appeals Process To apply for benefits, a claimant must file an application online, by telephone, or mail, or in person at a local Social Security office. If field office staff determine that the claimant meets the nonmedical eligibility criteria, they forward the claim to the appropriate state Disability Determination Services (DDS) office. DDS staff—generally a team comprised of disability examiners and medical consultants—review medical and other evidence provided by the claimant, obtaining additional evidence as needed, and make the initial disability determination. 
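The five-step evaluation described above is strictly sequential: each step either resolves the claim or passes it to the next, so the process can be summarized as a short decision procedure. The sketch below illustrates only that control flow; the field names and the dollar threshold are hypothetical, and the real determination behind each boolean involves substantial evidence-gathering and judgment (particularly at steps 4 and 5).

```python
from dataclasses import dataclass

# Hypothetical stand-in; SSA sets and updates the actual SGA dollar amount annually.
SGA_MONTHLY_THRESHOLD = 1_180

@dataclass
class Claim:
    monthly_earnings: float
    impairment_is_severe: bool
    meets_duration_requirement: bool   # lasted/expected to last 12+ months or result in death
    meets_or_equals_listing: bool      # matches SSA's Listings of Impairments
    can_do_past_work: bool
    can_do_other_work: bool            # other work existing in significant numbers nationally

def evaluate(claim: Claim) -> str:
    # Step 1: screen out claimants earning over the substantial gainful activity level.
    if claim.monthly_earnings > SGA_MONTHLY_THRESHOLD:
        return "deny"
    # Step 2: the impairment must be severe and meet the duration requirement.
    if not (claim.impairment_is_severe and claim.meets_duration_requirement):
        return "deny"
    # Step 3: allow if the impairment meets or equals a listed impairment.
    if claim.meets_or_equals_listing:
        return "allow"
    # Step 4: deny if the claimant can still perform past work.
    if claim.can_do_past_work:
        return "deny"
    # Step 5: deny if other work is possible; otherwise allow.
    return "deny" if claim.can_do_other_work else "allow"

# Example: a claim that passes the screens at steps 1-4 and is allowed at step 5.
print(evaluate(Claim(0, True, True, False, False, False)))  # -> allow
```

This short-circuit structure is consistent with the observation above that steps 4 and 5 involve more judicial discretion: a claim reaches them only after the more mechanical screens at steps 1 through 3 fail to resolve it.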
In fiscal year 2016, SSA received more than 2.5 million disability claims. If the claimant is not satisfied with this determination, in most states he or she may request a reconsideration of the decision within the same DDS office. If the claimant is dissatisfied with the reconsideration, he or she may request a hearing before an administrative law judge (ALJ). In one of several initiatives to improve the disability determination process, SSA has eliminated the reconsideration step of the process in 10 states, allowing the claimant to appeal the initial decision directly to an ALJ. In fiscal year 2016, claimants appealed more than 698,000 decisions to the hearings level, and SSA issued more than 637,000 dispositions (including allowances, denials, and dismissals). (See fig. 2). Within SSA's Office of Disability Adjudication and Review (ODAR), there are approximately 1,500 ALJs who are located in 166 hearing offices across the country, as well as at five National Hearing Centers. In general, cases are randomly assigned to ALJs within the area each hearing office serves, in the order in which the requests for a hearing are received. The ALJ reviews the claimant's file, including any additional evidence the claimant submitted after the initial determination, and generally conducts a hearing. At the hearing, the ALJ may hear testimony from the claimant, medical experts on the claimant's medical condition, and vocational experts regarding the claimant's past work and jobs currently available in significant numbers in the national economy. The majority of claimants are represented at these hearings by an attorney or nonattorney representative, such as a professional disability representative, relative, or social worker. If the claimant is not satisfied with the ALJ decision, he or she may request a review by SSA's Appeals Council, which is the final administrative appeal within SSA. The Appeals Council may grant, deny, or dismiss a request for review. If it agrees to review the case, the Appeals Council may uphold, modify, or reverse the ALJ's decision, or it may remand the case back to the ALJ to hold another hearing and issue a new decision. In fiscal year 2016, the Appeals Council reviewed more than 154,000 ALJ decisions and remanded 13 percent of them.

Hearings Backlogs and Processing Times in Recent Years
Hearings-level backlogs and processing times have increased between fiscal years 2010 and 2016. The number of annual requests for a hearing before an ALJ peaked in fiscal year 2011 and declined in each subsequent year through fiscal year 2016. Despite this decline, SSA has not been able to keep pace with the demand, in terms of dispositions—the number of cases the agency decided or dismissed—in each of those years after 2010 (see figure 3). By the end of fiscal year 2016, SSA reported there were about 1.1 million pending cases. Average processing times for hearings-level decisions also increased during this same time period, from 426 days to 543 days. During these years, the number of ALJs declined, along with the number of case dispositions per month. For example, SSA reported it employed 1,356 ALJs in fiscal year 2013, and these judges had an average of 48 case dispositions per month. In fiscal year 2015, 1,265 judges had an average of 44 case dispositions per month.
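As a rough consistency check on these figures (our illustration using only the numbers reported above, not an SSA calculation), annualizing the fiscal year 2015 disposition rate gives

\[
1{,}265 \text{ judges} \times 44 \ \frac{\text{dispositions}}{\text{judge-month}} \times 12 \text{ months} \approx 668{,}000 \text{ dispositions per year},
\]

roughly in line with the more than 637,000 dispositions SSA issued in fiscal year 2016 and below the more than 698,000 hearing requests received that year, an imbalance consistent with the growth in pending cases described above.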
Also during this time period, SSA reduced its reliance on senior attorney adjudicators (SAA) to make fully favorable, on-the-record decisions (that is, decisions in which a hearing is not necessary because the documentary evidence alone supports a decision that is fully favorable to the claimant). According to SSA, its backlog will be eliminated when the national average processing time for a hearing decision is 270 days. In January 2016, SSA issued a plan to achieve this goal by the end of fiscal year 2020. However, in its fiscal year 2018 performance plan, SSA set a goal of processing hearings decisions in 600 days (up from a target of 485 days in fiscal year 2010). SSA reported that the increase in average processing times is due to the increase in the number of pending cases. Since SSA generally processes cases in the order in which they are received, it focuses on the oldest cases first, which increases the average processing time for closed cases.

Requirements for Hiring, Overseeing, and Disciplining SSA Administrative Law Judges
The role of ALJ was created by the Administrative Procedure Act, which was enacted in 1946 to ensure fairness and due process in federal agency proceedings involving rulemaking and adjudications. ALJs serve in a number of executive branch agencies, although SSA employs the vast majority. ALJs preside and make decisions at formal adjudicatory proceedings. One of the primary goals behind the creation of the ALJ position is to ensure that judges can conduct hearings free from influence or coercion from the agency. Although ALJs are hired by and serve as employees of executive branch agencies like SSA, the Office of Personnel Management (OPM) is responsible for the initial examination, certification for selection, and implementation of the three levels of basic pay of ALJs. As part of its responsibilities, OPM sets the minimum qualifications for ALJs: they generally must be licensed attorneys with a minimum of 7 years of experience in litigation and/or administrative law, and they must pass the competitive examination. The Administrative Procedure Act gave ALJs qualified decisional independence, with some oversight from agencies. Decisional independence means that ALJs make their decisions free from agency pressure or influence. Federal law also excludes ALJs from performance evaluations and generally requires that disciplinary actions against ALJs be for good cause established and determined by the Merit Systems Protection Board (MSPB). While ALJs have qualified decisional independence, they must follow their agency's policies and procedures when making decisions. The Administrative Procedure Act also authorized agencies to review ALJ decisions. If SSA determines that an ALJ has not followed its policies and procedures, it can issue a directive to the ALJ to comply and, if that is unsuccessful, bring a disciplinary action before the MSPB.

Allowance Rates Vary Across Judges, Even for Typical Claims

Allowance Rates Have Varied Across Judges and Hearing Offices in Recent Years, Even After Holding Constant a Range of Factors Relevant to the Appeals Process
Allowance rates varied across administrative law judges from fiscal years 2007 through 2015. We defined the "allowance rate" for each judge as the number of claims in which a judge granted the claimant Disability Insurance (DI) and/or Supplemental Security Income (SSI) benefits divided by the total number of decisions issued by the judge (excluding claims that were dismissed).
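In symbols, the allowance rate defined above for a judge \(j\) is

\[
\text{allowance rate}_j = \frac{\text{allowances}_j}{\text{allowances}_j + \text{denials}_j},
\]

where dismissals are excluded from the denominator, so the rate reflects only claims decided on the merits.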
We analyzed about 3.3 million decisions made by administrative law judges on adult Social Security disability appeals over this period. The average allowance rate across judges fell 15 percentage points over this period—from a peak of 70 percent in 2008 to 55 percent in 2015—but the range in allowance rates across judges remained fairly constant (see fig. 4). Specifically, the range—the difference between judges with high allowance rates (those at the 95th percentile) and judges with low allowance rates (at the 5th percentile)—was 55 percentage points over this period. This variation in allowance rates persisted, but fell modestly over time, even when we used multivariate statistical methods to hold constant a variety of factors related to the disability appeals process. These factors included characteristics of claimants, judges, and hearing offices, as well as other factors, such as the unemployment rate in a claimant's state, that could otherwise explain differences in allowance rates. Specifically, for the years 2007 through 2015 combined, our analysis estimated that the allowance rate would vary by 46 percentage points for a typical claim, depending on the judge who heard the case. For example, we estimated that the allowance rate for a typical claim heard by a judge with low allowance rates would be 42 percent, compared to 88 percent for a judge with high allowance rates. This estimated range fell from 50 percentage points in 2007 to 45 percentage points in 2015 (see fig. 5). (Appendix I describes this statistical analysis in more detail.) Allowance rates also varied across hearing offices during the same time period, but this variation was considerably smaller than the variation across judges in every year. The estimated range across the entire period was 19 percentage points across hearing offices (see fig. 6), compared to a 46 percentage-point estimated range across judges. Accounting for differences in allowance rates across offices ensured that the variation across judges did not reflect characteristics of their offices (such as the types or severity of disability claims received by their offices). SSA officials noted that the variation in allowance rates we observed across judges was not surprising, nor was the modest narrowing in this range over time. Administrative law judges usually hear complex appeals that may not be clear-cut allowances or denials. As a result, according to SSA officials, given judges' decisional independence, different judges could look at cases with similar fact patterns and circumstances and come to different conclusions. At the same time, officials also pointed to several factors potentially related to the modest narrowing in the range of allowance rates. First, they noted that SSA started conducting quality assurance reviews of a random sample of allowances in 2011—previously, such cases were not reviewed. In addition, they said that Social Security's disability programs and administrative law judges came under increased public and Congressional scrutiny following a high-profile fraud case in 2011 involving a judge and an attorney representative. Further, officials said that the expanding use of electronic case files and data analytics within SSA made it possible for the agency to enhance monitoring of decision-making and share this information with judges.
Finally, while SSA cannot direct judges to decide cases in a particular way, officials suggested that some judges may have "self-corrected" their approach to decision-making, given all of these factors. Our multivariate analysis had some limitations, but it provides more information than simple comparisons of allowance rates across judges. For example, the SSA data we used for this analysis do not include a measure of the severity of a claimant's impairment or their remaining ability to work, which could help explain why one claim with a particular impairment was allowed while another was denied. The data also do not include a standardized measure of the nature of claimants' prior work (such as the skill level or extent of physical labor), which is also relevant for the disability decision. Nevertheless, our multivariate analysis enabled us to compare allowance rates across judges and hearing offices for typical claims. In addition, SSA's practice of assigning cases randomly to judges makes it more likely that the remaining variation we found across judges reflects the unique effect of having a particular judge hear a case, rather than other factors. As a result, even though we could not account for all factors that could explain differences in allowance rates, random assignment increases the chances that such factors were similar across all of the cases heard by individual judges.

Numerous Factors, Particularly Those Representing SSA's Disability Criteria, Are Associated with Variation in Allowance Rates
Although variation in allowance rates persisted across judges, even after controlling for certain factors, many of the factors we identified had meaningful associations with the chance that a claimant was allowed benefits. These factors represent criteria in SSA's disability decision-making process, such as the claimant's age, impairment, prior work, and education. We also identified factors that did not have such associations. Certain claimant characteristics—such as older ages or certain impairments—were associated with higher allowance rates.

Age: Claimants' chances of being allowed benefits increased with age, even holding constant other factors. For example, a 55-year-old claimant was allowed benefits at a rate 4.3 times higher than a typical 35-year-old claimant. This association is consistent with Social Security's vocational guidelines, which are generally more lenient for older claimants. As part of SSA's five-step process to determine eligibility for adult disability benefits, SSA uses a set of rules to evaluate how a claimant's age, education, and work experience affect their remaining capacity for work. SSA's criteria vary across four primary age groups—45-49, 50-54, 55-59, and 60 and older. The criteria are less stringent for claimants in older age groups than they are for younger claimants, because the rules assume that individuals at older ages may be less able to transition to other work.

Impairment: Certain impairments were also strongly associated with the chance of being allowed benefits (see fig. 7). For example, claimants with primary impairments recorded in SSA's data of heart failure or multiple sclerosis were allowed benefits at rates 4.2 and 5 times higher, respectively, than typical claimants with asthma. From fiscal years 2007 through 2015, the allowance rates for claimants with heart failure or multiple sclerosis were 78 and 80 percent, respectively, compared to 44 percent for asthma.
Critical or terminal case: Claimants with critical or terminal cases were allowed benefits at a rate 1.4 times higher than a typical claimant without a critical or terminal case. Critical and terminal cases are cases that require special processing, such as a terminal illness or a veteran with a 100-percent permanent and total disability compensation rating. Prior work: Claimants reporting shorter work histories (4 years or less in the last 15 years before applying for disability benefits) were allowed at a rate 0.8 times as high as a typical claimant with 10 or more years of work history. As expected, given the nature of the work requirements for the DI program, the association with prior work history was stronger for that program than for the SSI program. College education: Claimants who reported having a college-level education or higher were approved at a slightly higher rate (1.1 times higher) than a typical claimant with a high-school education. SSA officials suggested that this association could be an indirect measure of the severity of a claimant’s impairment, a factor for which we did not have data. They said that individuals with higher levels of education often have higher incomes and, therefore, may be less likely to forego their income to apply for disability benefits, were it not for the severity of their disability. Claim type: DI claimants were allowed at a rate 1.7 times higher than a typical SSI claimant. Across judges, the average allowance rate for DI claimants (67 percent) was higher than for SSI claimants (52 percent) from fiscal years 2007 through 2015, with the allowance rate for claimants applying concurrently for DI and SSI benefits falling in between (58 percent). Other Participants in the Disability Appeals Process Claimants who had appointed a representative to present their case, or had a medical expert testify at their hearing, were associated with a greater chance of being allowed benefits, but the presence of a vocational expert had the opposite association. Claimant representative: Similar to findings in our prior work, claimants who had a representative—either an attorney or a nonattorney representative—were allowed at a rate 2.9 times higher than a typical claimant with no representative. SSA officials stated that representatives may have a screening process for potential clients, and under SSA’s fee structure, representatives are paid only if the claimant is awarded benefits. As a result, representatives may tend to take cases they believe will be successful. Officials also stated that a representative can help the claimant by ensuring that the medical evidence and other records are fully developed and help the claimant present their case at a hearing. From fiscal years 2007 through 2015, most claimants (77 percent) had an attorney representative, and 12 percent had a nonattorney representative. Expert testimony: Claimants whose hearings involved testimony from a medical expert were allowed at a rate 1.6 times higher than a typical claimant without a medical expert present. Medical experts include physicians, psychologists, and other types of medical professionals who provide impartial, expert opinion evidence for an ALJ to consider when making a decision about disability. 
SSA officials said that the association of medical experts with an increased chance of allowance is expected, given that judges are required to seek the testimony of a medical expert in certain cases, for example, when the judge is considering allowing benefits because the claimant's impairment may be medically equivalent to one in SSA's Listing of Impairments. In other cases, involving a medical expert is generally at the judge's discretion. From fiscal years 2007 through 2015, 12 percent of decisions involved a medical expert. The presence of a vocational expert had the opposite effect—claimants with a vocational expert testifying were allowed at a rate 0.8 times as high as claimants without a vocational expert testifying. Vocational experts provide objective, expert opinion evidence to the ALJ, primarily at the last two steps of the disability decision-making process, where SSA considers whether claimants can do their prior work or transition to other work available in the national economy. Although involving a vocational expert is generally at a judge's discretion, SSA officials said that they were not surprised by this result, because vocational experts are usually called upon at the final two steps in the disability decision-making process. At that point, claimants had already not been allowed benefits at an earlier step because their impairment(s) did not meet or were not equivalent to an impairment in SSA's listings. From fiscal years 2007 through 2015, most hearings (85 percent) involved a vocational expert.

Judges with certain characteristics, such as those appointed in earlier years, were associated with a greater chance of allowing benefits.

Appointment cohort: A claim heard by a judge appointed between 1995 and 1999 was allowed at a rate 1.5 times higher than a typical claim heard by a judge appointed after 2010. SSA officials said that, since 2010, they have changed the way they train and mentor new judges, and introduced new tools to help provide a standardized decision-making template. As a result, SSA officials said, more recently hired ALJs may be more aware of agency policies and procedures.

Certain characteristics of hearing offices and other factors also were associated with higher chances of allowance. For example:

Hearing type: Claimants whose hearings were held in person were allowed at a slightly higher rate (1.1 times higher) than a typical claimant with a hearing conducted remotely using videoconference technology. This is equivalent to a 2.8 percentage-point higher probability of being allowed benefits for a claimant whose hearing was held in person, compared to an otherwise typical claimant whose hearing was conducted by videoconference. However, we did not seek to estimate the causal impact of videoconferences on allowance rates, and so did not design our analysis to account for all factors that could affect this relationship. Rather, we accounted for the use of videoconferences solely to further ensure that circumstances were similar across the judges and offices we analyzed. Expanding video service delivery is a key goal for SSA, including plans to partner with other agencies, such as the Department of Veterans Affairs, to increase the number of available video hearing sites beyond those already available at hearing offices and the five National Hearing Centers.

Year of decision: Claimants whose appeals were decided in earlier years were associated with a greater chance of being allowed benefits.
While this trend is similar to the raw change over time shown in figure 4, our multivariate analysis showed that this change held even for claimants in similar circumstances. For example, claimants who received decisions in 2007 were allowed at a rate 2.0 times higher than a typical claimant in 2015. This is consistent with other studies that have found trends of lower allowance rates in recent years.

Factors Not Associated with Differences in Allowance Rates
Some factors were not meaningfully associated with allowance rates when holding other factors constant.

Workload measures: Workload and productivity measures at the hearing office and judge level were not meaningfully associated with allowance rates. This includes the annual percentage of cases that were backlogged (that is, awaiting a judge's decision for more than 270 days) at each hearing office, as well as the annual number of dispositions (decisions plus dismissals) each judge issued. This may suggest that judges' decisions to allow or deny cases are not significantly influenced by the number of cases before them, similar to findings in prior research.

Hearing office type: We found no meaningful differences in allowance rates between similar claims heard at one of SSA's National Hearing Centers or a traditional hearing office, after holding constant other factors (including whether the hearing was held by videoconference). SSA has five National Hearing Centers, which hear cases from across the country by videoconference in order to reduce backlogs in certain hearing offices.

Economic characteristics: The unemployment and poverty rates in the claimant's state at the time of the ALJ decision were not associated with allowance rates. Higher unemployment rates can result in increased applications for Social Security disability benefits, because workers with potentially qualifying impairments who lose their jobs may find it more difficult to become re-employed during periods of high unemployment and may therefore apply for benefits. However, the research we reviewed found mixed impacts of unemployment on allowance rates.

SSA's Efforts to Monitor Accuracy and Consistency of Hearings-Level Decisions Lack Performance Measures and Have Not Been Evaluated

SSA Has Timeliness Measures, but Lacks Public Performance Measures for Accuracy and Consistency
SSA has employed a range of efforts to monitor the accuracy and consistency of hearings decisions, but it lacks performance measures to report publicly on these efforts. SSA's current strategic plan includes an objective to "improve the quality, consistency, and timeliness" of its disability decisions; however, all of the hearings-level measures supporting this objective are related to timeliness. In a previous report, we developed nine attributes of performance goals and measures based on previously established GAO criteria, as well as relevant federal laws and performance management literature. One key attribute states that an agency's suite of performance measures should be balanced to cover its various priorities, with each measure covering a priority such as quality, timeliness, or cost of service. However, because SSA's performance measures do not fully reflect its goals, the overall success of SSA's efforts in this area may be limited. SSA previously had performance measures related to hearings-level accuracy, which used data from ALJ peer reviews. These measures were discontinued in fiscal year 2009, when the ALJs conducting the reviews were reassigned to hearing cases.
By comparison, SSA continues to have a measure for accuracy at the initial decision-making level (see table 1). SSA officials stated that they have no plans to add new performance measures related to the accuracy and consistency of hearings decisions to the strategic plan. They said that while they collect and monitor a wide variety of workload and performance measures for day-to-day operations, they have to select a few representative measures that are meaningful to stakeholders and represent agency-wide efforts to achieve its goals. They stated that the current performance measures meet these requirements. Although SSA officials said the agency does not publicly report performance measures related to the accuracy and consistency of hearings decisions, they said that SSA uses internal performance measures related to hearings decisions. However, these internal measures to monitor quality and consistency of hearings decisions have limitations and are not shared with the public. Regional chief judges—who oversee the hearing offices and judges within each of SSA's 10 regions—and others told us that they use a measure known as the "agree rate" to help monitor the quality of a judge's decisions. This measure is based on the number of cases that have been appealed to the Appeals Council by the claimant or representative as a request for review. The agree rate reflects the percentage of cases in which the Appeals Council—the final level of appeals within SSA—concluded that the ALJ's decisions were supported by substantial evidence and contained no error of law or abuse of discretion. However, the agree rate has some limitations. For example, as noted earlier, it does not reflect the accuracy of ALJ decisions that the claimant did not appeal. SSA's Office of the Inspector General (OIG) found that this measure provided information on less than one-quarter of all ALJ dispositions, and that it is not representative of an ALJ's entire workload because it is based only on Appeals Council reviews of appealed cases. In addition, a March 2017 SSA OIG report found that SSA has not maintained historical data on agree rates, limiting the agency's ability to analyze agree rate trends. SSA uses other internal measures to track consistency. For example, SSA developed an internal early monitoring system that tracks 22 metrics of ALJ performance to identify outliers. Three of these metrics (the average number of dispositions a judge issues per day, the agree rate, and the allowance rate) have "alarm thresholds" that flag when an ALJ's performance falls outside a given range. Based on these findings, SSA may conduct a focused quality review (a type of quality assurance review) to ensure the judge's decisions complied with SSA policies, or follow up with the regional chief judge to determine if additional policy guidance or training is needed. Although these internal measures are helpful for management to monitor and improve accuracy and consistency, without sharing this or similar information publicly, SSA lacks accountability for improving the quality of hearings-level decisions. In addition, federal internal control standards state that management should externally communicate the necessary quality information to achieve objectives, including to external stakeholders such as Congress and the public.
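Stated as a formula (our paraphrase of the description above, not SSA's official definition), the agree rate for a judge \(j\) is

\[
\text{agree rate}_j = \frac{\text{appealed decisions of judge } j \text{ upheld by the Appeals Council}}{\text{appealed decisions of judge } j \text{ reviewed by the Appeals Council}}.
\]

Because both the numerator and denominator contain only appealed cases, the measure says nothing about decisions that were never appealed, which underlies the OIG finding that it covers less than one-quarter of ALJ dispositions.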
Further, given the persistent variation in allowance rates, SSA may be missing an opportunity to provide the public with information on the results of its efforts to improve the accuracy and consistency of disability decisions.

SSA Has Efforts to Monitor and Improve Accuracy and Consistency of Hearings-Level Decisions, but Quality Reviews May Overlap and Have Not Been Systematically Evaluated
SSA provides training and tools to all ALJs and initiates disciplinary actions where needed, as part of its efforts to monitor and improve accuracy and consistency. SSA also conducts multiple quality assurance reviews, but some of these reviews may overlap and SSA has not evaluated them.

Training, Tools, and Policy Guidance
ALJs receive ongoing training and guidance from several sources, including through judicial trainings, mentoring, and policy memorandums. In 2006, SSA implemented a three-phase training program for new ALJs, which includes training on core competencies as well as a formal mentoring program in which new ALJs are paired with experienced ALJs for regular sessions over a nine-month period. Regional managers, judges, and stakeholders we spoke with had positive feedback on the training SSA provides to judges. For example, officials from one stakeholder group told us that they believe training had created more consistency in allowance rates. SSA's chief judge also issues guidance memorandums to clarify policies related to the hearings process. For example, in July 2013, SSA issued a memorandum establishing expectations for the instructions judges provide to decision writers, who are SSA staff who prepare the draft decisions. SSA officials said that they issued the memorandum in response to an ALJ who was providing low-quality instructions to decision writers, and SSA realized it had not provided formal guidance on the topic. In addition, ALJs also receive quarterly continuing education training and have a library of reference materials and on-demand video courses to use as needed. SSA also uses internal metrics and provides electronic tools to judges to monitor and improve accuracy and consistency. Regional chief judges regularly review management information (MI) reports and develop strategies, such as recommending training, to address identified issues. Beginning in 2011, SSA established an electronic tool called "How MI Doing?", which allows ALJs to compare their productivity and timeliness metrics to hearing office, regional, or national metrics. The tool also provides data on the agree rate for each judge as well as the hearing office, regional, and national agree rates. Using this tool, judges can also learn the reasons any prior decisions have been remanded, and access on-demand training pertaining to that reason. Regional chief judges we spoke with generally found "How MI Doing?" to be a helpful tool, although SSA does not track judges' usage and has not formally evaluated its effectiveness. In addition, SSA established the electronic Bench Book (eBB), which is designed to assist users with documenting, analyzing, and making consistent and accurate decisions on hearings-level adult disability cases. However, the SSA OIG recently recommended that SSA evaluate eBB and determine whether to continue it. Regional chief judges we spoke with provided mixed feedback on the use of eBB and its usefulness for ALJs. In fiscal year 2016, nearly 500 ALJs (about one-third) used eBB.
In June 2017, SSA officials said that while no formal evaluation of eBB was conducted, they recently received approval to proceed with plans to replace eBB with a similar tool as part of updates to SSA's case management system. SSA also addresses identified issues with the accuracy and consistency of hearings decisions by taking disciplinary actions, as needed. SSA can take non-disciplinary or disciplinary action to address performance concerns. Non-disciplinary actions include training and counseling (known as "collegial conversations"). Another non-disciplinary action is a written directive, which SSA can issue to individual judges to improve performance on workload, scheduling, or policy compliance. From 2007 through 2016, SSA issued about 1,330 such directives. Nearly all (95 percent) were issued to improve timeliness, while about 2 percent were issued to improve policy compliance. If an ALJ's conduct or performance does not change or becomes more egregious, SSA continues with progressive discipline, including reprimand or seeking disciplinary action from the Merit Systems Protection Board, such as short- or long-term suspension or removal. From 2007 through 2016, there were 98 reprimands, 34 proposed suspensions, and 16 proposed removals, according to SSA.

SSA conducts various quality assurance reviews to improve accuracy and consistency. SSA officials stated that the agency has been enhancing its quality review efforts since 2009. Since then, it has added five types of quality assurance reviews that are conducted by three additional offices within SSA (see fig. 8). SSA added quality assurance reviews for various reasons. For example, in 2009, SSA's regional staff under the Office of the Chief Administrative Law Judge began conducting regional inline quality reviews, which involve assessing the extent to which hearing office staff are processing cases and preparing them for hearings in accordance with SSA policy, as well as the policy compliance and legal sufficiency of the draft decision. SSA added this review to enhance its reviews of decisions before they are issued, in an effort to reduce remands. Also in 2009, SSA's Office of Quality Review began conducting disability case reviews to provide feedback on decision-making accuracy to ALJs. In addition, in 2010, SSA created the Division of Quality under the Appeals Council, a unit focused on conducting regular reviews of decisions that claimants did not appeal. Prior to 2010, SSA generally only reviewed decisions that claimants appealed through the Appeals Council. While these quality assurance reviews have somewhat different focuses—for example, some assess aspects of how a case was processed while others review the accuracy of the decisions—they overlap in two key ways. According to prior GAO work, overlap occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. Some of SSA's quality review efforts fit the description of overlap in that they have similar goals and review similar cases. For example:

Similar goals: Several of the reviews have similar goals (see table 2). For example, two of the four entities conducting reviews—the Appeals Council's Division of Quality and staff in SSA's 10 regional offices—both review decisions for policy compliance before those decisions go into effect (known as pre-effectuation reviews).
While one review looks at the judge’s decision and the other looks at the draft decision prior to the judge’s review and approval, according to officials and documents we reviewed, these reviews share similar goals: to guide training and provide feedback to judges. In addition, all the reviews are designed to assess compliance with SSA policy. Similar cases: SSA’s five quality assurance reviews look at similar cases, and could potentially include the same cases (see table 3). SSA takes some steps to prevent assessing the same claim in multiple quality assurance reviews. Officials told us that, in conducting focused quality reviews (conducted after the decision is final), they exclude cases that were reviewed in a pre-effectuation review. However, they said that the Division of Quality does not know whether cases it has selected were also subject to a regional inline quality review. They said that additional efforts to prevent multiple reviews of a case are manual in nature, and thus there is still the potential for claims to be reviewed more than once. Further, SSA officials said they did not see a need to prevent multiple reviews of a case, in particular, because some reviews are conducted before the decision is final and others are conducted after the decision is final. SSA officials stated that opportunities exist to improve coordination across offices conducting quality assurance reviews. We found that several offices coordinated their work in some cases. For example, SSA’s Division of Quality and Office of Quality Review participate in a multi- office workgroup that addresses such issues as policy compliance across the initial and hearings levels of the disability process. In addition, they have also worked together on several studies, including a one-time quality review of 454 claims that were denied at the initial determination level, but were allowed as fully favorable at the hearings level. The Office of Quality Review also reviews the content of selected training for judges. In addition, the Division of Quality provided some initial input when the regional inline review effort was being designed. Prior GAO work has found that enhanced coordination can help to reduce overlap and improve efficiency. Effective October 1, 2017, SSA created a new deputy commissioner-level component, the Office of Analytics, Review and Oversight. This agency reorganization moved six oversight offices into the new component, including the Division of Quality and Office of Quality Review. Officials said the new component will create opportunities for improved coordination between these six offices. While this reorganization creates the opportunity for SSA to assess many of its quality assurance reviews, the regional quality review staff will not be included in the new office, and it is too early to tell how this reorganization will help manage the overlap between SSA’s various quality assurance reviews. In addition, SSA has struggled to sustain all of its quality assurance reviews due to competing demands for the staff who perform them. For example, SSA placed regional inline quality reviews on hold in September 2016 and again in December 2016, because officials said that the agency needed staff to complete pending decisions before a change in the medical listings for mental impairments took effect in January 2017. Decisions not completed before the new listings took effect would have to be redone. 
Also, the Office of Quality Review curtailed its disability case reviews in fiscal year 2016 to help prepare the oldest cases for hearings. As a result, only the Appeals Council's review of appealed ALJ decisions (requests for review) and the Division of Quality's quality assurance reviews were active in 2016. Even as SSA has added quality assurance reviews, it has not systematically evaluated the efficiency and effectiveness of all the reviews to determine the extent to which they may be overlapping or complementary. We found that reviews conducted by the four entities have resulted in similar findings, raising questions about the efficiency of these reviews. For example, during the same 3-year period (fiscal years 2013 through 2015), quality reviews conducted by all four entities found problems with judges' assessment of a claimant's ability to perform work-related tasks, known as a residual functional capacity assessment. In addition, all four entities found problems with the evaluation of medical opinion evidence. Moreover, SSA has not conducted a cost-benefit analysis of the five reviews. Officials said that there are no definite plans to do so, although they may consider conducting such an analysis in the future. We found that costs for the quality assurance reviews conducted in fiscal year 2015 were at least $23.7 million, and in fiscal year 2016 were at least $11.7 million (see table 4). By evaluating the quality assurance reviews to determine the extent to which each is needed to monitor and improve accuracy and consistency, SSA would be better positioned to meet its goals within its resources. In addition, SSA continues to develop and implement initiatives aimed at improving hearing decisions, without evaluating the potential for overlap with existing quality assurance reviews. For example, as part of its backlog reduction plan, known as the Compassionate And Responsive Services (CARES) plan, SSA is using computer algorithms for natural language processing to analyze the text of disability decisions and flag potential errors. Although the agency is piloting this effort in the Appeals Council before expanding it to hearing offices, it did not conduct a cost-benefit analysis. SSA officials said that natural language processing could be used to identify cases for further review, similar to its current selective reviews, and that decision writers could use the tool to conduct their own reviews of their draft decisions. SSA officials said that they do not anticipate much overlap between the use of natural language processing and OAO's pre-effectuation reviews. However, there could be potential for overlap with regional inline reviews, which also review decisions drafted by decision writers. Federal internal control standards state that management should implement control activities through policies. Periodically reviewing policies, procedures, and related control activities for continued relevance and effectiveness in achieving objectives and addressing related risks can help agencies meet this standard.

Conclusions
SSA's disability programs provide more than $200 billion in benefits for tens of millions of Americans annually, making them one of the largest components of the nation's social safety net. The hearings and appeals level of the disability decision-making process is particularly important because about one in three people receiving Social Security disability benefits are granted benefits at this level.
Given the number of people and the dollars at stake, it is crucial that claimants are treated fairly and their applications are evaluated accurately and consistently across the country, at all levels of the program. Some of the variation in allowance rates that we found across judges may be expected, given the complexity of the cases and judges' decisional independence. However, the persistent variation we observed over time, even after accounting for various factors that could otherwise explain allowance rates, might warrant additional attention. SSA is rightly focusing on oversight of judges, but our work suggests that the agency's emphasis on timeliness over accuracy in its public metrics and the potential overlap in its quality assurance efforts may offer opportunities for improving the accuracy and consistency of hearing decisions.

First, this amount of variation in allowance rates underscores the need for SSA to measure and hold itself accountable for accuracy and consistency. However, without sharing performance information on the accuracy and consistency of its hearings-level decisions, such as the rate at which the Appeals Council agrees with a judge's decisions, SSA may not be providing the public with adequate information on progress toward its objective to improve the quality, consistency, and timeliness of its disability decisions. Developing a set of performance measures that includes the accuracy and consistency of hearings decisions will help ensure the overall success of the program.

Second, SSA has not systematically considered how each of its quality assurance reviews helps the agency meet its objective to improve the quality of hearings-level decisions. Although the planned consolidation of multiple oversight and quality review offices is a positive step, it will be important for SSA to consider the usefulness of the information yielded by each quality assurance effort, as well as the costs associated with conducting the effort. Evaluating the efficiency and effectiveness of quality assurance activities can help ensure that SSA is using its resources for maximum benefit toward its objective to improve the quality, consistency, and timeliness of its disability decisions.

Recommendations for Executive Action

We are making the following two recommendations to SSA:

The Commissioner of SSA should develop a set of public performance measures, to include accuracy and consistency, as well as timeliness, of administrative law judges' (ALJ) disability decisions. SSA could consider whether existing quality review or monitoring efforts could provide suitable data for such measures. (Recommendation 1)

The Commissioner of SSA should systematically evaluate the efficiency and effectiveness of its quality assurance reviews and take steps to reduce or better manage any unnecessary overlap among them to ensure strategic use of resources. Such steps could include enhancing collaboration where reviews overlap or only conducting the reviews that are most efficient and effective in achieving agency goals for improving accuracy and consistency of ALJ disability decisions. (Recommendation 2)

Agency Comments and Our Evaluation

In commenting on a draft of this report, SSA agreed with our two recommendations to (1) establish public performance measures for the accuracy and consistency of administrative law judges' decisions, and (2) systematically evaluate its various quality assurance reviews and take steps to reduce or better manage any unnecessary overlap among them.
SSA stated that it would address both recommendations as part of a comprehensive assessment and refinement of its oversight roles and processes. SSA made several other comments about one of our conclusions and our analysis of variation in administrative law judge allowance rates, which we discuss below. SSA also provided technical comments, which we incorporated into the report as appropriate.

In its comments, SSA described its evolving oversight activities at the hearings level, including providing policy guidance and training for judges, capturing and utilizing data to gain a better understanding of trends and challenges, and implementing additional oversight review processes, all of which we discussed in our report. SSA's comments acknowledged that our report describes the steps that the agency has taken to improve oversight, but disagreed with our conclusion that SSA emphasizes timeliness over accuracy. Our final report clarifies that we came to this conclusion based on a review of the performance measures the agency shares with the public in its annual strategic plan and performance reports. As we state in the report, SSA has employed a range of efforts to monitor the accuracy and consistency of hearings decisions, but it lacks performance measures to report publicly on these efforts.

Regarding our analysis of variation in ALJ allowance rates, SSA raised a concern about our finding (on page 26 of the final report) that claimants whose hearings were held in person were slightly more likely (by about 2.8 percentage points) to be allowed benefits than a typical claimant with a hearing held by videoconference. SSA cited its own internal analysis, which found a small (0.6 percentage-point) difference in allowance rates between in-person and videoconference hearings after controlling for a number of factors. It is not surprising, however, that our estimates are somewhat different, since SSA's internal analysis differs from ours in several ways. The primary purpose of our statistical analysis was to isolate variation in allowance rates due to the unique judge or hearing office assigned to each claim. To do this, we developed a multilevel model using 9 years of data that controls for judge, hearing office, and claimant-level factors associated with allowance rates. On the other hand, SSA's analysis was specifically designed to look at the difference in allowance rates between in-person and video hearings. SSA's analysis also covered a shorter, more recent period of time (part of fiscal year 2015, fiscal year 2016, and part of fiscal year 2017) than our study (fiscal years 2007 through 2015). Additionally, the version of the model SSA cited in its comments included hearings held in person or by videoconference only in regular hearing offices, whereas our analysis included hearings held in National Hearing Centers as well as regular hearing offices and controlled for the type of hearing office. These differences notwithstanding, we agree with SSA that the estimated model-adjusted difference in allowance rates between in-person and videoconference hearings in both GAO's and SSA's analyses could potentially be explained by unmeasurable factors.

In addition, SSA noted that our measure of variation in judge decisions focused on allowance rates at the extremes of the distribution.
Given that our charge was to explore the extent of variation in allowance rates across judges, we believe it was appropriate and important to measure the range of allowance rates between judges with high allowance rates (at the 95th percentile) and those with low allowance rates (at the 5th percentile). This approach is more conservative than one that looks at allowance rates across all judges, including potential extreme values, and more nuanced than one that looks only at the number of judges whose allowance rates are higher or lower than a given threshold. Further, our analysis shows that unadjusted allowance rates at the 95th percentile declined over the period of our analysis, from a high of 96 percent in fiscal year 2008 to 82 percent in fiscal year 2015. We saw a comparable decline in allowance rates after applying our multivariate model. To provide additional context, our report figures also show the middle of the distribution (the 25th and 75th percentiles), as well as the average allowance rates. We have also added information to our report further describing this middle range.

Finally, SSA noted that our analysis was not weighted by the number of determinations a judge made, suggesting that judges who decided very few claims, for example, could affect the range in allowance rates or the trends. As we show in Appendix I, Table 7, only 2.3 percent of the judges in our study population heard fewer than 250 claims per year. This group of judges had an unadjusted allowance rate of 61.9 percent, very similar to the allowance rate among judges who heard 500-699 claims per year (61.6 percent). Furthermore, the statistical methods we used to estimate the distributions of allowance rates (multilevel models) adjust the estimates for judges with fewer claims by weighting them more heavily toward the overall approval rate. This prevents judges with smaller caseloads, and therefore higher sampling variation, from contributing overestimated allowance rates that might have inflated our estimated variation across judges.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Commissioner of Social Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to assess (1) the extent to which allowance rates vary across administrative law judges, and any factors that are associated with this variation, and (2) the extent to which the Social Security Administration (SSA) has processes to monitor the accuracy and consistency of hearings decisions. To answer these objectives, we reviewed SSA policies and procedures related to administrative law judge (ALJ) disability hearings and decisions; manuals and documents describing SSA's case processing systems for each level of SSA's disability decision-making process; and guidance and training provided to judges for making disability decisions.
We interviewed SSA officials in several offices within the Office of Disability Adjudication and Review (ODAR), including the Office of the Chief Administrative Law Judge and the Office of Appellate Operations, and conducted semi-structured interviews with Regional Chief Administrative Law Judges in each of SSA's 10 regions. We also observed administrative law judge hearings in one of SSA's five National Hearing Centers, in Falls Church, Virginia (hearings in these offices are conducted by videoconference), as well as in two of SSA's regular hearing offices, in Washington, D.C., and Seattle, Washington. The purpose of these observations was to gain a better understanding of the hearings process in practice, and to inform our scope and methodology for this study. We selected these sites, which are not generalizable to the population of all hearing offices, for a number of reasons, primarily: (1) to observe hearing offices in different geographic locations and observe both in-person and videoconference hearings, and (2) to select sites at which a cross-section of cases with different types of disabilities and impairments were available. We attended hearings involving both adult and child claimants with a mix of physical and mental impairments.

This appendix is divided into three parts. The first describes our data sources and analysis of allowance rates across judges and associated factors, the second describes our multivariate statistical model, and the third describes our work related to our second research objective on SSA's processes to monitor the accuracy and consistency of hearings decisions.

Analysis of Variation in Allowance Rates across Judges and Associated Factors

For this objective, we analyzed data from two primary sources from fiscal years 2007 through 2015: SSA's administrative data systems for the initial and hearings levels of the disability decision-making process, and the agency's personnel data system. We also obtained other SSA administrative data on staffing levels and numbers of pending cases in each hearing office. Finally, we obtained data on state poverty and unemployment rates from the U.S. Census Bureau and the Bureau of Labor Statistics (BLS), respectively.

SSA Administrative Data Systems

To analyze information on all adult disability decisions made by administrative law judges from fiscal years 2007 through 2015—the most current data available at the time of our analysis—we compiled claims data from several SSA administrative data systems. These data contained information on the outcomes of the disability decisions and the characteristics of claims associated with each decision. Specifically, the information was drawn from the following systems:

831 File and Structured Data Repository: The 831 File pertains to the initial and reconsideration level of the disability determination process, within the state Disability Determination Services (DDS). Data on claimant characteristics we used from this system include the date of the claimant's initial application for benefits and the claimant's self-reported years of education. We also received a limited set of data captured from the claimant's disability application in SSA's electronic case folder system (Structured Data Repository), including the number of years a claimant reported being employed out of the 15 years before becoming disabled.
Case Processing and Management System (CPMS): This system pertains to the hearings level and was our primary source of information on hearing outcomes and claim and claimant characteristics. Specifically, this system provided information on claim type (i.e., Disability Insurance, DI; Supplemental Security Income, SSI; or concurrent claim); the outcome of the claim (i.e., dismissed, allowed, or denied) and the date the decision was made; the unique identification number of the administrative law judge (ALJ) who made the decision; whether a medical expert or vocational expert attended the hearing; whether the claimant was represented; the hearing office where the claim was decided and the type of hearing office (i.e., hearing office or National Hearing Center); the claimant's date of birth; the primary impairment at the time of the hearing level; the presence of a secondary impairment; and whether the case was classified as being a critical case—that is, a case requiring special processing, such as a terminal illness. We used case identifiers to link the information from each of these databases that pertained to each disability decision we analyzed.

Federal Personnel and Payroll System

We obtained data from SSA's Federal Personnel and Payroll System (FPPS) database on all administrative law judges who were employed by the agency at any time during the period from January 1, 2005, through December 31, 2015. We obtained information on each judge, such as their date of appointment as an ALJ and the type of appointment (regular career appointment or non-permanent); service computation date; and prior position titles within SSA, if any.

Other SSA Administrative Data

We obtained summary-level data, as of January 2017, from SSA on staffing levels (numbers of ALJs, decision writers, and other support staff) at each hearing office for each fiscal year in our study period (fiscal years 2007 through 2015) from SSA's Payroll Operational Data Store system. We also obtained data on the numbers of cases left pending at the end of each fiscal year (including the number of cases pending for more than 270 days). SSA provided those data from a management information report that uses CPMS data.

Economic Conditions Data

We used publicly available estimates of state poverty rates for each year in our analysis (calendar years 2007 through 2015) from the U.S. Census Bureau's American Community Survey (ACS). We considered using estimates at the county level, but that approach had limitations. First, we would have been limited to using 3-year or 5-year estimates for all counties, because 1-year ACS estimates are only available for areas with populations of 65,000 or more. Second, the Census Bureau cautions against using estimates for particular time periods that do not align with the periods of its estimates. Although using state-level estimates reduced the geographic precision of the estimates, we gained precision by having annual estimates and the ability to measure potential variation in poverty rates over narrower time intervals. We also used publicly available estimates of state unemployment rates in calendar years 2007 through 2015 from the Bureau of Labor Statistics' Local Area Unemployment Statistics data. This variable allowed us to control for labor market conditions over time.

Data Reliability

SSA constructed custom files for GAO from several SSA datasets in response to our data requests.
We assessed the reliability of the data used in our analyses through electronic testing, analyzing related database documentation, examining the SAS code used by SSA to construct the custom files, and working with agency officials to reconcile discrepancies between the data and documentation that we received. We determined that the 831, Structured Data Repository, and CPMS data on ALJ decisions and claimant characteristics and the FPPS data on ALJ appointments were sufficiently reliable for the purposes of describing the extent of variation in the outcomes of ALJ decisions. We also determined that SSA's data on pending caseloads and ALJ and decision writer staffing, by year and hearing office, were sufficiently reliable for the purpose of describing hearing office characteristics. Finally, we determined that ACS data on state poverty rates and BLS data on state unemployment rates were sufficiently reliable for the purposes of describing these state economic characteristics.

Scope of Analysis

Our analyses of ALJ decisions excluded various types of decisions from the CPMS data because they were out of scope for our research objectives (e.g., child cases, non-disability cases, or cases that were decided by SSA staff who were not ALJs) or were not typically randomly assigned to judges. We selected cases that should have been assigned randomly to judges, according to SSA policy, because that random assignment made it more likely that variation in allowance rates across judges in our multivariate analysis reflects the unique causal effect of having a particular judge hear a case, rather than other factors that also vary across judges. Our exclusion criteria were similar to those used by an internal SSA study of ALJ allowance rates, conducted in 2017. We excluded cases that were:

Dismissed. Cases can be dismissed for reasons not related to the merits of the case and that are usually beyond the ALJ's control—for example, the claimant's failure to file a timely request or to appear at the scheduled hearing (without good cause), or the claimant's death before the hearing. In addition, data on key factors for these cases, such as the claimant's impairment, were missing. From fiscal years 2007 through 2015, 1,007,526 claims (16 percent of all claims) were dismissed.

Made "on the record" and not randomly assigned to judges. While most appeals are decided after an ALJ hearing, ALJs and senior attorney adjudicators (SAAs) have the authority to issue on-the-record decisions. These are decisions where a hearing is not necessary because the documentary evidence alone supported a fully favorable decision. SSA has created screening criteria, such as the claimant's age (50 and older) and specific impairments, to help identify possible on-the-record decisions earlier in the process. ALJs and SAAs can also issue on-the-record decisions for cases involving critical need, and claimants and their representatives can request that the ALJ or SAA issue an on-the-record decision. These cases are not assigned randomly to judges. From fiscal years 2007 through 2015, 716,574 claims (11 percent of all claims) were on-the-record decisions, although SSA has issued fewer on-the-record decisions in more recent years.

Issued for children. We excluded claimants younger than 18 at the date of the initial application. We also excluded claimants with missing or invalid age values. From fiscal years 2007 through 2015, 492,158 claims (8 percent of all claims) were for people under 18 or with missing or invalid age values.
We excluded child cases from our analysis because they involve different evaluation criteria.

Remanded to a judge from SSA's Appeals Council (or federal court). These cases represent decisions that were corrected after an order from the Appeals Council or a federal court after the original ALJ's decision. In these cases, judges are often addressing a narrow set of issues identified in the remand order. Remanded cases are also not assigned randomly to judges, since the Appeals Council generally sends them back to the judge who originally issued the decision. However, SSA's Office of the Inspector General (OIG) in 2017 found that about half of the remanded cases in its sample were assigned to a different ALJ than the original ALJ. From fiscal years 2007 through 2015, 293,971 claims (less than 5 percent of all claims) were remands.

Made by senior attorney adjudicators who were not administrative law judges. We excluded decisions made by SAAs. SSA implemented a program in 2007 whereby SAAs located in hearing offices across the country could issue fully favorable on-the-record decisions. According to SSA, this allowed ALJs to focus on cases that are more complex or require a hearing. From fiscal years 2007 through 2015, 227,133 claims (4 percent of all claims) were decided by SAAs.

Appeals of continuing disability reviews (CDRs). These cases represent decisions about whether or not to continue benefits for claimants who were previously found eligible for the program. As such, they involve different evaluation criteria. From fiscal years 2007 through 2015, 245,862 claims (4 percent of all claims) were appeals of CDRs.

Non-disability cases. These cases include Social Security retirement and survivor benefit decisions. We excluded such cases because they involve different evaluation criteria from disability claims and represent a small minority of decisions at the hearings level. From fiscal years 2007 through 2015, 25,293 claims (less than 0.5 percent of all claims) were for non-disability cases.

Decided by judges with limited experience. We excluded cases decided by judges within the first year (365 days) after their appointment as an ALJ, as calculated by the difference between their date of appointment and the date of the decision on each claim. We excluded these decisions to help ensure that variation we identified in allowance rates was not due to the judges' more limited experience deciding Social Security disability claims. From fiscal years 2007 through 2015, 574,307 claims (approximately 9 percent of all claims) were decided by judges with limited experience.

In total, our exclusion criteria reduced the number of records analyzed by about half. Specifically, out of a universe of about 6.3 million records, our study population included about 3.3 million decisions. Nevertheless, the overall allowance rate for our study population over fiscal years 2007 through 2015 was 62 percent, very close to the overall allowance rate for the entire population of claims during this period, which was 64 percent.

Calculation of Allowance Rates

We calculated allowance rates by dividing the number of favorable decisions by the total number of decisions (both unfavorable and favorable).
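To illustrate the exclusion criteria and the allowance rate calculation described above, the following minimal Python sketch applies analogous filters to a hypothetical claim-level extract and computes the overall allowance rate. The file and column names (alj_decisions.csv, outcome, on_the_record, and so on) are illustrative stand-ins, not the actual CPMS or 831 variables.

    import pandas as pd

    # Hypothetical claim-level extract; column names are illustrative
    # stand-ins for the CPMS/831 fields described in this appendix,
    # and flag columns are assumed to be boolean.
    claims = pd.read_csv("alj_decisions.csv")

    in_scope = claims[
        (claims["outcome"] != "dismissed")            # dismissals excluded
        & (~claims["on_the_record"])                  # on-the-record decisions excluded
        & (claims["age_at_application"] >= 18)        # child claims excluded
        & (~claims["remanded"])                       # Appeals Council/court remands excluded
        & (claims["decider"] == "ALJ")                # SAA decisions excluded
        & (~claims["cdr_appeal"])                     # CDR appeals excluded
        & (claims["claim_category"] == "disability")  # retirement/survivor cases excluded
        & (claims["judge_tenure_days"] > 365)         # judges' first-year decisions excluded
    ].copy()

    # Allowance rate: favorable decisions divided by all favorable plus
    # unfavorable decisions (dismissals are already out of scope).
    in_scope["allowed"] = in_scope["outcome"].eq("allowed")
    print(f"Overall allowance rate: {in_scope['allowed'].mean():.0%}")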
We calculated allowance rates for different units of analysis:

overall, by program type (Disability Insurance, Supplemental Security Income, and concurrent), by year, and for all years and program types pooled together;

at the judge level, by year and for all years pooled together; and

at the hearing office level, by year and for all years pooled together.

When analyzing our data at the case level, we identified whether the case was favorable or unfavorable to the claimant (that is, whether the claimant was allowed benefits or not). We did not include cases that were dismissed in our study population for two reasons. First, as discussed above, these cases can be dismissed for reasons not related to the merits of the case, and without a review of the medical evidence. Second, SSA's data on dismissed cases are limited, partially because cases are dismissed without a review of medical evidence. For example, the impairment code from the hearings-level decision was missing for virtually 100 percent of dismissed cases.

For concurrent claims—those in which an individual is applying for Disability Insurance (DI) and Supplemental Security Income (SSI) benefits—we considered a case an allowance if the claimant was approved for either or both programs. Our classification of allowances for concurrent cases differs from SSA's usual practice (although a 2017 internal study of ALJ allowance rates used the same method as ours). SSA officials said they usually allow the SSI decision to "control" the overall outcome of the case. That is, SSA classifies a concurrent claim as an allowance if the SSI decision is an allowance, regardless of the outcome for the DI claim. Officials said that they chose this method primarily for convenience. This results in a different classification of some cases in which the SSI claim was denied but the DI claim was allowed. In such cases, the claimant is receiving a benefit as a result of their concurrent disability claim but would be classified in SSA's data as a denial. However, the resulting difference in the number of allowances is very small—less than 4,000 claims over fiscal years 2007 through 2015—and the different definitions did not substantively affect allowance rates in any year.

Random Assignment of Cases to Judges

SSA policy states that cases are generally assigned on a "first in, first out" basis, meaning that cases are assigned to judges in the order in which they are received. Administrative law judges are assigned cases on a rotational basis, with the oldest case in the backlog given to a judge who most recently decided a case. Therefore, as noted in prior research, the initial assignment of cases to judges is random (conditional on applying at a given hearing office at a given time). Judges do not select their cases, nor are claimants able to request another judge after one is assigned. Claimants are generally assigned to hearing offices based on their ZIP code, although some claimants in hearing offices with higher numbers of pending claims may be transferred to one of SSA's five National Hearing Centers. In those cases, hearings are conducted by videoconference rather than in person, as is traditionally done in SSA's regular hearing offices. However, the claimant may opt out of a videoconference hearing within 30 days of receiving a written notice acknowledging the request for a hearing.
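The rotational "first in, first out" assignment just described can be pictured as two queues: pending cases ordered by hearing request date, and judges in rotation. The following simplified Python sketch illustrates that logic under stated assumptions; it ignores the exceptions discussed next, and the data structures are hypothetical rather than a depiction of SSA's actual case-assignment system.

    from collections import deque

    def assign_cases(pending_cases, judges):
        """Illustrative rotational FIFO assignment: the oldest pending case
        goes to the judge at the front of the rotation, and that judge then
        moves to the back of the rotation."""
        # Oldest hearing request first (ISO dates sort correctly as strings).
        case_queue = deque(sorted(pending_cases, key=lambda c: c["request_date"]))
        rotation = deque(judges)  # hypothetical initial rotation order
        assignments = []
        while case_queue:
            case = case_queue.popleft()   # "first in, first out"
            judge = rotation.popleft()    # next judge in the rotation
            assignments.append((case["case_id"], judge))
            rotation.append(judge)        # judge rejoins the end of the rotation
        return assignments

    # Example: three cases, two judges.
    cases = [
        {"case_id": "A", "request_date": "2014-03-01"},
        {"case_id": "B", "request_date": "2014-01-15"},
        {"case_id": "C", "request_date": "2014-02-10"},
    ]
    print(assign_cases(cases, ["Judge 1", "Judge 2"]))
    # [('B', 'Judge 1'), ('C', 'Judge 2'), ('A', 'Judge 1')]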
There are some exceptions to the "first in, first out" rule, such as cases that are likely to be dismissed or decided on the record (without a hearing) and critical cases (including terminal illness cases and veterans who have a 100-percent permanent and total disability compensation rating). However, as discussed previously, we excluded all major categories of exceptions except critical cases from our analyses and included a variable to identify critical cases.

Assignment of Claims to Steps in SSA's Sequential Disability Decision-Making Process

SSA's disability decision-making process includes five sequential steps, and one part of our analysis was to determine the step at which each decision was made. In consultation with SSA officials, we used a code in CPMS—called the regulation basis code—to assign each claim to a particular step. Each claim in CPMS has between one and four regulation basis codes, depending on whether the claim was for a single program (DI or SSI) or a concurrent claim for both. We assigned each claim to one of the five steps in SSA's disability decision-making process, based on its regulation basis code. Each regulation basis code is associated with one of five steps. Therefore, if a claim had just one regulation basis code, we assigned it to the corresponding step. If a claim had more than one regulation basis code, we used a series of decision rules to select the most appropriate step. Specifically, claims for a single program have up to two regulation basis codes listed, and we used the code that matched the outcome of the case and/or the latest step. We used a similar method for concurrent claims.

We found that approximately 19 percent of all allowances occur at step 3, when SSA determines whether a claimant's impairment meets or is equivalent to an impairment listed in SSA's Listing of Impairments. Most (80 percent) of all allowances are made at the final step (step 5), when SSA determines whether the claimant can do any work in the national economy, given the limitations of their impairment and their age, education, and work experience. More than a quarter (28 percent) of denials are made at step 4, where SSA determines whether the claimant can do their past work, and 62 percent of denials are made at step 5. There are some differences between DI claims and SSI claims in the distribution of allowances and denials over the five steps. SSI allowances occur at step 3 to a greater extent than DI allowances, while SSI denials occur at step 5 to a greater extent than DI denials (see table 5 below).

Statistical Model of Variation in Allowance Rates across Judges and Associated Factors

We developed our multivariate statistical analyses in consultation with GAO statisticians, economists, and social scientists and SSA officials and experts. Our analysis was also informed by a comprehensive review of the literature pertaining to judicial decision-making and, in particular, adjudication for SSA's disability programs. Specifically, we reviewed more than 90 potentially relevant peer-reviewed academic journal articles, government reports, and nonprofit association and think tank white papers. We selected 39 of these studies or reports for a detailed review of the scope and methodology, key factors or variables used in any empirical analyses, and other relevant findings. We also reviewed relevant SSA Office of the Inspector General (OIG) reports and consulted with SSA and OIG officials, and reviewed prior GAO reports that modeled judicial outcomes.
Our statistical model included variables that are either direct or approximate measures for: (1) claimant characteristics that represent criteria used in the disability decision-making process, (2) judge characteristics, (3) other participants in the decision-making process, (4) SSA administrative characteristics, and (5) economic characteristics of the claimant's state. Our analysis was purely statistical, in that we did not conduct the legal analysis needed to reach conclusions about what legal factors might have affected a judge's decision or whether the decision that was reached in any particular case was correct. Similarly, we are not making any predictions about the likely or correct outcome of future individual decisions. Each case is unique in both its facts and circumstances and must be examined on its own merits.

We included factors that represent criteria used in the decision-making process, such as the type of claim (DI, SSI, or concurrent) and the claimant's age, years of education (grouped into equivalent levels: less than high school, high school, some college, and college or higher), and primary impairment. We also included factors related to the judge's employment as an ALJ, such as the year appointed as a judge, the type of appointment (whether they had a career or temporary, non-permanent assignment), and any prior work history at SSA (specifically, whether they were an attorney or held another position prior to being appointed as an ALJ).

Other participants in the decision-making process

We included factors that represent other participants in the decision-making process, such as the claimant's use of an attorney or non-attorney representative, or the testimony of a medical or vocational expert at the hearing. Our prior work has shown, for example, that claimants who were represented by an attorney or a person who is not an attorney (such as a relative or professional disability representative) were more likely to be allowed disability benefits than claimants who had no representative.

We included factors related to SSA's administration of its disability programs, such as the hearing office in which the claim was decided, whether the claim was heard in one of 10 states that do not have a reconsideration step between the initial state-level Disability Determination Service decision and a hearing before an ALJ, and the percentage of pending cases at the hearing office that were pending for more than 270 days (SSA's definition of a "backlogged" case).

Finally, we assessed economic characteristics of the state in which the claimant resided because some prior research suggests that such factors may be associated with disability application and allowance rates. Specifically, we analyzed:

The unemployment rate in the claimant's state as of the year of each decision in our analysis, from the Bureau of Labor Statistics' Local Area Unemployment Statistics data. We selected this factor in order to account for the labor market conditions where claimants live.

The poverty rate in the claimant's state as of the year of each decision in our analysis, from the Census Bureau's American Community Survey (ACS).

Goals of Analysis

The primary goal of our analysis was to isolate variation in allowance rates due to the unique judge or hearing office assigned to each claim by controlling for multiple factors that could otherwise affect this variation.
Some variation in allowance rates across judges and hearing offices could reflect the distribution of other factors that are correlated with allowances. For example, judges who hear disability cases in regions of the country with higher obesity rates—a known risk factor for disability—may appear to have higher allowance rates than those in regions with low obesity rates. Because judges' decisions to allow benefits may be related to this or other factors, simple univariate comparisons of allowance rates across judges may reflect characteristics of the cases that judges hear. To help isolate the potential unique effects of judges, we used multilevel, multivariate statistical models that held constant various factors that could have been associated with allowance rates. We held constant variables available in SSA and other public data sources that were relevant to the claim appeals process, in order to estimate the amount of potential residual variation across judges.

Statistical Model

The data we assembled had a multilevel structure, with applications for disability benefits clustered within the same judges and hearing offices. Judges were associated with multiple hearing offices, because judges sometimes decided cases in multiple hearing offices during the period of our analysis. For example, judges could travel to more remote sites to hear cases on a part-time basis. The data and outcome of interest suggested that a multilevel or mixed logistic regression model would adequately reflect the data generation process. We developed a mixed model that represented the grouping variables—judge, hearing office, and primary diagnosis code—with random intercepts, similar to prior research. We modeled group variation with random effects primarily for parsimony. Modeling group variation with fixed effects would have required estimating several thousand explicit parameters, one for each group level, which would have consumed many degrees of freedom. Estimating the amount of variation across groups then would have required interpreting many contrasts between pairs of fixed effect estimates. In contrast, modeling group variation using random effects allowed us to represent the variation with probability distributions and a small number of summary (hyper) parameters, such as the standard deviation of the judge random effect. Substantively, random effects accurately represented the SSA policy of randomly assigning judges to cases in our study population, using a "first in, first out" method. Moreover, we modeled variation across judges and hearing offices as random, which implies that we seek to make inferences about a larger, hypothetical population of judges and hearing offices that could exist if we replicated the study in the future. This seems appropriate, because the application review process could be repeated across many new judges and hearing offices in the future. We do not seek to make inferences limited to the judges and hearing offices at the particular time we assembled data.

We held constant case, judge, and hearing office characteristics using covariates with fixed parameters. The smaller number of parameters associated with these covariates made a fixed effects approach easier to apply and interpret. We assumed that the covariate effects did not vary across groups, so that only the model's intercept varied randomly. We had no prior expectation that specific covariate effects should have varied across groups.
Moreover, increasing the number of random effects would have increased the complexity of the model and could have made it hard to estimate computationally. We viewed the covariates primarily as controls for isolating variation across judges and offices. We did not attempt to build a comprehensive model that correctly specified how all of the covariates were causally ordered and related to each other and the probability of an allowance. As a result, our estimates of these parameters may not be consistent with those obtained from a more comprehensive modeling effort, or from analyses designed to estimate the causal effects of particular variables, such as the use of videoconferences. In the body of the report, we present alternative explanations and provide context to avoid interpreting the covariate effects with a high degree of causal certainty. For example, we note that claims with legal representation may have higher approval rates if representatives accept claims with greater merit and, therefore, a greater chance of compensation. Below, we test alternative model specifications for covariates where the causal ordering may be ambiguous, in order to avoid biasing estimates of the judge and office parameters of primary interest.

Certain variables and parameters were applied across multiple versions of the model (described below). Let Y_ijod denote the allowance or denial decision for claimant i at any step of the appeals process, with Y_ijod = 1 if the ALJ allowed the claim and 0 otherwise. Each model took a typical hierarchical generalized linear form for a binary outcome:

\Pr(Y_{ijod} = 1) = \pi_{ijod} = g\left(\alpha + X_{ijo}\beta + X_{jo}\gamma + X_{o}\delta + \varepsilon_{j} + \varepsilon_{o} + \varepsilon_{d}\right)

The probability of allowance, π_ijod, was a function of covariate vectors measuring characteristics of claims, X_ijo, characteristics of the ALJs assessing those claims, X_jo, and characteristics of the hearing offices where the decision occurred, X_o. Claimants were clustered in j = {1, 2, …, J} judges, and judges were clustered in o = {1, 2, …, O} offices. g is the inverse logistic link function. We included normally distributed random effects, ε_(.), for each judge, office, and the claimant's primary diagnosis, indexed by diagnosis codes d = {1, …, D}. Random effects allowed the intercept for each group, α_(.) = α + ε_(.), to vary around the population average intercept, α, as a function of the group's variance, σ²_(.):

\alpha_{(\cdot)} = \alpha + \varepsilon_{(\cdot)}, \qquad \varepsilon_{(\cdot)} \sim N\left(0, \sigma^{2}_{(\cdot)}\right)

To make interpretation and computation easier, we classified all continuous covariates into substantively meaningful categories, and set the omitted reference categories to the sample modes. This transformation implied that the random effect variance, σ²_(.), described variation across judges and offices for a claim that had the modal value of all other covariates in the model and sample. The reference claim remained constant across models fitted to different subsamples, in order to make inferences about a claim that was typical for the study population. The center of the data at the modes, α, may not necessarily correspond to an actual claim. For example, all judges do not practice at the modal hearing office, and the modal age for the study population may not be typical for claims made in the modal office. Nevertheless, rescaling facilitates estimation and interpretation of the model, because all inference can be done on α and α_(.) directly, using the random effect variance, σ²_(.), without transformation. This allowed us to concisely describe variation in allowance rates for a hypothetical, typical claim in the joint covariate distribution.
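As an illustration of how a multilevel logistic model of this general form could be estimated, the following minimal Python sketch uses the Bayesian mixed GLM routines in statsmodels. It is not the estimation code used for this report: the data file, column names, and abbreviated covariate list (study_population.csv, allowed, alj_id, and so on) are hypothetical stand-ins, and the fixed-effect formula includes only a few of the covariates described above.

    import numpy as np
    import pandas as pd
    from scipy.special import expit  # inverse logistic link, g
    from scipy.stats import norm
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    # One row per decision; names are hypothetical stand-ins for the study data.
    df = pd.read_csv("study_population.csv")

    # Random intercepts for judge, hearing office, and primary diagnosis;
    # a short list of fixed-effect covariates stands in for the full set.
    vc_formulas = {
        "judge": "0 + C(alj_id)",
        "office": "0 + C(office_id)",
        "diagnosis": "0 + C(primary_diagnosis)",
    }
    model = BinomialBayesMixedGLM.from_formula(
        "allowed ~ C(claim_type) + C(age_group) + C(education) + C(represented)",
        vc_formulas,
        df,
    )
    result = model.fit_vb()  # variational Bayes approximation
    print(result.summary())

    # Variance component parameters are reported as log standard deviations,
    # in the order the vc_formulas dictionary lists them (judge is first).
    sigma_judge = np.exp(result.vcp_mean[0])  # judge SD on the logit scale
    alpha = result.fe_mean[0]                 # intercept for the reference claim

    # Middle 90 percent of judge intercepts, mapped to the probability scale.
    low, high = expit(alpha + norm.ppf([0.05, 0.95]) * sigma_judge)
    print(f"Middle 90 percent of allowance rates: {low:.0%} to {high:.0%}")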
In the body of this report, we summarized variation across judges and offices, holding constant other covariates at their sample means, using the estimated distribution of group intercepts scaled in logits:

\hat{\alpha}_{(\cdot)} \sim N\left(\hat{\alpha},\, \hat{\sigma}^{2}_{(\cdot)}\right)

To describe variation across groups on the probability scale, we estimated the quantiles bounding the middle 50 and 90 percent of the group density on the logit scale and then transformed them using the inverse logistic function, g. This reference case does not represent a feasible claim, because the means of the categorical covariates are just the sample proportions. However, this approach complements the centering of the sample, which allows the group variance parameters, σ²_(.), to represent variation across groups for a feasible reference case at the sample modes.

Covariates and Subsamples

We fit a sequence of models using different covariates and subsamples, listed below. Fitting several models allowed us to assess how simplifying assumptions, such as ignoring the step at which ALJs made allowance decisions, affected our results. This approach also assessed the stability of estimates across multiple runs of the computational model estimation methods. We describe the substantive meaning of the covariates above, and give their exact measurement when reporting results in table 7 below.

Model 1: Intercepts Only

Model 2: Add Covariates

(Unemployment and poverty vary at the state level, not at the office level, but we include them with the office covariates for simplicity.)

Model 3: Claims Decided at Steps 4 or 5

We estimated Model 2 for only those claims decided at steps 4 or 5, according to each claim's Regulation Basis Code. In these last two steps of the sequential disability decision-making process, the judge determines whether claimants retain the ability to perform their past work or other work in the national economy, given the limitations of their impairment and their age, education, and work experience. SSA officials provided methods to map these codes to steps of the appeals process. Estimating the model for decisions at steps 4 or 5 allowed all parameters to vary at these steps versus all steps in the pooled sample. For example, diagnosis may be less strongly associated with allowances at step 5 than at step 3, while the claimant's age may be more strongly associated.

Models 4-6: Stratify by Year of Decision and Claim Type

To assess how the amount of variation across judges and offices has changed over time, we estimated Model 2 separately for each year of decision, claim type, and the cross-classification of these variables. Stratified models allowed all parameters to vary across claim types and years.

Model 7: Exclude Potentially Endogenous Covariates

We excluded covariates from Model 2 that may not be exogenous to the probability of approval. These include representation by an attorney or other person and the presence of a medical or vocational expert. Claims with legal representation may have higher approval rates if representatives tend to accept claims with greater merit and, therefore, a greater chance of compensation. (Representatives typically receive a share of their client's benefits as compensation.) According to SSA officials, medical and vocational experts may be more likely to testify at a hearing, depending on the judge's expected ruling on the case. Although judges generally have discretion about whether to involve medical and vocational experts, judges are required to seek the opinion of a medical expert in certain cases.
For example, a judge must have a medical expert provide an opinion if the judge is considering allowing benefits because the claimant's impairment may be medically equivalent to one in SSA's Listing of Impairments. Excluding these covariates avoids potentially biasing estimates of the judge and office parameters of primary interest.

Results

We provide the estimated distributions of allowance rates across judges, hearing offices, and primary diagnoses, holding all other covariates at their means, in table 6 below. Each row in the table lists results for one specification of the model described above. We derived quantiles of the distributions across groups with the data and estimated model parameters, using the methods above. The standard deviations of the allowance rates on the logit scale are explicit parameters in the model and were directly estimated with the fixed coefficients. We used these distributions to describe variation across judges, offices, and diagnoses in the body of this report and in figures, where we interpret the results in more detail.

Table 7 below provides estimated odds ratios of allowances for the factors other than judge, hearing office, and diagnosis in our primary model of ALJ allowance rates (Model 2 above), along with sample distributions and raw allowance rates. We used the primary model to support our findings in the body of this report, where we interpret the results in more detail. Our model included variables that are measures or approximate measures for (1) claimant characteristics that represent criteria used in the disability decision-making process, (2) judge characteristics, (3) other participants in the decision-making process, (4) SSA administrative characteristics, and (5) economic characteristics of the claimant's state. The interpretation of the odds ratio for a particular variable depends on whether the variable is a dummy variable or a categorical variable. For dummy variables, a statistically significant odds ratio that is greater/less than 1.00 indicates that claimants with that characteristic are more/less likely to be allowed than claimants without it. For categorical variables, a statistically significant odds ratio that is greater/less than 1.00 indicates that claimants in that category are more/less likely to be allowed than the claimants in the reference category. For example, an odds ratio of 2.00 for a dummy variable would indicate that the odds of allowance for claimants with that characteristic are twice the odds for claimants without it.

Evaluation of SSA's Processes to Monitor Accuracy and Consistency in Hearings Decisions

For objective 2, we reviewed relevant federal laws, regulations, and documentation, and collected testimonial evidence from SSA officials to describe and evaluate the processes that SSA uses to monitor hearing decisions, detect variation, and improve accuracy and consistency. We interviewed SSA officials at different levels, including officials at headquarters, regional, DDS, and field office levels. We reviewed documents such as SSA's Hearings, Appeals, and Litigation Law (HALLEX) manual, policy memoranda issued by the Chief Administrative Law Judge, monitoring and quality assurance reports, user manuals and guides for electronic tools, SSA OIG reports, and descriptions of processes that are under development. We assessed these monitoring efforts against federal internal control standards and our management and evaluation guide for assessing fragmentation, overlap, and duplication in government programs.
We also reviewed SSA's annual performance plans from fiscal year 2006 through fiscal year 2017 to identify performance measures the agency has established to improve the accuracy and consistency of its hearings decisions. We evaluated the current performance measures using key attributes of performance measures used in prior GAO work and federal internal control standards. In addition to interviews with agency officials, as described above, we also interviewed officials from organizations representing judges, disability claimants, and representatives to obtain their perspectives on SSA's efforts to monitor and improve accuracy and consistency.

Appendix II: Comments from the Social Security Administration

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Erin M. Godtland, Assistant Director; Rachael Chamberlin, Analyst-in-Charge; Dana Hopings, LaToya King, Stephen Komadina, Rhiannon Patterson, and Jeff Tessin made significant contributions to the report. In addition, Daniel Bertoni, Deborah Bland, David Chrisinger, Melinda Cordero, Holly Dye, Bill Egar, Alex Galuten, Benjamin Licht, Serena Lo, Mimi Nguyen, Samuel Portnow, Sheila McCoy, and Shana Wallace made valuable contributions.
Why GAO Did This Study

Individuals who do not agree with the initial decision on a claim for Social Security disability benefits can ultimately appeal the decision by requesting a hearing before one of SSA's approximately 1,500 administrative law judges. However, the rate at which these judges have allowed benefits has varied, raising questions about the reasons for this variation. GAO was asked to review aspects of SSA's oversight of judges' decisions. This report examines (1) to what extent allowance rates vary across administrative law judges, and factors associated with this variation; and (2) the extent to which SSA has processes to monitor the accuracy and consistency of hearings decisions. GAO developed a statistical model to analyze SSA data on adult disability decisions made by administrative law judges from fiscal years 2007 through 2015, the most current data available at the time of GAO's analysis; reviewed relevant federal laws, regulations, and agency documents; and interviewed SSA officials and chief judges in SSA's 10 regions, as well as officials from organizations representing judges, disability claimants, and claimant representatives.

What GAO Found

Allowance rates—the rate at which Social Security Administration (SSA) administrative law judges allowed disability benefits to be paid when claimants appealed—varied across judges, even after holding constant certain characteristics of claimants, judges, hearing offices, and other factors that could otherwise explain differences in allowance rates. Specifically, GAO estimated that the allowance rate could vary by as much as 46 percentage points if different judges heard a typical claim (one that was average in all other factors GAO analyzed). SSA officials said that this level of variation is not surprising, given the complexity of appeals and judicial discretion. Nonetheless, the variation declined by 5 percentage points between fiscal years 2007 and 2015 (see figure), a change officials attributed to enhanced quality assurance efforts and training for judges. GAO also identified various factors that were associated with a greater chance that a claimant would be allowed benefits. In addition to characteristics related to disability criteria, such as the claimant's impairment and age, GAO found that claimants who had representatives, such as an attorney or family member, were allowed benefits at a rate nearly 3 times higher than those without representatives. Other factors did not appear related to allowance rates, such as the percentage of backlogged claims in a hearing office.

SSA has various reviews to monitor the accuracy and consistency of hearings decisions by administrative law judges, but some of these reviews may overlap and SSA has not systematically evaluated them. Specifically, SSA conducts five types of quality assurance reviews of hearings decisions, several of which have similar goals and may look at similar claims. SSA has not evaluated the efficiency or effectiveness of these reviews, despite spending at least $11 million on them in fiscal year 2016. Moreover, the agency has struggled to sustain all of its quality reviews due to competing priorities—two of the five reviews were curtailed in 2016 because SSA reassigned staff to help expedite claims decisions. By evaluating which quality assurance reviews are most effective and efficient in improving accuracy and consistency, SSA would be better positioned to meet its goals within its resources.
What GAO Recommends

GAO is making two recommendations, including that SSA systematically evaluate its quality assurance reviews and take steps to reduce or better manage any unnecessary overlap among them. SSA concurred and plans to address them through a comprehensive assessment of its oversight.
Background

Federal agencies implement specific elements of laws through regulations, which typically require or prohibit certain actions. Congresses and Presidents have required agencies to comply with multiple procedural and analytical requirements prior to issuing regulations.

Administrative Procedure Act (APA). APA established the basic framework of administrative law governing federal agency action, including rulemaking. Before promulgating a regulation, agencies are generally required to publish a notice of proposed rulemaking (NPRM) in the Federal Register and take comments concerning the proposed rule. However, agencies may issue final rules without the use of an NPRM in certain cases, including when the agency determines for "good cause" that notice and comment procedures are "impracticable, unnecessary, or contrary to the public interest." Further, Congress sometimes enacts laws that direct an agency to issue regulations without notice and comment.

Regulatory Flexibility Act. RFA was enacted in response to concerns about the effect that federal regulations can have on small entities. RFA requires agencies to consider the impact of their regulations on small entities and to prepare regulatory flexibility analyses, unless the head of the agency certifies that the rule would not have a "significant economic impact upon a substantial number of small entities."

Paperwork Reduction Act. PRA was enacted to help minimize the burden that federal information collections (e.g., forms, surveys, or questionnaires) impose on the public, while maximizing their public benefit. PRA requires agencies to provide public notice, solicit comments, and request approval by OMB before imposing new information collection requirements.

Unfunded Mandates Reform Act of 1995. UMRA was enacted to address concerns about federal statutes and regulations that require nonfederal parties to expend resources to achieve legislative goals without being provided funding to cover the costs. Among other things, UMRA generally requires federal agencies to prepare a written statement containing a "qualitative and quantitative assessment of the anticipated costs and benefits" for any rule that includes a federal mandate that may result in the expenditure of $100 million or more in any 1 year by state, local, and tribal governments in the aggregate, or by the private sector.

Small Business Regulatory Enforcement Fairness Act. Under SBREFA, the Environmental Protection Agency (EPA) and the Occupational Safety and Health Administration (OSHA) are required to convene Small Business Review Panels (also known as SBREFA panels) for rulemaking efforts that are expected to have a significant economic impact on a substantial number of small entities. These panels are intended to seek direct input early in the rulemaking process from small entities that would be impacted by the rulemakings.

Congressional Review Act. CRA was enacted to better ensure that Congress has an opportunity to review and possibly disapprove regulations, in certain cases, before they become effective. CRA established expedited procedures by which Congress may disapprove agencies' regulations by introducing a resolution of disapproval that, if adopted by both Houses of Congress and signed by the President, can nullify an agency's action. CRA states that an agency may not reissue the regulation in "substantially the same form" as a regulation Congress disapproved.
CRA requires us to provide Congress with a report on rules OMB's Office of Information and Regulatory Affairs (OIRA) determines to be major rules, including our assessment of the issuing agency's compliance with the procedural steps required by various acts and executive orders governing the rulemaking process. CRA's definition of a major rule is similar to E.O. 12866's definition of economically significant rules, and generally, economically significant regulations are classified for purposes of CRA as major rules and significant regulations are classified as nonmajor rules. CRA generally provides Congress time to review major rules before those rules take effect.

Executive Orders and Relevant Guidance. In addition to the statutory requirements described above, executive agencies must also follow requirements Presidents have set in executive orders and related guidance:

Role of OIRA: Under E.O. 12866, issued in 1993, OIRA reviews regulations deemed significant. The Administrator of OIRA is responsible for providing meaningful guidance and oversight with respect to regulatory planning and review to the extent permitted by law. Further, the order states that OIRA is to be the repository of expertise concerning regulatory issues.

Role of agencies and assessment of costs and benefits: Among other things, under E.O. 12866 agencies are responsible for developing regulations and assuring that the regulations are consistent with applicable law. The order also requires agencies to prepare an agenda of all regulations under development or review. For economically significant regulations, E.O. 12866 requires agencies to provide to OIRA (unless prohibited by law) an assessment, including the underlying analysis, of the costs and benefits anticipated from the regulatory action and feasible alternatives. For significant regulations, E.O. 12866 requires agencies to provide to OIRA an assessment of the potential costs and benefits anticipated from the planned regulatory action. Circular A-4, published in 2003, provides guidance to agencies on how to conduct the required analysis and, among other things, directs agencies to estimate the costs and benefits of a regulation and "transfer" payments that may result from the regulation. Transfer regulations redistribute income from (usually) taxpayers to program beneficiaries (e.g., Medicare recipients), but generally do not result in economic benefits or costs.

Agencies Published More Final Regulations and More Frequently Provided Advance Notice to the Public during Transition Periods

The three administrations published a higher number of economically significant and significant final regulations at the end of each President's second term compared to the nontransition periods. (See figures 1 and 2.) The administrations published on average roughly 2.5 times more economically significant regulations during their transition periods than during nontransition periods. Our analysis also showed that within their transition periods (September 23 through January 20), the administrations of Presidents Clinton and Obama increased their rate of economically significant rulemaking following the elections held in 2000 and 2016 (between Election Day in November and January 20), while President Bush's administration decreased the rate of economically significant rulemaking following the 2008 election. (See appendix II.)
Economically Significant Regulations Published in Both Transition and Nontransition Periods Were Concentrated in Certain Agencies

We found that the majority of economically significant regulations were published by a subset of agencies across the three administrations and between transition and nontransition periods. In particular, the Department of Health and Human Services (HHS) published one-third of the economically significant regulations we reviewed across all periods and was the most active agency in both transition and nontransition periods. (See table 1.) For example, the Centers for Medicare & Medicaid Services typically published regulations every calendar year describing reimbursement rates for medical providers serving Medicare patients. For significant regulations, HHS was also the most active agency during both transition and nontransition periods. (See table 2.) However, significant rulemaking was less concentrated in a subset of agencies than was economically significant rulemaking. Specifically, the five agencies that published the largest number of economically significant regulations accounted for between 65 and 70 percent of these regulations during both transition and nontransition periods, while the five agencies that published the largest number of significant regulations accounted for 42 percent of these regulations during both transition and nontransition periods.

For Economically Significant Regulations, Agencies More Frequently Provided Advance Notice to the Public during Transition Periods

To provide perspective on the transparency of regulatory activity and the types of rulemaking procedures agencies used during transitions, we examined two indicators: (1) whether regulations were advertised in the previous spring's Unified Agenda and (2) whether the final regulation was preceded by a proposed rule or NPRM.

Prior Appearance in the Unified Agenda: The semiannual Unified Agenda was established by E.O. 12866 and provides uniform reporting of data on regulatory and deregulatory activities under development or review throughout the federal government. By including a planned regulation in the previous spring's Unified Agenda, policy makers provided members of the public with several months of notice before a final regulation was published during any of the transition or nontransition periods.

Notice of Proposed Rulemaking: The notice and comment process was established by the APA and gives the public an opportunity to provide information to agencies on the potential effects of a regulation or to suggest alternatives for agencies to consider before the agency publishes the final regulation. By publishing an NPRM, policy makers provided members of the public with an opportunity to influence the development of the regulation.

Overall, we found that agencies more frequently provided advance notice of regulations to the public during transition periods by announcing planned activities in the Unified Agenda and publishing NPRMs. A higher percentage of economically significant regulations appeared in the previous spring's Unified Agenda during Presidents Bush's and Obama's transition periods compared to nontransition periods. (See figure 3.) President Clinton's administration published a smaller percentage of regulations in the Unified Agenda during its transition period compared to its nontransition periods.
This decrease is explained by the Department of the Interior (Interior) and HHS not entering into the spring 2000 Unified Agenda four regulations, pertaining to migratory bird hunting and Medicare, that they typically update each year. For significant regulations, we estimate that a higher percentage of regulations published during Presidents Bush's and Obama's transition periods appeared in the previous spring's Unified Agenda compared to President Clinton's transition period. However, we found no statistical differences between the nontransition periods combined and any of the three transition periods. (See figure 4.)

Across all three administrations, economically significant regulations published during transition periods were more often preceded by proposed regulations compared with those published during nontransition periods. (See figure 5.) We estimated that significant regulations published during Presidents Clinton's and Bush's transition periods were more often preceded by proposed regulations than significant regulations published during nontransition periods. However, we found no statistical differences between President Obama's transition period and the other transition and nontransition periods. (See figure 6.)

Nearly All Economically Significant Regulations Reported to the Public Compliance with Four Procedural Requirements, but a Quarter Did Not Comply with the Congressional Review Act

Agencies Reported to the Public that They Complied with Four Procedural Requirements for Nearly All Economically Significant Regulations and the Majority of Significant Regulations

Regulatory Flexibility Act (RFA), Paperwork Reduction Act (PRA), and the Unfunded Mandates Reform Act of 1995 (UMRA): We found that 91 percent of economically significant regulations across all periods reviewed explained to the public the determinations the agencies made regarding these three procedural requirements. Further, there was little difference between transition and nontransition periods in whether agencies provided explanations of these three procedural requirements. For the regulations that did contain explanations, agencies indicated that a larger share of economically significant regulations published during transition periods than during nontransition periods (1) would not have a significant impact on a substantial number of small entities (RFA), (2) contained information collection requirements on nonfederal entities (PRA), and (3) generally could impose federal mandates on nonfederal entities (UMRA). For significant regulations, we estimate that 64 percent across all periods reviewed provided explanations to the public of the determinations the agencies made regarding these three procedural requirements. More specific information about the determinations agencies reached is presented in appendix II.

For economically significant and significant regulations that did not contain explanations of one or more of these procedural requirements, this does not necessarily indicate noncompliance by the agency. An agency may not need to address a particular procedural requirement if the substance of the rule or exceptions and thresholds in the requirement lead the agency to determine that a specific regulation did not trigger the requirement. For example, regulations that were significant but not economically significant under E.O. 12866 would not be expected to contain a federal mandate that would result in the expenditure of $100 million or more in any 1 year, and so would not trigger the requirement for a UMRA written statement.
Small Business Regulatory Enforcement Fairness Act (SBREFA): EPA and OSHA reported holding small business review panels for 16 economically significant regulations reviewed, and we confirmed that the proceedings of all but one of these panels had been documented on the Small Business Administration's website. EPA also reported holding a small business review panel for one of the significant regulations we reviewed, and we confirmed that this proceeding also had been documented.

Over 25 Percent of Economically Significant Regulations and an Estimated 15 Percent of Significant Regulations Did Not Comply with the Congressional Review Act

CRA requires agencies to submit regulations to Congress and to us and to delay the effective date of certain regulations in order to provide Congress an opportunity to review, and possibly disapprove of, regulations before they become effective. We reviewed agencies' compliance with the requirements to (1) submit the regulation to Congress and to us, (2) provide the required delay between submission of the regulation to Congress and us and its effective date, and (3) provide the required delay between publication of the regulation and its effective date. For example, a major rule published and submitted to Congress and us on March 1 generally should not take effect before April 30, 60 days later. See figure 7 for these requirements regarding delays in effective dates.

Our analysis determined that 132 of the 527 economically significant regulations across all periods reviewed failed to meet at least one of the requirements described above, and none of these regulations included agencies claiming "good cause," which would have allowed them to make the regulation effective without the required delay. (See figure 8.) We found that noncompliance for economically significant regulations was primarily associated with agencies' failure to delay the effective date of their regulations, while the failure to submit regulations to Congress and us accounted for a smaller proportion of the deficiencies. Of the 132 noncompliant economically significant regulations:

95 did not provide the required delay between the submission of the regulation to Congress and us and the effective date. Agencies generally missed this deadline by more than 5 days (70 of 92 regulations). We also reported to Congress in 2007 that there appeared to be a broader pattern of noncompliance with this requirement, noting: "A consistent difficulty in implementing CRA has been the failure of some agencies to delay the effective date of major rules for 60 days as required by CRA."

74 did not provide the required delay between publication in the Federal Register and the effective date. Once again, agencies generally missed this deadline by more than 5 days (62 of 74 regulations).

10 had not been submitted to us as of November 13, 2017.

It is our practice to alert the relevant congressional committees when we observe either of the first two deficiencies in our major rule reports. Among the most active regulatory agencies for economically significant regulations, HHS and the Department of Transportation (Transportation) had higher rates of noncompliance than the government-wide percentages for both the transition and nontransition periods we reviewed. (See table 10 in appendix II.) However, noncompliance was not limited to HHS and Transportation; 17 of the 23 agencies that published economically significant regulations during the periods we reviewed had at least one noncompliant regulation.
As noted previously, our sample of significant regulations was not designed to provide estimates concerning individual agencies' noncompliance with CRA. Overall, however, we estimate that 15 percent of significant regulations published across all periods reviewed failed to meet at least one of the CRA requirements we reviewed. (See figure 9.) We did not identify any statistical differences in the noncompliance rate among the three transition periods and nontransition periods combined. For significant regulations, we developed estimates for the following CRA deficiencies:

Regulations submitted after the stated effective date: An estimated 15 percent of significant regulations published during all periods reviewed were not submitted to Congress and us before the stated effective date as required. Significant regulations were generally nonmajor rules, which are not subject to the requirement to delay the effective date by 60 days. There were no statistical differences among the three transition periods and the nontransition periods regarding this deficiency.

Regulations not submitted to us: An estimated 7 percent of significant regulations published during all periods reviewed had not been submitted to us as of November 17, 2017, with no statistical differences among the three transition periods and nontransition periods.

Agencies' noncompliance with CRA has the overall effect of making it more difficult for Congress to exercise its oversight role under CRA; however, the precise effects of noncompliance depend on the type of regulation and the specific deficiencies. CRA provides expedited procedures that make it easier to overturn a regulation compared to following the regular legislative process. For economically significant regulations, which are generally classified as major rules under CRA, failing to provide the required delay for congressional review means that Congress has a shorter amount of time to use these expedited procedures to disapprove the regulation before the agency potentially starts enforcement actions. Furthermore, in general, if a rule is not submitted to Congress as required by CRA, Congress cannot use these expedited procedures. Moreover, not submitting a rule to Congress can potentially create legal uncertainty for agencies and regulated parties because courts have differed on the impact of noncompliance with CRA on the enforceability of the regulation.

OIRA staff noted that CRA states that agencies are responsible for complying with the act's requirements, and E.O. 12866 states that agencies are responsible for adhering to applicable laws. However, under E.O. 12866, OIRA is also responsible for oversight of agencies' rulemaking, consistent with law, and reviews regulations before publication, which provides it an opportunity to identify and help agencies avoid potential noncompliance. OIRA staff asserted that they already take steps to check agencies' compliance with CRA. However, we found that OIRA completed its E.O. 12866 reviews for 110 of the 132 noncompliant economically significant regulations within 90 days of the stated effective date. OIRA staff noted that they cannot monitor every action agencies take following their review of draft final regulations, such as the specific date a regulation is published in the Federal Register or whether an agency submits a copy of the regulation to Congress or us.
However, because economically significant regulations are generally classified as major rules under CRA, this finding indicates that OIRA frequently completes its review close to the start of the 60-day period intended for congressional review, and in such cases the regulation is at high risk of noncompliance with CRA. This proximity gives OIRA an opportunity to identify potentially noncompliant regulations before agencies publish them and to work with agencies on actions that would avoid noncompliance. Our analysis identified such actions agencies could use to comply with CRA. For example, we found instances of agencies explaining to the public that CRA requires a 60-day review period for major rules and therefore identifying an effective date more than 2 months after publication in the Federal Register. In other instances, agencies stated that the regulation would take effect 60 days after publication in the Federal Register, which ensures compliance with CRA provided that the regulation is submitted to Congress and us on or before the day it is published. In other cases, agencies stated they had "good cause" not to delay the effective date, such as a statutory or judicial deadline or an emergency situation.

Variations Existed between Transition and Nontransition Periods in Agencies' Anticipated Types of Economic Effects for Economically Significant Regulations

Agencies anticipated that economically significant regulations published during transition periods were more likely to result in economic costs and benefits and generally less likely to result in "transfers" of income from taxpayers to program beneficiaries. To identify the types of economic effects that agencies anticipated, we placed the 527 economically significant regulations reviewed across all periods into one of four categories based on information agencies provided in the published regulation concerning the anticipated costs, benefits, or transfers resulting from the regulation:

Expected economic costs, benefits, or both: For 197 of the 527 economically significant regulations (or 37 percent), agencies expected costs or benefits or both to result and made no mention of transfers. Our previous work has noted that regulations typically require a desired action or prohibit certain actions by regulated parties. Such requirements may impose costs on private-sector parties, such as businesses and individuals, and may also provide benefits to society as a whole. Examples we reviewed included EPA regulations limiting emissions from industrial facilities with the goal of improving air quality and Labor Department regulations intended to improve workplace safety.

Transfers: For 184 of the 527 economically significant regulations (or 35 percent), agencies expected transfers to result from the regulation and made no mention of either costs or benefits. Examples we reviewed included HHS regulations stating how much Medicare will reimburse Medicare providers and Department of Agriculture regulations providing disaster assistance to farmers. While these payments increase the incomes of Medicare providers and farmers, Circular A-4 directs agencies to avoid misclassifying these transfers as economic costs or benefits because they do not change aggregate social welfare.

Combination of economic costs, benefits, or transfers: For 108 of the 527 economically significant regulations (or 20 percent), agencies expected costs or benefits or both to occur and also expected transfers to occur.
Examples we reviewed included regulations that expanded access to health insurance for tribal employees and established paid sick leave for federal contractors, both of which were anticipated to result in both administrative costs and transfers.

No economic analysis: The remaining 38 of the 527 economically significant regulations (or 7 percent) provided no economic analysis. Of these regulations, 22 were updates to migratory bird hunting regulations that Interior published during President Clinton's administration and President Bush's first term. During the 2003-2004 nontransition period of President Bush's administration, Interior began providing a brief summary of the economic effects anticipated to result from hunting these birds.

Comparing these reported effects between transition and nontransition periods, we found that agencies indicated that economically significant regulations published during transition periods were more likely to result in costs and benefits to society than those published during nontransition periods across all three administrations. (See figure 10.) In contrast, regulations involving only transfers became a smaller proportion of the economically significant regulations published during Presidents Bush's and Obama's transition periods. Regulations that involved various combinations of costs, benefits, and transfers became a larger proportion of regulations published during Presidents Bush's and Obama's transition periods and overall became a larger proportion of economically significant regulatory activity during President Obama's transition period.

Executive guidance encourages agencies to quantify and monetize expected costs and benefits to help decision makers understand the consequences of regulatory approaches. E.O. 12866 states that for economically significant regulations agencies should analyze costs and benefits to the extent feasible, and Circular A-4 encourages agencies, to the extent possible, to provide monetized estimates of these costs and benefits. For economically significant regulations, we found that agencies were more likely to monetize anticipated costs and transfers compared to benefits and were more likely to monetize anticipated costs during Presidents Clinton's and Bush's transition periods. (See figures 11-13.) For economically significant regulations, we also did additional analysis of the extent to which agencies anticipated the benefits would justify the costs and the extent to which net costs or benefits were calculated. (See appendix II.)

In examining the extent to which agencies anticipated that costs, benefits, and transfers would result from significant regulations, we found that an estimated 57 percent across all periods reviewed provided information on the anticipated costs, benefits, transfers, or some combination of these, with no statistical differences among the three transition periods and the nontransition periods combined. The remaining estimated 43 percent of significant regulations across all periods reviewed did not include any information on anticipated costs, benefits, or transfers, again with no statistical differences among the three transition periods and the nontransition periods combined.
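Conceptually, the four-category scheme applied to the 527 economically significant regulations reduces to a simple decision rule over two indicators coded from each regulation's published analysis. The following is a minimal sketch of that rule in Python; the function and field names are ours, not GAO's, and treating "no economic analysis" as the absence of both indicators simplifies how GAO actually coded the regulations.

```python
# A minimal sketch of the four-category assignment, using two indicators
# coded from each regulation's published economic analysis.
def effect_category(has_costs_or_benefits: bool, has_transfers: bool) -> str:
    if has_costs_or_benefits and has_transfers:
        return "combination of costs, benefits, and transfers"
    if has_costs_or_benefits:
        return "economic costs, benefits, or both"
    if has_transfers:
        return "transfers"
    return "no economic analysis"

# The 527 economically significant regulations split 197 / 184 / 108 / 38
# across these categories (roughly 37, 35, 20, and 7 percent).
assert effect_category(True, False) == "economic costs, benefits, or both"
assert effect_category(False, True) == "transfers"
```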
Conclusions

Although we confirmed that agencies published a larger number of regulations during transition periods than during the same months in nontransition periods, the variety of other indicators we examined generally suggests that there were few significant differences, other than their numbers, when comparing regulations published during the three transitions to each other and to those published during nontransition periods. Among the few exceptions, economically significant regulations published during the transition periods were more likely to have provided advance notice to the public and more likely to result in private-sector costs and potential benefits to society. However, agencies' noncompliance with the requirements of CRA for economically significant regulations (major rules under CRA) grew worse over time. Under CRA, agencies must allow additional time for Congress to review these most impactful regulations before they take effect unless the agency claims good cause for not delaying the effective date.

Our review did highlight a potential opportunity for OIRA to work with agencies to improve CRA compliance going forward. Specifically, OIRA staff have the unique opportunity to work with agencies before economically significant regulations and regulations deemed significant for other reasons are published in final form in the Federal Register. OIRA staff should use this opportunity to identify economically significant regulations whose planned effective dates appear at risk of not providing Congress with sufficient time to review the regulation. To do this, our analysis points to a simple "rule of thumb" OIRA reviewers could use: if an agency is planning to make an economically significant regulation effective less than 3 months from the time OIRA is completing its review, OIRA staff should discuss with agency officials strategies for ensuring compliance with CRA. These could include delaying the planned effective date, stating in the submission to the Federal Register that the regulation will go into effect 60 days after publication and ensuring prompt submission to Congress and us, or discussing whether the agency has a reasonable basis to claim "good cause" for not delaying the effective date and ensuring that the use of "good cause" is clearly explained in the regulation. Ensuring that agencies consistently provide Congress with the required time to review, and possibly disapprove, regulations is important throughout a President's term, and particularly following a presidential transition, when Congress typically has a larger number of regulations to potentially review.

Recommendation for Executive Action

We are making the following recommendation to the Director of OMB: The Director of OMB should ensure that OIRA's staff, as part of the regulatory review process, examine the planned timeframes for implementing economically significant regulations or major rules, identify regulations that appear at potential risk of not complying with the Congressional Review Act's delay requirements, and then work with the agencies to ensure compliance with these requirements. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report to the Director of OMB on January 18, 2018. In oral comments received on February 22, 2018, staff from OIRA and the Office of General Counsel discussed the findings, conclusions, and recommendation. OMB staff did not agree or disagree with our recommendation.
However, they identified some concerns regarding the recommendation to improve agencies' compliance with CRA. They noted that (1) CRA states that agencies are responsible for complying with the act's delay and submission requirements; (2) agencies determine when their regulations will take effect and when they submit the regulations to Congress and us, neither of which OMB directly controls; and (3) where OMB does exercise authority, in the regulatory review process under E.O. 12866, OIRA staff already take steps to check agencies' compliance with CRA, and they do not see what more they could do to improve agencies' compliance with the act. The staff also provided technical comments that were incorporated as appropriate.

Regarding the first two concerns raised by OIRA staff, we believe our report sufficiently recognizes agencies' responsibilities under CRA. Regarding the third concern, we disagree that OMB has done all that it can to improve compliance with CRA. As noted above, OMB staff asserted that they take steps to check for CRA compliance, and these checks could provide a starting point for OMB to address our recommendation. However, our analysis raises questions about how effective these checks have been. OIRA completed its review for 110 of the 132 noncompliant economically significant regulations within 90 days of the stated effective date. This analysis points to a simple "rule of thumb" for OIRA reviewers to use: if a regulation's planned effective date is less than 90 days away when OIRA completes its review, the regulation is at high risk of noncompliance with CRA. Further, our report identifies three specific strategies OIRA staff could discuss with agency officials on how to comply with CRA. Thus, we believe that our report shows that OMB could do more to ensure CRA compliance and identifies specific ways OMB could help agencies accomplish this.

We are sending copies of this report to the Director of OMB as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Section 5 of the Edward "Ted" Kaufman and Michael Leavitt Presidential Transitions Improvements Act of 2015 includes a provision for us to assess final significant regulatory actions promulgated by executive departments during specified presidential transition periods and to analyze and compare multiple characteristics of regulations issued during these transition periods to each other and to regulations issued during the same 120-day period (September 23 to January 20) in nontransition years since 1996. The transition periods identified in the act are those ending on January 20 in 2001, 2009, and 2017, which occurred at the end of the administrations of Presidents Clinton, Bush, and Obama. For purposes of this review, executive agencies are cabinet departments and other agencies that answer directly to the President, excluding the independent regulatory agencies. The definition of what the mandate refers to as a "covered regulation" is the same as the definition of a final significant regulatory action under Executive Order (E.O.) 12866.
Under E.O. 12866, the Office of Management and Budget's (OMB) Office of Information and Regulatory Affairs (OIRA) reviews significant proposed and final regulatory actions from all federal agencies (other than independent regulatory agencies) before they are published in the Federal Register. The order defines significant regulatory actions as those that are likely to result in a regulation that may:

1. Have an annual effect on the economy of $100 million or more or adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local, or tribal governments or communities (generally referred to as "economically significant" regulations);
2. Create a serious inconsistency or otherwise interfere with an action taken or planned by another agency;
3. Materially alter the budgetary impact of entitlements, grants, user fees, or loan programs or the rights and obligations of recipients thereof; or
4. Raise novel legal or policy issues arising out of legal mandates, the President's priorities, or the principles set forth in the executive order.

For each of the three transition periods, and among these transition periods and the same 120-day periods in the 18 nontransition periods, our objectives were to assess the extent to which there were variations in:

1. the number of regulations and other indicators related to the scope and transparency of these regulations;
2. agencies' reported compliance with procedural requirements for promulgating the regulations; and
3. the anticipated economic effects agencies reported would result from the regulations.

In general, to address each of these objectives we reviewed the universe of all 527 final economically significant regulations published during the specified time periods and a generalizable stratified random sample of 358 final significant regulations from the population of the 1,633 final significant regulations published during those same periods. For economically significant regulations, we can provide precise statistics on the extent of a finding, because we reviewed the universe of final economically significant regulations. For significant regulations, our findings are based on a sample designed to achieve a 7 percent margin of error and 95 percent level of confidence for each stratum in the population of all covered significant regulations published in each transition period and, collectively, all nontransition periods. Our findings for the sample are not generalizable to the individual agencies that published those regulations. We divided the significant regulations into four strata depending on when the regulation was published: (1) the 2000-2001 transition period; (2) the 2008-2009 transition period; (3) the 2016-2017 transition period; and (4) all the nontransition periods consolidated into one stratum. We made two modifications to the data for each stratum before we selected our sample: (1) we added to the sampling frames additional significant regulations that we had become aware of during our review of economically significant regulations, and (2) we reviewed the sampling frames and filtered out duplicate entries. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn.
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (for example, plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Table 3 summarizes the population and sample size by stratum for significant regulations.

We primarily relied on the Reginfo.gov database on OMB's regulatory reviews under E.O. 12866 to compile lists of final economically significant and significant regulations published during each of the transition and nontransition periods. As described in more detail below, we refined and supplemented the lists from the Reginfo.gov database with information from our database of rules submitted to us under the Congressional Review Act (CRA) and the Government Printing Office's Federal Digital System database on the Federal Register. To test the reliability of these databases, we reviewed relevant documentation, interviewed knowledgeable agency officials, looked for missing data and outliers (for example, by identifying missing records or those included in error), traced a sample of entries to source documents, and conducted additional checks. We concluded that the data were sufficiently reliable for our purposes.

Further, for all objectives and for both economically significant and significant regulations, our primary source was the text of the published regulation. However, as described below, we sometimes supplemented that information with information from other publicly available sources. We downloaded copies of published regulations from the website maintained by the Government Printing Office, which securely controls content to ensure the integrity and authenticity of the Federal Register. We used a data collection instrument to collect standardized information about individual regulations, as described below. We did not evaluate the agencies' decisions regarding procedural requirements or their determinations regarding the effects of their rules. Instead, consistent with our practice in preparing major rule reports to Congress under CRA and prior reports on federal rulemaking, we are providing information about what the agencies published in the Federal Register.

To assess the number of regulations and other variations related to the scope and transparency of these regulations, we first reviewed and refined our lists of economically significant and significant regulations published during each of the transition and nontransition periods. For economically significant regulations, we compared the initial lists compiled from Reginfo.gov against lists of major rules agencies had submitted to us under CRA to look for potential omissions. We then reviewed each published regulation to identify any explanation the agency provided of the regulation's classification as economically significant under E.O. 12866 and to tally the total numbers of economically significant regulations published during each of the time periods and the agencies publishing them. To identify economically significant regulations published annually, we looked for indications in the title or summary of the regulation and confirmed that these regulations appeared in multiple time periods reviewed. For significant regulations, we obtained data from Reginfo.gov concerning the number of regulations reportedly published and the agencies reported to have published them.
We also reviewed the published regulations for explanations of the regulations' classification under E.O. 12866. Our sample of significant regulations was not designed to make estimates for individual agencies, so we used data from Reginfo.gov instead. For both economically significant and significant final regulations, we compiled information on the rulemaking procedures used by agencies to determine whether the agencies had published a prior notice of proposed rulemaking (NPRM). We did this by looking for discussion of a proposed regulation in the published final regulation. As necessary, we supplemented that review with information from our major rule reports, if available, and data from Reginfo.gov concerning the rulemaking history. To describe the extent to which regulations had been advertised in the previous spring's Unified Agenda, we searched for the regulation's identification number(s) in the online database for the Unified Agenda.

To assess the extent to which there were variations in agencies' reported compliance with procedural requirements for promulgating the regulations, we reviewed the published text of the regulations and, for regulations that were also major rules, the major rule reports that we prepared for Congress under CRA. We reviewed agencies' reported compliance with procedural requirements for promulgating regulations under five statutes: CRA, the Regulatory Flexibility Act (RFA), the Paperwork Reduction Act (PRA), the Unfunded Mandates Reform Act of 1995 (UMRA), and the Small Business Regulatory Enforcement Fairness Act (SBREFA). This included whether and, if so, how the agency addressed each requirement in the published regulation. To determine whether the Environmental Protection Agency and the Occupational Safety and Health Administration held the panels they were required to hold under SBREFA, we also reviewed the information on the Small Business Administration's website summarizing these panels.

We took multiple steps to identify noncompliance with CRA. We first determined whether every regulation had been submitted to us and, for regulations that had been submitted, we recorded the date we received it. We used that date when assessing whether a regulation's stated effective date was consistent with CRA requirements. We also reviewed whether agencies had claimed "good cause" for not delaying the effective date. For regulations not submitted to us, or submitted to us later than they should have been, we conducted additional checks of the Congressional Record to see if we could find evidence that the agency had provided a copy of the regulation to either House of Congress in time for the regulation's stated effective date to be consistent with CRA requirements. If we could find evidence that any of these requirements had been met, we removed the regulation from further consideration as potentially noncompliant. As such, our methodology was designed to identify instances of noncompliance; it does not allow us to conclude that the remaining regulations were fully compliant. In addition, it was beyond the scope of our review to evaluate the appropriateness of agencies claiming "good cause" for not providing the required delay.
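The date comparisons at the heart of these checks are simple calendar arithmetic. The sketch below illustrates them in Python under our own assumptions: the function names and the 90-day screening heuristic (drawn from the report's "rule of thumb" for OIRA reviewers) are ours, and the sketch omits the good-cause and Congressional Record checks described above.

```python
from datetime import date, timedelta
from typing import Optional

CRA_DELAY = timedelta(days=60)    # delay CRA requires for major rules
RISK_WINDOW = timedelta(days=90)  # the report's screening "rule of thumb"

def cra_deficiencies(published: date, effective: date,
                     submitted: Optional[date]) -> list:
    """Return the CRA deficiencies suggested by a major rule's key dates.

    A simplified sketch of the checks described above; it does not model
    "good cause" claims or Congressional Record evidence, which were
    reviewed separately.
    """
    problems = []
    if submitted is None:
        problems.append("not submitted to Congress and GAO")
    elif effective < submitted + CRA_DELAY:
        problems.append("effective date < 60 days after submission")
    if effective < published + CRA_DELAY:
        problems.append("effective date < 60 days after publication")
    return problems

def at_risk_when_oira_review_ends(review_end: date, effective: date) -> bool:
    """Flag a rule whose planned effective date falls within 90 days of the
    end of OIRA's review, the screening heuristic the report suggests."""
    return effective - review_end < RISK_WINDOW

# Hypothetical example: published and submitted March 1, effective April 1.
# Only 31 days of delay were provided, so both delay checks fail.
print(cra_deficiencies(date(2017, 3, 1), date(2017, 4, 1), date(2017, 3, 1)))
```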
To assess the extent to which there were variations in agencies' reported anticipated economic effects resulting from the regulations, we reviewed the published regulations to see whether they contained a section clearly identified as economic analysis or a discussion of the analytical requirements of E.O. 12866. We used selected elements from OMB Circular A-4 to review the analyses included in the published regulations to identify expected costs, benefits, or transfers and whether that information was provided in monetary, quantitative, or qualitative terms. To help identify regulations that involved transfers, we also reviewed the annual reports OMB prepares for Congress on the costs and benefits of federal regulations. OMB includes in these reports a list of transfer regulations and has used a consistent definition over time. We also looked for indications in the published regulation's economic analysis that the regulation involved transfers, such as federal payments to certain groups in society (for example, Medicare recipients), subsidies for certain economic activities, or user fees or royalties people pay the government, to name several common examples. To determine the extent to which agencies discussed whether they expected that the benefits would justify the costs, we looked for "bottom line" or other concluding statements agencies may have provided in their economic analysis. We also looked, when relevant, for a discussion of what the net benefits or costs were expected to be. For transfer regulations that were economically significant, we examined the extent to which agencies quantified or monetized the expected transfers. If available, we used accounting statements agencies may have prepared summarizing the anticipated economic effects to help collect all of this information. We did not assess whether the agencies' determinations regarding the benefits and costs were reasonable. In addition, we did not assess whether the agencies analyzed regulatory alternatives and uncertainty.

We conducted this performance audit from May 2016 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Additional GAO Analysis of Final Regulations Published During Specified Periods, 1996-2017

The figures and tables in this appendix provide more detailed information on the results of additional analyses we completed for this report related to each of our three objectives. For economically significant regulations, we provide precise statistics on the extent of a finding, because we reviewed the universe. For significant regulations, our findings are based on a sample and include the upper and lower bounds of confidence intervals for estimated values.
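For reference, a standard textbook expression for the margin of error of an estimated proportion from a stratum sampled without replacement is shown below. The illustrative stratum sizes are hypothetical, not the values from table 3, and GAO's actual estimator may differ.

```latex
% Margin of error for an estimated proportion \hat{p} from a simple random
% sample of n regulations drawn without replacement from a stratum of N
% (finite population correction included):
\[
  \mathrm{MOE} = z_{\alpha/2}\,
  \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}\cdot\frac{N-n}{N-1}}
\]
% Illustration with hypothetical values N = 400, n = 120, \hat{p} = 0.5,
% and z_{0.025} = 1.96:
%   MOE = 1.96 * sqrt(0.25/120 * 280/399), approximately 0.075,
% roughly the 7-percentage-point design target at 95 percent confidence.
```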
Analyses of Indicators Related to the Numbers, Scope, and Transparency of Regulations

In this section, we provide additional information from our analyses of:

the extent to which economically significant regulations were published before or after the presidential elections in 2000, 2008, and 2016;

the most active rulemaking agencies for economically significant and significant regulations among the three administrations' transition and nontransition periods;

the number of economically significant regulations for which agencies reported they were under a statutory or judicial deadline to promulgate the regulation; and

the median length, in days, of Office of Information and Regulatory Affairs (OIRA) regulatory reviews under Executive Order (E.O.) 12866 for draft final economically significant and significant regulations during transition and nontransition periods.

We reviewed the extent to which economically significant regulations were published before or after the presidential elections in 2000, 2008, and 2016 and found that Presidents Clinton's and Obama's administrations increased their rate of rulemaking following the election, while President Bush's administration decreased its rate of rulemaking. (See figure 14.) We identified the most active rulemaking agencies for economically significant regulations among the three administrations' transition and nontransition periods and did the same for significant regulations. (See tables 4-9.)

Agencies can indicate on Reginfo.gov whether they are required by a statutory or judicial deadline to promulgate a regulation. We did additional analysis for economically significant regulations and found agencies were less likely to indicate they were under such a deadline during the three administrations' transition periods compared to nontransition periods.

Under E.O. 12866, agencies are expected to submit regulations deemed significant to OIRA for review. Nearly all regulations we reviewed had been reviewed by OIRA. For a small number of economically significant regulations (13 across all periods, or approximately 2 percent of the economically significant population), we could not find evidence on Reginfo.gov that OIRA reviewed the regulation. However, the absence of evidence on Reginfo.gov does not necessarily mean that OIRA did not review those regulations and may instead indicate that the review dates were not entered into Reginfo.gov. Our review found that the median length of OIRA's review increased for economically significant regulations during each transition. (See figure 15.) For significant regulations, there were no statistical differences among the three transition periods and compared to nontransition periods combined. (See figure 16.)

Analyses of Indicators Related to Agencies' Reported Compliance with Selected Procedural Requirements for Promulgating Regulations

In this section, we provide additional information from our analyses of:

agencies' determinations regarding their regulations under the Regulatory Flexibility Act (RFA);

agencies' determinations regarding their regulations under the Paperwork Reduction Act (PRA);

agencies' determinations regarding their regulations under the Unfunded Mandates Reform Act of 1995 (UMRA); and

Congressional Review Act (CRA) noncompliance rates for the agencies publishing the largest number of regulations.

We reviewed agencies' discussions of three procedural requirements (RFA, PRA, and UMRA) for economically significant regulations. (Figures 17-19 summarize the determinations agencies reached.)
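The "statistical differences" reported throughout this appendix are comparisons of estimated proportions between periods. As a simplified illustration only, such a comparison can be run as a two-sample proportion test; the counts below are hypothetical, and GAO's actual procedure would also reflect the stratified design and finite-population corrections, which this sketch ignores.

```python
# A simplified two-sample test of whether the share of regulations with a
# given property differs between one transition period and the combined
# nontransition periods. Counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

with_property = [34, 180]  # regulations with the property in each group
reviewed = [60, 400]       # regulations reviewed in each group

stat, p_value = proportions_ztest(count=with_property, nobs=reviewed)
if p_value < 0.05:
    print(f"statistical difference detected (p = {p_value:.3f})")
else:
    print(f"no statistical difference detected (p = {p_value:.3f})")
```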
We reviewed agencies’ discussions of three procedural requirements– RFA, PRA, and UMRA–for significant regulations. Figures 20-22 summarize the determinations agencies reached. We found the following statistical differences in comparing the determinations agencies reached for significant regulations: RFA: There were no statistical differences among the three transition periods and nontransition periods in the determination that regulations might have a significant economic impact on a substantial number of small entities. Regulations published during President Clinton’s transition period were less likely than regulations published during President Bush’s transition period and nontransition periods to determine that the regulation would not have a significant economic impact on a substantial number of small entities. There were no statistical differences between Presidents Clinton’s and Obama’s transition periods for this determination under RFA. We also found statistical differences in the remaining two categories–regulations not subject to RFA and those not discussing RFA. PRA: Significant regulations published during Presidents Obama’s and Clinton’s transition periods more frequently contained information collection requirements covered by PRA compared to nontransition periods. In addition, significant regulations published during President Clinton’s transition period more frequently contained information collections requirements compared to President Bush’s transition period. There were no other statistical differences in significant regulations containing information collection requirements. For the other categories, there were no statistical differences, except that significant regulations published during nontransition periods were less likely to discuss PRA than those published during President Obama’s transition period. UMRA: There were no statistical differences among the transition and nontransition periods in potential federal mandates covered by UMRA. We examined the CRA noncompliance rates for the agencies publishing the largest number of economically significant regulations. (See table 10). Analyses of Indicators Related to the Anticipated Economic Effects Agencies Reported would Result from the Regulations In this section, we provide additional information from our analyses of the extent to which: agencies indicated benefits justified costs for economically significant agencies estimated net costs or benefits for economically significant agencies anticipated costs, benefits, or transfers resulting from significant regulations. We examined additional indicators related to the economic analyses that E.O. 12866 and Circular A-4 encourage agencies to conduct when promulgating regulations. E.O. 12866 states that an agency should propose or adopt a regulation only upon a reasoned determination that the benefits of the intended regulation justify its costs. We examined the extent to which agencies indicated that the anticipated benefits from economically significant regulations would justify their costs and found that agencies during Presidents Clinton’s and Obama’s transition periods were more likely to indicate that benefits justified costs compared to these administrations’ nontransition periods. (See figure 23.) During President Bush’s transition period, agencies were less likely to indicate that the anticipated benefits of the regulation would justify its anticipated costs. 
We did not extend this analysis to significant regulations because the examples were too limited to provide statistically reliable estimates for the three transition periods and nontransition periods combined.

Monetizing both costs and benefits potentially allows an agency to calculate the net costs or benefits of a regulation and thus estimate how much better or worse off society will be as a result of the chosen regulatory approach. We found that agencies during Presidents Bush's and Obama's administrations, during both transition and nontransition periods, were more likely to calculate net costs or benefits than agencies during President Clinton's transition and nontransition periods. (See figure 24.) For the same reason as above, we did not extend this analysis to significant regulations.

For significant regulations that did identify anticipated costs, benefits, or transfers, we found the following statistical differences in comparing the three transition periods and nontransition periods combined, as explained below and in figure 25:

Economic costs or benefits or both: For regulations falling into this category, the only statistical difference we found was that agencies were more likely during President Clinton's transition period to identify anticipated economic costs or benefits or both compared to President Bush's transition period.

Transfers: For regulations falling into this category, the only statistical difference we found was that agencies were less likely during President Obama's transition period to identify anticipated transfers compared to President Bush's transition period and all three administrations' nontransition periods combined.

Both economic costs or benefits and transfers: For regulations falling into this category, the only statistical difference we found was that agencies were less likely during President Clinton's transition period to indicate this combination compared to President Bush's transition period.

No economic analysis: An estimated 43 percent of significant regulations across all periods reviewed contained no economic analysis, and there were no statistical differences among the three transition periods reviewed and the nontransition periods combined.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Tim Bober (Assistant Director), Michael O'Neill (Analyst in Charge), Carl Barden, Tim Guinane, Krista Loose, Ned Malone, Alexander Ray, Cynthia Saunders, Christie Stassel, and Andrew J. Stephens made key contributions to this report. Donna Miller, John Hussey, Steven Flint, and Rob Letzler also contributed.
Why GAO Did This Study

The Presidential Transitions Improvements Act of 2015 includes a provision for GAO to assess multiple characteristics of final significant regulatory actions promulgated by executive departments during presidential transition periods (September 23 through January 20) at the end of Presidents Clinton's, Bush's, and Obama's administrations and compare them to each other and to regulations issued during the same 120-day period in nontransition years since 1996. Among other objectives, GAO assessed the extent to which there was variation in (1) the number of regulations, their scope, and other indicators, and (2) agencies' reported compliance with procedural requirements for promulgating the regulations. To address these objectives, GAO reviewed the text of the regulations published in the Federal Register: the universe of all 527 economically significant final regulations (generally those with an annual effect of $100 million or more) published during the specified transition and nontransition periods and a generalizable stratified random sample of 358 of the 1,633 significant final regulations published during the same time periods.

What GAO Found

During transition periods at the end of presidential administrations, agencies published more final regulations and more frequently provided advance notice to the public on those regulations compared to nontransition periods. The Clinton, Bush, and Obama administrations published on average roughly 2.5 times more economically significant regulations during transition periods than during nontransition periods. At the same time, agencies more often provided the public an opportunity to influence the development of transition-period regulations, relative to nontransition periods, by giving advance notice of their issuance and opportunities to comment on proposed regulations before they were finalized.

In their published regulations, agencies reported high compliance with four of five procedural requirements during both transition and nontransition periods, but not with the Congressional Review Act (CRA). During all periods, agencies reported complying with requirements, such as the Regulatory Flexibility Act, for nearly all economically significant regulations and the majority of significant regulations. Agencies less often complied with one or more CRA requirements. (See figure.) Though agencies are responsible for complying with CRA, the Office of Management and Budget (OMB) is responsible for oversight of agencies' rulemaking, consistent with law, and reviews regulations before publication, which provides an opportunity to identify and help agencies avoid potential noncompliance. The most common CRA deficiency was agencies' failure to provide Congress the required time to review and possibly disapprove regulations, which GAO has also identified as a deficiency in previous work. Economically significant regulations for which OMB completed its review within 3 months before the planned effective date were at high risk of not complying with CRA, increasing the risk that agencies would not provide Congress with the required time for its reviews.

Economically Significant Regulations Determined to be Noncompliant with the Congressional Review Act

What GAO Recommends

GAO recommends that OMB, as part of its regulatory review process, identify economically significant regulations at potential risk of not complying with CRA and work with agencies to ensure compliance.
OMB staff did not agree or disagree with the recommendation.
GAO-19-220
Background

State is the lead agency involved in implementing American foreign policy and representing the United States abroad. According to State and USAID's joint strategic plan for fiscal years 2018 through 2022, State's goals are to (1) protect America's security at home and abroad, (2) renew America's competitive advantage for sustained economic growth and job creation, (3) promote American leadership through balanced engagement, and (4) ensure effectiveness and accountability to the American taxpayer.

State's Foreign Service employees serve in a variety of functions at overseas posts as either generalists or specialists. Foreign Service generalists help formulate and implement U.S. foreign policy and are assigned to work in one of five career tracks: consular, economic, management, political, or public diplomacy. Generalists at overseas posts collect information and engage with foreign governments and citizens of foreign countries and report the results of these interactions back to State headquarters in Washington, D.C., among other functions. Foreign Service specialists abroad support and maintain the functioning of overseas posts and serve in one of 25 different skill groups, in positions such as security officer or information management specialist. Specialists at overseas posts play a critical role in ensuring the security and maintenance of the posts' facilities, computer networks, and supplies, as well as the protection of post staff, their family members, and local staff, among other functions.

State may require Foreign Service employees to be available for service anywhere in the world, as needed, and State has the authority to direct Foreign Service employees to any of its posts overseas or to its headquarters in Washington, D.C. However, as noted in our 2012 report, State generally does not use this authority, preferring other means of filling high-priority positions, according to State officials. The process of assigning Foreign Service employees to their positions typically begins when they receive a list of upcoming vacancies for which they may compete. Foreign Service employees then submit a list of positions for which they would like to be considered, known as bids, to the Office of Career Development and Assignments and consult with their career development officer. The process varies depending on an officer's grade and functional specialty, and State uses a variety of incentives to encourage Foreign Service employees to bid on difficult-to-fill posts.

State groups countries of the world, and corresponding U.S. overseas posts in these countries, into areas of responsibility under six geographic regional bureaus:

Bureau of African Affairs
Bureau of East Asian and Pacific Affairs
Bureau of European and Eurasian Affairs
Bureau of Near Eastern Affairs
Bureau of South and Central Asian Affairs
Bureau of Western Hemisphere Affairs

Overseas posts report to State headquarters through their respective regional bureaus. For example, because the Bureau of African Affairs has responsibility for developing and managing U.S. policy concerning parts of the African continent, U.S. overseas posts in Nigeria report through the bureau to State headquarters. According to State officials, State maintains personnel data on State employees in its GEMS database. GEMS includes information on Foreign Service and Civil Service positions; in particular, it shows the total number of authorized Foreign Service positions at State and whether each position is currently filled or vacant.
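The vacancy statistics reported below are simple ratios of vacant to authorized positions in GEMS-style records. A minimal sketch, assuming a hypothetical record layout (the report describes GEMS only as showing whether each authorized position is filled or vacant):

```python
# A minimal sketch of computing vacancy rates from GEMS-style position
# records. The record layout and field names here are hypothetical.
from collections import defaultdict

positions = [
    {"skill_group": "security officer", "filled": True},
    {"skill_group": "security officer", "filled": False},
    {"skill_group": "information management", "filled": True},
]

totals = defaultdict(int)
vacant = defaultdict(int)
for p in positions:
    totals[p["skill_group"]] += 1
    if not p["filled"]:
        vacant[p["skill_group"]] += 1

for group, total in totals.items():
    rate = 100 * vacant[group] / total
    print(f"{group}: {rate:.0f} percent vacant ({vacant[group]} of {total})")
```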
As displayed in figure 1, the GEMS data show that the majority of Foreign Service employees (73 percent) work in positions at overseas posts. However, some Foreign Service staff (27 percent) are assigned to positions in the United States, where they may complete required language or other training, serve as desk officers for the regional bureaus, or work in other functions at State headquarters. Overseas Foreign Service Vacancies Have Persisted over Time While Overseas Foreign Service Staffing Has Increased, Staffing Gaps Persist According to State data, the number of both staffed and vacant overseas Foreign Service positions increased between 2008 and 2018. As shown in figure 2, the number of positions staffed grew from 6,979 in 2008 to 8,574 in 2018—a more than 20 percent increase. Despite the increase in the number of positions staffed, our analysis found that as of March 31, 2018, overall, 13 percent of State’s overseas Foreign Service positions were vacant. This vacancy rate is similar to the rates we reported in 2012 and 2008. In 2012, we reported that 14 percent of State’s overseas Foreign Service positions were vacant as of October 31, 2011, and we reported that the same percentage of overseas Foreign Service positions—14 percent—were vacant as of September 30, 2008. According to State officials, State’s ability to hire Foreign Service employees to fill persistent vacancies has been affected by factors such as reduced appropriations. For instance, according to State officials and State’s Five Year Workforce Plan, because of funding cuts enacted in fiscal year 2013, State could hire only one employee for every two leaving the Foreign Service. From fiscal years 2014 to 2016, funding from State’s annual appropriations supported hiring to replace Foreign Service employees projected to leave the agency, according to State officials. These officials indicated, however, that Foreign Service hiring was again constrained, from January 2017 through May 2018, by a hiring freeze. As a result, State hired below the levels required to replace the full projected attrition of Foreign Service employees. State’s Data Show Higher Vacancy Rates in Foreign Service Specialist Positions Compared to Foreign Service Generalist Positions While State’s data show persistent vacancies in both generalist and specialist positions at overseas posts, specialist positions remain vacant at a higher rate. State’s data show that 12 percent (680 of 5,660) of overseas Foreign Service generalist positions were vacant as of March 31, 2018, a slight decrease from the 14 percent of overseas Foreign Service generalist positions that we reported vacant in 2012. State’s data also show that 14.2 percent (594 of 4,188) of all overseas Foreign Service specialist positions were vacant, close to the 14.8 percent vacancy rate that we reported in 2012. Foreign Service Generalists State’s data show persistent vacancies in Foreign Service generalist positions responsible for analysis, engagement, and reporting at overseas posts. As shown in table 1, among Foreign Service generalist career tracks, the political, economic, and “other” tracks had the largest percentages of vacant positions, with, respectively, 20 percent, 16 percent, and 14 percent of positions vacant as of March 31, 2018. Our 2012 report noted vacancies in the same three career tracks.
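The percentages above are simple vacant-to-total ratios, which readers can reproduce directly from the position counts cited in this report. The short Python sketch below recomputes the generalist and specialist rates; the dictionary layout and helper function are illustrative stand-ins, not State's GEMS schema.

# Illustrative recomputation of the vacancy rates cited above.
# Counts are from this report (as of March 31, 2018); the data layout
# and helper function are hypothetical, not State's actual schema.
def vacancy_rate(vacant, total):
    """Return the percentage of positions that are vacant."""
    if total <= 0:
        raise ValueError("total positions must be positive")
    return 100.0 * vacant / total

overseas_positions = {
    "generalist": (680, 5660),  # (vacant, total)
    "specialist": (594, 4188),
}

for category, (vacant, total) in overseas_positions.items():
    print(f"{category}: {vacancy_rate(vacant, total):.1f} percent vacant")
# generalist: 12.0 percent vacant
# specialist: 14.2 percent vacant

Rounded to one decimal place, the output matches the 12 percent and 14.2 percent figures reported above.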
Political officers at overseas posts are responsible for collecting and analyzing information on political events, engaging with foreign governments, and reporting back to State headquarters. Economic officers at overseas posts work with foreign governments and other U.S. agencies on technology, science, economic, trade, and environmental issues. The “other” generalist career track includes positions designated as “Executive” or “International Relations,” which, according to State officials, may be filled by generalists from any of State’s five career tracks. State’s data show persistent vacancies in Foreign Service specialist positions that support and maintain the functioning of overseas posts. Among the 10 largest Foreign Service specialist skill groups, security officer, office management specialist, and information management had the largest percentages of vacant positions. As shown in figure 3, in these three groups, respectively, 16 percent, 16 percent, and 14 percent of positions were vacant. The vacancies in these three specialist skill groups are persistent; in 2012, we reported that the same three groups had the largest numbers of vacant positions. Security officers are typically responsible for responding to various threats to the physical security of overseas posts and for ensuring the protection of post staff, their family members, and local staff. Office management specialists provide professional management and administrative support. Information management staff are typically responsible for maintaining and ensuring the security of State’s computer networks and communications systems at overseas posts. State Faces Challenges Recruiting Personnel to Fill Some Foreign Service Specialist Positions That Often Require Specialized Skills and Competencies State officials said that State has had difficulty recruiting and hiring Foreign Service employees to fill specialist positions in some skill groups at overseas posts. According to State officials and staff at overseas posts, some vacant specialist positions are more difficult to fill than others because candidates for these positions must often possess skills in fields such as medicine or information technology that tend to be highly sought after in the private sector. According to staff at overseas posts, it is not uncommon for specialist candidates in these fields to choose higher-paying jobs in the private sector rather than specialist positions in the Foreign Service. Additionally, in some circumstances, State must compete with other federal agencies to recruit specialists from the same limited pool of talent. Consequently, according to State officials, State has been unable to attract and retain personnel with the skills necessary to fill some Foreign Service specialist positions, which has led to persistent vacancies in specialist positions. Vacancies in Foreign Service specialist positions at overseas posts present additional challenges because specialized skills and competencies are often required to perform the work of these positions. According to State officials, because Foreign Service generalists may be assigned to work outside of their career tracks, in some circumstances, State has more flexibility in filling a generalist vacancy than a specialist vacancy. For example, generalists outside the consular career track can serve as a consular officer for one or more tours of duty. However, specialist positions often require specialized skills or experience that generalists may not possess.
In addition, according to staff at overseas posts, it is generally not possible for a Foreign Service specialist from one skill group to perform the work of a Foreign Service specialist from a different skill group. For instance, a Foreign Service specialist assigned to the medical section at a post will not be able to help address the workload of a vacant position in the information management section. Thus, according to staff at overseas posts, vacancies in specialist positions at the posts may create greater challenges than vacancies in generalist positions. State’s Data Show Persistent Foreign Service Vacancies at Overseas Posts with State’s Highest Foreign Policy Priorities According to State’s data, as of March 31, 2018, overseas posts with State’s highest foreign policy priorities had the highest percentages of vacant Foreign Service positions. Using its Overseas Staffing Model process, State assigns each embassy to one of seven categories based primarily on the level and type of work required to pursue the U.S. government’s diplomatic relations with the host country at post. As we previously reported, the rankings are closely associated with the department’s foreign policy priorities; the higher the category, the greater the resources needed to conduct the work of the overseas post and the higher the post’s foreign policy priority. For example, the highest-level category, level 5+, includes the largest, most comprehensive full-service posts, where the host country’s regional and global role requires extensive U.S. personnel resources. The lowest-level category includes small embassies with limited requirements for advocacy, liaison, and coordination with the host country’s government. As shown in table 2, according to State’s data, as of March 31, 2018, overseas posts in the “Embassy 5+” category had the highest percentage of vacant positions. The results of this analysis were similar to those we reported in 2012. State’s Data Show Higher Vacancy Rates in Regions with Security Risks That Could Threaten U.S. Foreign Policy Interests While State has Foreign Service vacancies worldwide, as of March 31, 2018, the highest percentages of vacancies were in the South and Central Asian Affairs Bureau (SCA) and Near Eastern Affairs Bureau (NEA)—bureaus representing regions with heightened security risks that could threaten U.S. foreign policy interests, according to State. SCA, which includes countries such as Afghanistan, Pakistan, and India, faces a host of security and stability challenges that could threaten U.S. interests, according to a February 2018 report from State’s Office of Inspector General (OIG). NEA includes countries such as Egypt, Iraq, and Saudi Arabia, which have faced numerous security threats in recent years that could also threaten U.S. interests overseas. As shown in figure 4, among State’s regional bureaus, as of March 31, 2018, SCA and NEA had the highest percentages of overseas Foreign Service vacancies at 21 percent (238 of 1,115 positions) and 18 percent (234 of 1,279 positions), respectively. In 2012, we reported that these two bureaus also had the highest percentages of overseas Foreign Service vacancies among regional bureaus.
Overseas Foreign Service Vacancies Have Adverse Effects on State’s Diplomatic Readiness Vacancies in Overseas Foreign Service Positions Increase Workloads and Affect Employee Morale, According to Staff at Overseas Posts Vacancies in Foreign Service positions at overseas posts increase workloads and adversely affect the morale of Foreign Service employees. According to State officials in headquarters and staff at overseas posts, when a Foreign Service position at an overseas post is vacant, Foreign Service employees at that post are generally responsible for covering the workload of the vacant position. Further, Foreign Service employees at some posts—particularly posts with fewer Foreign Service staff—may be responsible for covering the workload of multiple vacant positions. For example, at two African posts, Foreign Service employees described covering the workload of multiple vacant Foreign Service positions. As a result of increased workloads, according to staff at overseas posts, Foreign Service employees are also more likely to have less time available to perform some important functions. Such functions include training and supervising entry-level Foreign Service employees, local staff, and eligible family members (EFM); reducing the risk of fraud, waste, and abuse; improving and innovating processes at post that could reduce inefficiencies; initiating and implementing projects that could enhance various diplomatic efforts; and conducting maintenance of systems. In addition, according to staff at overseas posts, vacancies adversely affect staff morale. Staff at multiple posts said that vacancies and the resulting increased workloads had created substantial stress and increased “burnout” of Foreign Service employees at the posts. They noted that these levels of stress and burnout had contributed to Foreign Service employees’ ending their overseas assignments early for medical or personal reasons. These curtailments, in turn, had increased the overall vacancies and their effects at overseas posts. Vacancies in Overseas Foreign Service Generalist Positions, Especially in the Political and Economic Career Tracks, Adversely Affect State’s Diplomatic Readiness According to staff at overseas posts, vacancies in Foreign Service generalist positions at overseas posts adversely affect State’s diplomatic readiness. Among Foreign Service generalist career tracks, the political and economic career tracks had the two largest percentages of vacant positions—20 percent and 16 percent, respectively—as of March 31, 2018. According to staff at overseas posts, vacancies in political and economic positions at overseas posts—particularly posts with fewer Foreign Service employees—limit the amount of reporting on political and economic developments that posts are able to submit back to State headquarters. For example, Foreign Service employees from three posts in Africa told us that persistent, long-term vacancies in those posts’ political and economic positions had constrained their abilities to provide full reporting on political and economic developments in their host countries. According to staff at overseas posts, reporting on political and economic developments in other countries—submitted by overseas posts back to State headquarters—is essential for State to make informed foreign policy decisions.
Foreign Service employees from two posts in large countries in East and South Asia also told us that vacancies in these sections had limited their capacity to engage with host government officials on important, strategic issues for the United States, such as reducing nuclear proliferation or enhancing trade and investment relationships with the United States. Vacancies in the political and economic career tracks at overseas posts could adversely affect State’s ability to achieve two of the goals in State and USAID’s joint strategic plan for fiscal years 2018 through 2022—(1) renew America’s competitive advantage for sustained economic growth and job creation and (2) promote American leadership through balanced engagement. Vacancies in Overseas Foreign Service Specialist Positions May Heighten Security Risks at Overseas Posts and Disrupt Post Operations According to staff at overseas posts, vacancies in Foreign Service specialist positions at overseas posts may heighten the level of security risk at the posts and disrupt post operations. Among Foreign Service specialist skill groups with the highest number of vacant positions, security officer, office management specialist, and information management had the largest percentages of vacant positions—16 percent, 16 percent, and 14 percent, respectively—as of March 31, 2018. Security Officer According to staff at overseas posts, vacancies in security officer positions at overseas posts reduce the amount of time that security staff can spend identifying, investigating, and responding to potential security threats to the post. Security officers are also responsible for identifying and analyzing host-country intelligence-gathering efforts at their respective overseas posts—and post staff told us that, because of vacancies in these positions, some security officers had been unable to complete this work for their posts, potentially increasing the risk of foreign government officials gaining access to sensitive information. Also, post staff told us that security officer vacancies limit the amount of time that security officers present at posts can devote to important security oversight activities, including regular training, drilling, and supervising of local guard forces and security contractors. Post staff noted, for example, that security officers at overseas posts should conduct regular training and drilling exercises to evaluate their local guard force’s effectiveness in searching a vehicle entering the post compound for explosive devices. According to post staff, when these important security oversight activities are not properly and regularly conducted, the level of security risk at these overseas posts may increase. Information Management According to State officials in headquarters and staff at overseas posts, as well as reporting by State’s OIG, vacancies in information management positions at overseas posts have increased the vulnerability of posts’ computer networks to potential cybersecurity attacks and other malicious threats. State officials told us that the Foreign Service had faced chronic shortages of information management staff available to fill these positions worldwide. According to State officials, because of ongoing information management vacancies, some required tasks—such as conducting planned network maintenance—were performed infrequently or not at all. 
In another example, staff at overseas posts said that because of vacancies, information management staff had been unable to regularly check their computer system logs to ensure that security breaches had not taken place. Post staff added that, if a breach did occur, vacancies could increase the amount of time needed to identify an attack and deploy countermeasures, further increasing the risks to posts’ computer networks. Inspections conducted by State’s OIG from fall 2014 to spring 2016 found that information management staff at 33 percent of overseas posts had not performed various required information management duties. According to State’s OIG, neglect of these duties may leave the department vulnerable to increased cybersecurity attacks. Office Management Specialist According to staff at overseas posts, the office management specialist position at overseas posts has evolved considerably over time; these specialists increasingly play a critical role in ensuring that the work of overseas posts is effectively completed. Post staff said that office management specialists provide administrative and other support services to other Foreign Service employees and are assigned to various sections of post. For example, staff at one post noted that office management specialists assigned to the Security Officer sections at overseas posts reduce the workload of security officers by completing more routine security tasks and allowing the security officers to focus on more challenging or involved tasks necessary to secure overseas posts. Post staff told us that vacancies in office management specialist positions reduce the amount of work that can be completed by other Foreign Service employees at overseas posts. For example, when office management specialist positions assigned to the Security Officer or Information Management sections of posts are vacant, these vacancies further exacerbate the higher number of vacancies that already exist in these sections. According to staff at overseas posts, higher numbers of office management specialist vacancies require other Foreign Service employees to spend a significant amount of time on administrative tasks, reducing the amount of time these staff can spend on mission-critical activities. State Described Various Efforts to Address Overseas Foreign Service Vacancies, but These Efforts Are Not Guided by an Integrated Action Plan to Reduce Persistent Vacancies State Officials Described Various Efforts to Help Address Vacancies Officials in headquarters and at overseas posts described various State efforts to help address overseas Foreign Service vacancies. According to State officials, Foreign Service vacancies at overseas posts are a complex problem that multiple offices within State address on an individual basis. State’s Efforts to Address Overseas Foreign Service Vacancies Are Not Guided by an Integrated Action Plan to Reduce Persistent Vacancies State’s various efforts to address overseas Foreign Service vacancies are not guided by an integrated action plan to reduce persistent vacancies. Our 2017 High-Risk Series report calls for agencies to, among other things, design and implement action plan strategies for closing skills gaps. The action plan should (1) define the root cause of all skills gaps within an agency and (2) provide suggested corrective measures, including steps necessary to implement solutions. This report also emphasizes the high risk that mission-critical skills gaps in the federal workforce pose to the nation. 
While various State offices have implemented the efforts we identified, State lacks an action plan that is integrated—or consolidated—across its relevant offices to guide its efforts to address persistent overseas Foreign Service vacancies. Moreover, some staff at overseas posts acknowledged that the efforts State has taken to help address vacancies have not reduced persistent Foreign Service vacancies, notably in specialist positions. In response to our inquiry about an action plan, State officials said that the agency does not have a single document that addresses Foreign Service staffing gaps at overseas posts. Instead, State officials directed us to State’s Five Year Workforce Plan: Fiscal Years 2016-2020, stating that it was the most comprehensive document that outlines State’s efforts to address Foreign Service vacancies at overseas posts. The workforce plan notes that it provides a framework to address State’s human capital requirements and highlights State’s challenges and achievements in recruiting, hiring, staffing, and training Foreign Service staff. However, in reviewing the portions of the workforce plan that State indicated were most relevant, we found that the workforce plan does not include an integrated action plan that defines the root causes of the persistent overseas Foreign Service vacancies we identified or suggests corrective measures to reduce vacancies in these positions, including steps necessary to implement solutions. State officials also noted that they frequently meet to discuss and address workforce issues. For example, they said they convene a multi-bureau planning group that meets biweekly to discuss strategic workforce issues such as hiring needs based on attrition and other issues. However, according to State officials, this group has not developed an action plan to reduce persistent Foreign Service vacancies at overseas posts. In short, State lacks an integrated action plan that identifies the root causes of persistent Foreign Service vacancies at overseas posts and includes corrective measures to address them. Without defining the root causes of persistent Foreign Service vacancies at overseas posts and identifying appropriate corrective measures, overseas vacancies may persist and continue to adversely affect State’s ability to achieve U.S. foreign policy goals. Conclusions Foreign Service generalists and specialists at overseas posts are critical to advancing U.S. foreign policy and economic interests abroad. However, for at least a decade, the Foreign Service has had persistent vacancies in both generalist and specialist positions at overseas posts. In particular, large numbers of vacant positions have persisted over time in certain overseas Foreign Service positions, such as information management and security officer positions. These vacancies in critical positions at overseas posts have adversely affected State’s ability to carry out its mission effectively and threaten State’s ability to ensure the security and safety of its employees, their families, and post facilities. While State has made some efforts to address Foreign Service vacancies, addressing chronic vacancies in critical positions at overseas posts requires a thoughtful, coherent, and integrated action plan that defines the root causes of those vacancies and suggests corrective measures to reduce them, as called for in our 2017 High-Risk Series report.
Developing such an action plan would help State address its persistent staffing gaps, improve its ability to achieve U.S. foreign policy goals, and help ensure secure and efficient operations. Recommendation for Executive Action The Secretary of State should develop an integrated action plan that defines the root causes of persistent Foreign Service vacancies at overseas posts and provides suggested corrective measures to reduce such vacancies, including steps necessary to implement solutions. (Recommendation 1) Agency Comments We provided a draft of this report to State for review and comment. In its comments, reproduced in appendix III, State concurred with our recommendation. State also noted that it has taken actions and identified some causes of vacancies, but acknowledged that it lacks an integrated action plan and will take steps to develop such a plan. State also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6881 or bairj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report examines (1) vacancies in the Department of State’s (State) Foreign Service staffing at overseas posts, (2) reported effects of Foreign Service vacancies on diplomatic readiness, and (3) State’s efforts to address Foreign Service vacancies. To address these three objectives, we interviewed State officials from the department’s Bureau of Human Resources and Bureau of Consular Affairs as well as State officials representing the Offices of the Executive Director for State’s six regional bureaus. We also interviewed staff at 10 overseas posts. We conducted in-person interviews with staff at 3 of these posts—the U.S. Embassy in Beijing and the U.S. Consulate in Shanghai, China, and the U.S. Embassy in New Delhi, India. We conducted telephone interviews with staff at the other 7 posts—the U.S. Embassies in Abuja, Nigeria; Bogota, Colombia; Kinshasa, Democratic Republic of the Congo; Kabul, Afghanistan; Mexico City, Mexico; and N’Djamena, Chad; and the U.S. Consulate in Frankfurt, Germany. We used the following criteria to select overseas posts for interviews: (1) posts with larger numbers of Foreign Service vacancies; (2) posts with diversity in the types of Foreign Service positions that were vacant; (3) posts with higher relative importance to U.S. economic, national security, and other foreign policy interests; and (4) posts in a range of geographic locations by State region. To examine vacancies in State’s Foreign Service staffing at overseas posts, we analyzed State’s personnel data on Foreign Service staffing at overseas posts from the department’s Global Employment Management System (GEMS), as of March 2018. Our analysis of the GEMS data includes Foreign Service positions filled by permanent Foreign Service employees as well as positions filled by nonpermanent Foreign Service employees, such as Consular Fellows. 
This analysis does not include the number of staffed and vacant positions at overseas posts in Libya, Syria, and Yemen, which, at the time of our review, were in suspended operations status, as well as U.S. Mission Somalia, which was operating under special circumstances at a different location. To calculate vacancy rates, we divided the number of positions listed as vacant in GEMS by the total number of positions. For example, a post with 10 positions and 2 vacancies would have a vacancy rate of 20 percent. We calculated vacancy rates for each of the following categories: type (i.e., generalist or specialist), function (e.g., consular or information management), regional bureau (e.g., Bureau of African Affairs or Bureau of Western Hemisphere Affairs), and embassy and nonembassy rankings from State’s Overseas Staffing Model (e.g., Embassy 3+ or 5). According to State officials, the data in GEMS have a number of limitations:

The number of vacant positions at overseas posts listed in GEMS may be overstated, because State has not yet decided to remove some of these positions from its database.

Some of the vacancies in GEMS are short-term or temporary. Foreign Service employees periodically rotate out of their positions at their overseas posts, sometimes creating temporary vacancies until the positions are filled by incoming Foreign Service employees.

The GEMS data show larger numbers of vacant Foreign Service positions at posts in Afghanistan, Iraq, and Pakistan than actually were unstaffed at these posts. According to State officials, this discrepancy results from State’s relying heavily on shorter-term assignments to fill Foreign Service positions at these locations. These shorter-term assignments are not reflected in GEMS, and the positions therefore appear vacant.

The GEMS data may not reflect Foreign Service employees who have been temporarily reassigned from one overseas post to another.

The GEMS data may show positions as filled although the Foreign Service employee filling the position has not yet arrived at post.

To assess the reliability of the GEMS database, we asked State officials whether State had made any major changes to the database since our 2012 report, when we assessed the GEMS data to be sufficiently reliable. State officials indicated that no major changes had been made. We also tested the data for completeness (a minimal illustration of such a test appears after appendix IV), confirmed the general accuracy of the data with officials at selected overseas posts, and interviewed knowledgeable officials from State’s Office of Resource Management and Organizational Analysis concerning the data’s reliability. We found the GEMS data to be reliable for the purpose of determining the numbers and percentages of vacant Foreign Service positions at overseas posts. We did not validate whether the total number of authorized overseas Foreign Service positions was appropriate or met State’s needs. We also reviewed State workforce planning documents and budget documents, such as State’s Five Year Workforce and Leadership Succession Plan: Fiscal Years 2016-2020 and the Quadrennial Diplomacy and Development Review. In addition, we reviewed State Office of Inspector General reports as well as our previous reports on human capital challenges at State and effective strategic human capital management across the federal government.
In particular, our report High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others notes that strategic human capital management is a high-risk issue across the federal government and lists five key elements as a road map for agency efforts to improve and ultimately address such issues. For our third objective, we assessed whether State’s efforts to address vacancies were guided by a corrective action plan that identifies the root causes of persistent Foreign Service vacancies at overseas posts and suggests corrective measures to reduce such vacancies, including steps necessary to implement solutions. We conducted this performance audit from August 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Analysis of Vacant Foreign Service Positions at Overseas Posts in Various Categories as of March 31, 2018 [The appendix tables, covering staffed and vacant positions by generalist career track, the 10 largest specialist skill groups, regional bureau, and Overseas Staffing Model category, are not reproduced here.] The “Economic” generalist career track includes positions from the “Science Officer” staffing skill group in the GEMS data. 170 Foreign Service employees were not staffed to one of the six regional bureaus. Appendix III: Comments from the Department of State Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Godwin Agbara (Assistant Director), Ian Ferguson (Analyst-in-Charge), Anthony Costulas, Natalia Pena, Debbie Chung, Chris Keblitis, Reid Lowe, Justin Fisher, and Alexander Welsh made key contributions to this report.
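To make the completeness testing described in appendix I concrete, the following minimal sketch shows the kind of record-level check such testing might involve. Everything in it is hypothetical: the field names, statuses, and sample records are stand-ins, since GEMS’s actual schema is not published in this report.

# Hypothetical completeness check for GEMS-like position records.
# Field names ("position_id", "status") are illustrative stand-ins.
VALID_STATUSES = {"filled", "vacant"}

def completeness_issues(records):
    """Return problems that would distort a vacancy-rate calculation."""
    issues = []
    for i, record in enumerate(records):
        if not record.get("position_id"):
            issues.append(f"record {i}: missing position_id")
        if record.get("status") not in VALID_STATUSES:
            issues.append(f"record {i}: unrecognized status {record.get('status')!r}")
    return issues

sample = [
    {"position_id": "POL-001", "status": "filled"},
    {"position_id": "", "status": "vacant"},    # incomplete identifier
    {"position_id": "IM-014", "status": None},  # unknown status
]
print(completeness_issues(sample))  # flags the second and third records

A dataset that passes checks of this kind can then feed a vacancy-rate calculation without records being silently dropped or miscounted.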
Why GAO Did This Study State assigns Foreign Service employees to more than 270 embassies and consulates worldwide to advance U.S. foreign policy and economic interests. In 2009 and 2012, GAO identified ongoing Foreign Service staffing gaps. GAO was asked to review State's Foreign Service staffing. This report examines (1) vacancies in State's Foreign Service staffing at overseas posts, (2) reported effects of Foreign Service vacancies on diplomatic readiness, and (3) State's efforts to address Foreign Service vacancies. To address these objectives, GAO analyzed State's Global Employment Management System data as of March 2018. The system includes information on Foreign Service and Civil Service positions, including the total number of authorized Foreign Service positions and whether each position is filled or vacant. GAO also reviewed its relevant prior reports and State workforce planning documents. In addition, GAO interviewed State staff at 10 overseas posts, selected on the basis of large numbers of Foreign Service vacancies and diversity in the types of Foreign Service positions that were vacant at these posts, among other factors. What GAO Found The Department of State's (State) data show persistent Foreign Service vacancies at overseas posts since 2008. According to the data, 13 percent of overseas Foreign Service positions were vacant as of March 2018. This percentage is similar to the percentages GAO reported for 2008 and 2012, when 14 percent of these positions were vacant. In addition, State's data show persistent vacancies at overseas posts in generalist positions that help formulate and implement U.S. foreign policy and in specialist positions that support and maintain the functioning of overseas posts. State's data also show persistent Foreign Service vacancies at overseas posts with State's highest foreign policy priorities and in regions with security risks that could threaten U.S. foreign policy interests. According to staff at overseas posts, Foreign Service vacancies adversely affect State's ability to carry out U.S. foreign policy. Staff at overseas posts told GAO that vacancies increase workloads, contributing to low morale and higher stress for Foreign Service staff, and that vacancies in Political and Economic positions—20 percent and 16 percent, respectively—limit the reporting on political and economic issues that posts are able to provide to State headquarters. Notably, officials also stated that vacancies in specialist positions may heighten security risks at overseas posts and disrupt post operations. For instance, some overseas post staff said that vacancies in Information Management positions had increased the vulnerability of posts' computer networks to potential cybersecurity attacks and other malicious threats. State described various efforts—implemented by multiple offices in the department—to help address overseas Foreign Service vacancies, but these efforts are not guided by an integrated action plan to reduce persistent vacancies. An example of State's efforts is the “Hard-to-Fill” program, which allows Civil Service staff an opportunity to fill a Foreign Service vacancy on a single overseas tour. According to GAO's 2017 High-Risk Series report, an agency should design and implement an action plan—integrated across its relevant offices—that defines the root causes of all skills gaps and suggests corrective measures. However, State has not developed such an action plan for reducing persistent overseas Foreign Service vacancies.
Without developing an integrated action plan, overseas Foreign Service vacancies may persist. As a result, State's ability to achieve U.S. foreign policy goals and help ensure secure and efficient operations could be adversely affected. What GAO Recommends GAO recommends that State develop an integrated action plan that defines the root causes of persistent Foreign Service vacancies at overseas posts and suggests corrective measures to reduce such vacancies. State concurred with GAO's recommendation and noted that it will take steps to develop an integrated action plan.
Background Patriot Weapon System and Equipment The Patriot weapon system is a mobile Army surface-to-air missile system designed to counter tactical ballistic missiles; cruise missiles; and other threats such as airplanes, helicopters, and unmanned aerial vehicles. The Patriot system was first deployed in the early 1980s; since that time, it has received a number of substantial updates to keep pace with growing threats. Patriot units are deployed worldwide—in Germany and South Korea, for example—in defense of the United States’ and its allies’ key national interests, ground forces, and critical assets. The Army currently has 15 Patriot battalions, all in its active component. Each battalion is organized into groups known as fire units, along with a headquarters and headquarters battery. Each battalion is controlled by its own command and control station and can manage up to six fire units, although a battalion is typically deployed with four. A fire unit is made up of four basic components: (1) a ground-based radar to detect and track targets; (2) launchers; (3) interceptor missiles; and (4) a command, control, and communication station. Overall, a fire unit’s equipment includes eleven unique major end items, including the radar, the launchers, and an electric power plant, among other items. Figure 1 provides a listing of the major end items in a Patriot fire unit (top) along with the notional employment of some of these items (bottom). Reset and Recapitalization Processes Two of the primary processes the Army uses to maintain the Patriot system are reset and recapitalization, summarized in table 1. The Army’s reset program seeks to bring Patriot equipment returning from the U.S. Central Command area of responsibility back to Army standards. Reset aims to return Patriot equipment to a pre-deployment condition in order to prevent Patriot units from having to spend home station training funds to keep their equipment functional after returning from operations in austere environments for extended periods. The Army also relies heavily on recapitalization to restore Patriot equipment. A longer and more intensive process than reset, recapitalization seeks to restore equipment to what the Army considers a “like-new” condition and, according to Army guidance, is a “near zero time or zero mile” maintenance process. The recapitalization process seeks to add life to the system, and it provides an opportunity for the Army to make incremental modernization upgrades, such as inserting new software, adding new technology, or replacing obsolete parts. For example, the Army is upgrading the Patriot system to prepare for its integration into the Integrated Air and Missile Defense Battle Command System. As the Army fields this modernized command and control system, the Patriot equipment undergoing recapitalization will also change, but the Army plans to continue recapitalization to support the Patriot system’s mission through 2048. Specifically, the Army expects that the transition to the Integrated Air and Missile Defense Battle Command System will allow it to replace current command and control elements. However, remaining end items, such as launchers, would continue to require recapitalization through the full life of the system to 2048.
If the Integrated Air and Missile Defense Battle Command System, which is currently planned for initial fielding in 2022, is delayed, program and depot officials expect that they can continue to recapitalize current Patriot equipment as long as needed to support the Army’s long-term goal. However, Army officials noted that delays could require mitigation actions, such as the need to continue repairing parts that the Army would otherwise have replaced. Aside from the degree of work performed, the recapitalization and reset processes differ in several other key ways. For instance, the Army generally provides units undergoing recapitalization with another set of Patriot equipment in a one-for-one exchange. In contrast, units undergoing reset receive the same set of equipment back after work is completed and are not provided other equipment while their equipment undergoes reset at the depot. Additionally, the target length for each process differs; the Army aims to recapitalize one battalion’s worth of equipment each year, while reset work is expected to be completed in 180 days to meet the timelines of the Army’s process to prepare units for potential deployment. Letterkenny Army Depot primarily conducts the maintenance work for both of these efforts under the management of Army Materiel Command and in coordination with the Patriot program office. Patriot Demands and Equipment Mission Capable Rates Patriot units are in high demand. As we found in October 2017, the Army believes its Patriot force is operating at capacity given a consistently high pace of operations, and Army studies have found that any additional operational demands and potential wartime demands would exceed current capacity. We also found that the Army was planning to increase the capacity of its Patriot force in two ways: first, by fielding five small detachments in fiscal year 2018 that would provide the ability to deploy a Patriot battery without a full battalion-level command and control element, and second, by increasing the size of an existing test detachment in order to relieve the Patriot battalion currently assigned to conduct testing for Patriot modernization efforts of that mission. The Army intends for the test detachment to begin supporting Patriot modernization test events starting in the second quarter of fiscal year 2019. From fiscal years 2014 through 2017, Patriot equipment across the force was reported to be fully mission capable at least 90 percent of the time on average, in accordance with the Army’s goal, as established in Army regulation. These fully mission capable rates continue an overall trend since 2009, which a 2014 Army assessment of Patriot readiness attributed to the recapitalization program. Specifically, this assessment noted that the worldwide average for Patriot unit fully mission capable levels was above 90 percent and that units that underwent recapitalization consistently experienced positive spikes in readiness. Further, this assessment highlighted the importance of the Army’s reset program, noting that it must be sustained because deployed Patriot units are subjected to the highest pace of operations in the Patriot force. Reset Equipment Is Often Returned Late to Units, and the Army Has Not Analyzed the Relative Importance of Factors Contributing to the Delays During the period we reviewed, the Army often did not return reset equipment to units in accordance with the timelines established in Army regulation, which affected unit training.
Although the Army has identified several factors that caused delays in returning equipment to units and monitors these factors, it has not assessed their relative importance. The Army Often Returns Reset Equipment to Patriot Units Late, Which Affects Training From fiscal years 2014 through 2017, the Army often did not return reset equipment to units in accordance with the timelines established in the Army’s keystone regulation governing its process to build ready forces. This regulation establishes phases through which a unit passes as it prepares for a potential deployment. The first of these, the reset phase, begins when a majority of the unit’s personnel have returned from deployment and must last a minimum of 180 days. At the conclusion of the 180 days, the unit enters the train/ready phase, at which point it may be deployed again and needs to have its equipment back in order to do so. Because of this standard, the Army must return a unit’s equipment from reset within 180 days from the start of the unit’s reset phase. From fiscal years 2014 through 2017, the Army reset seven battalions, and for six of these battalions the Army did not return all of the units’ equipment within 180 days. Two of these battalions—the 2-43 Air Defense Artillery and 4-3 Air Defense Artillery—experienced delays that were deliberately planned. Specifically, Army officials told us that the installation of system upgrades for these battalions extended the overall reset timeline by 60 days. One official stated that this was requested and approved, and explained that if the upgrades had been installed separately after equipment had been reset, it would have taken 4 months to conduct the work. However, as shown in figure 2, of the remaining five Patriot units that completed reset during the period we reviewed, only one received all of its equipment back within 180 days. Patriot battalion officials we interviewed told us that delays in the receipt of reset equipment forced them to modify their scheduling and execution of required collective training. For example, one battalion commander we spoke with said that without equipment his battalion could not effectively train for some collective tasks, such as exercises that require moving the system. Additionally, leadership from two battalions we spoke with told us that the late return of reset equipment compressed the training time available for them to conduct field exercises. This can create unnecessary challenges in meeting Army training requirements as units progress through the Army’s process for building ready units. Specifically, according to the Army’s force generation guidance, a unit is expected to be ready to redeploy on day 181 after returning from its last deployment to its home station. As one battalion commander described it, the collective-level training that units conduct during these shortened windows is “sufficient, but not optimal.” Patriot units have used a series of actions to mitigate the impact of delays in equipment receipt after maintenance, but such mitigation actions are sometimes not feasible or optimal. For example, Patriot unit officials told us that the Army shares equipment between battalions that are collocated on the same installation but at different points in the readiness building timeline. Specifically, when one battalion turns in equipment for reset, certain pieces of equipment from another battalion on the same installation, if available, might be borrowed to conduct training.
Battalion officials noted, however, that this measure may not always be feasible. Leadership from two Patriot battalions, for example, cited instances where their units were unable to train during their reset periods and could not borrow equipment from other battalions located on the same installation because those battalions were deployed. In addition, units use simulators to conduct individual-level training to give personnel experience with new system upgrades, though Patriot brigade officials noted this is a stopgap measure while units are without equipment and does not allow for collective training. Lastly, Patriot units can—once delayed equipment arrives or by borrowing equipment—conduct some collective training for extended hours (e.g., during evenings) each day while at their home station, but a battalion official noted that doing so is also not optimal for unit morale. Battalion commanders we spoke with told us that their units were sufficiently trained and ready to deploy, despite the delays in the return of the equipment to the units. However, a memorandum from a brigade commander noted that given the high pace of operations, it is important that units receive their equipment in a timely manner to enable them to complete training for their next deployment, as delays can create a notable impact on crew and collective training. The late return of reset equipment could therefore have a detrimental impact on units’ ability to conduct training to meet assigned missions. The Army Has Identified Factors Affecting Maintenance Timeliness The Army has identified several factors affecting the timeliness of Patriot maintenance, as shown in table 2. Some of the factors affecting timeliness, as identified by Army officials, are directly within the control of Letterkenny, where reset is conducted, and some are not. Specifically, Army officials stated that U.S. Transportation Command and the Defense Logistics Agency also have responsibilities related to some of the factors that can affect timeliness, such as the transport of equipment and availability of parts, respectively. These factors are discussed in more detail below.

Preventive maintenance. According to Army officials and Army documentation, the unit leadership of some deployed Patriot battalions does not emphasize preventive maintenance. As a result, equipment may not be properly maintained to Army standards, which can create additional work tasks for depot personnel when they receive it, such as conducting additional or more detailed inspections.

Unexpected damage. Army officials cited some instances where equipment sent to the depot arrives in worse than expected condition, either due to damage incurred during transport or because unit personnel did not accurately report the condition of the equipment prior to turning it in. For example, in December 2017 Letterkenny officials documented that a battalion’s missile launcher was returned to the depot with unexpected severe corrosion on power cables, and certain equipment items, such as generators, were completely inoperable. Officials cited another instance where a radar was pressure-washed prior to its return to the depot, causing extensive damage. These kinds of unexpected conditions result in greater repair work than anticipated for depot employees.

Supply chain challenges.
Officials at Letterkenny told us that their forecasts for parts orders have not been consistently met through Army and Department of Defense supply chain processes, but that the depot was taking steps to improve its own forecasting. An official also noted that problems can arise if sole-source suppliers for critical parts go out of business, or if the depot has to order parts that are no longer regularly produced by vendors due to obsolescence. Patriot program office officials provided an example of a radio that is part of the Patriot system and is no longer in production, and noted that the program office was working with Army headquarters officials to identify a solution. The Army uses a series of measures to mitigate parts availability issues, such as having the depot use its own equipment to fabricate some items on short notice (see fig. 3) and, according to Army officials, by taking parts from incoming equipment and using them for equipment nearing completion of maintenance. Additionally, in July 2017, the depot received permission to purchase critical “long-lead” parts for specific Patriot items in advance of anticipated need, although, according to officials, as a general rule and practice, the depot is not allowed to purchase items without funding in place. Letterkenny officials told us that in cases where they are unable to acquire critical parts, or lack the funds to do so, delays can occur.

Depot quality controls. Time spent remedying maintenance errors and quality defects—such as incorrect assemblies, defective parts, or improper painting during depot operations—may contribute to the depot’s timeliness challenges. Army officials stressed that the Patriot system is complex and that certain maintenance tasks can be challenging because it can be difficult to isolate equipment faults. For example, the Patriot radar system is composed of thousands of elements (see fig. 4), which, according to officials, requires extensive testing to ensure that each element is operational. Depot officials told us that their processes are designed to ensure that finished products meet operational standards, and that doing so sometimes takes longer than expected. Letterkenny uses a series of metrics and reporting methods, such as internal tracking of defects and surveys and reports from customers, to monitor, document, and correct quality defects during the Patriot maintenance process to ensure that any maintenance errors or defects are identified before the equipment is returned to units. However, quality defects that may affect timeliness can still arise. Each fiscal year Letterkenny establishes a target for hours spent at the depot correcting quality defects that arise during maintenance, which are then tracked and used as indicators of the overall quality of the maintenance process. The monthly time spent correcting quality defects, as tracked by the depot and averaged across each year, varied. Specifically, the average in fiscal year 2015 was below the depot’s set target, but the averages in fiscal years 2014, 2016, and 2017 exceeded the targets. For example, the average time spent correcting quality defects ranged from 846 hours a month in fiscal year 2016 to 1,242 hours a month in fiscal year 2017, above those years’ monthly target of 800 hours.

Equipment transportation. Transportation time is included in the 180-day policy for returning equipment from reset to Patriot units, and it often takes a significant amount of time before equipment is transported to the depot from theater.
As such, according to Army documentation, the depot can be left with only 120 days to complete reset work and still return equipment to units within the 180-day policy. According to Army documentation, to mitigate this issue the Army airlifts a number of critical Patriot equipment items, such as radars, from theater to the depot so that reset work can begin earlier on these items. Additionally, unit officials and a program official involved in planning for the Army’s reset process noted that equipment items are sent back from the depot as soon as reset work is completed; the depot does not wait until the entire unit equipment set is complete. However, as shown previously in figure 2, these kinds of mitigation actions with respect to transportation have not been sufficient to ensure that units receive all of their equipment back within the 180 days allowed by policy. Army Monitors Factors Affecting Maintenance Timeliness, but Has Not Conducted an Analysis of Their Relative Importance Although the Army monitors the factors that have affected maintenance timeliness, it has not conducted an analysis to identify their relative importance. According to Army documents and officials we interviewed, the Army monitors and uses a number of processes to identify, discuss, and select mitigation actions for factors affecting maintenance timeliness, such as:

Quarterly working group meetings of Patriot stakeholders. The Army monitors maintenance timeliness through a quarterly working group, which includes representatives from key Army Patriot stakeholder organizations such as Training and Doctrine Command, Aviation and Missile Command, Letterkenny, and Patriot unit higher command headquarters. Any timeliness issues discussed at such meetings, such as potential training impacts and transportation delays, are conveyed to units afterward.

Letterkenny weekly production meetings. Letterkenny command staff hold weekly production meetings to discuss various issues affecting maintenance production, identify potential factors that could delay depot work, and select mitigation measures against such factors.

Army Materiel Command oversight of Letterkenny production. Army Materiel Command monitors and tracks Letterkenny’s actual and projected maintenance performance against the scheduled completion dates for Patriot maintenance projects, and depot officials internally review the depot’s performance for each Patriot equipment item each week before submitting the results to Army commands monthly.

Although Army officials are aware of challenges in returning reset equipment to Patriot units within the 180-day policy and have taken some steps to minimize these impacts, they could not quantify how much each of the factors affecting timeliness contributes to delays in completing maintenance and returning equipment to units. Moreover, based on our discussions with different stakeholders associated with the sustainment of the Patriot system, there are different perceptions as to the degree to which the various factors contributed to delays in completing maintenance and returning reset equipment to units. For example, during our meetings, depot officials indicated that supply chain issues were the primary timeliness challenge. In contrast, a senior program office official and unit officials emphasized the importance of transportation of equipment and its effects on timeliness.
In addition, Letterkenny and Army stakeholders told us that while they work to identify and correct issues as they arise through the processes described above, their efforts to remedy these issues are conducted in isolation from one another and are not compiled and compared in a way that would enable the Army to identify each factor's relative effect on timeliness. Although aware of the challenges of returning equipment to units in a timely manner, the Army has not comprehensively analyzed the relative importance of the various factors identified above that affect Patriot maintenance timeliness. Army Regulation 702-11 states that fact-based decision-making and the use of performance information to foster continuous improvement are essential activities of quality management and assurance. Specifically, activities supporting logistics missions should engage in continued review, evaluation, and improvement. This regulation further states that Army Materiel Command, as the manager of the Army's quality program, should conduct performance reviews and assist other applicable organizations in developing corrective action plans, such as establishing protocols to mitigate risks and prevent recurrence of issues when nonconforming performance is identified. Although not required by Army regulation, one means of doing this is through comprehensive analysis, such as comparing the relative importance of factors affecting performance in order to target improvement efforts. A comprehensive analysis to identify the relative importance of factors could better position the Army to fully understand current and historic issues affecting its ability to complete Patriot equipment maintenance in a timely manner. Such an understanding would better inform corrective actions than isolated efforts and would position the Army to determine where best to target its efforts to ensure units receive equipment back in a timely manner to conduct training.

The Army Plans to Recapitalize Patriot Equipment Every 15 Years, but This Approach Introduces Some Challenges

The Army has decided to recapitalize each battalion set of Patriot equipment once every 15 years, while recognizing that this approach introduces some challenges to upgrading and supporting the system's readiness to meet its assigned missions through 2048. The Army would prefer to recapitalize Patriot equipment every 10 years, but it has reviewed two options for recapitalizing more frequently and determined that neither is feasible. According to Army documentation, the Army plans to continue sustaining and upgrading Patriot equipment to meet its long-term goal—which is to keep the system viable through 2048—by, for example, improving system reliability and enhancing its warfighting capabilities. The Army considers recapitalization a key program to achieve this goal. Specifically, in its 2014 readiness assessment of the Patriot force, the Army concluded that recapitalization is the single most important program with respect to keeping Patriot equipment viable and sustainable. Officials from multiple Army organizations also told us that the age of the Patriot system makes replacement of expendable and aged components and insertion of new technology during recapitalization important to Patriot sustainment, readiness, and its ability to meet emerging threats.
While the Army has emphasized the importance of recapitalization in achieving its long-term goals for the Patriot system, as of March 2018 the Army was not planning to adjust its recapitalization pace in the near term. According to Army documentation, recapitalizing equipment every 10 years would maintain the equipment at the Army's desired condition. However, the Army's near-term schedule for recapitalization in fiscal years 2018 through 2022 and its long-term notional schedule for recapitalization of Patriot equipment through fiscal year 2031 both outline cycling one battalion per year through recapitalization. With 15 Patriot battalions, the pace of one battalion per year means each battalion set is recapitalized only once every 15 years; achieving the desired 10-year cycle would require recapitalizing 1.5 battalion sets per year. According to Army Patriot officials, there are two main options for the Army to increase the pace of recapitalization, but each poses challenges:

Reduce the amount of equipment available for ongoing commitments and recapitalize it at the depot. Officials told us that one way the Army could increase the pace of recapitalization would be to reduce the amount of equipment available for ongoing commitments, but that this is not feasible given the current high pace of operations. Further, the Army does not anticipate that operational requirements will lessen under the projected security environment. The near-term schedule assumes that ongoing operational commitments will not change and is designed to synchronize recapitalization with currently scheduled operational deployments and training. Army officials responsible for coordinating the near-term schedule told us that the schedule has little flexibility given the Army's limited force structure of 15 battalions, and program and depot officials stated that if the Army were to recapitalize more than one battalion per year, the pool of battalions available to meet current commitments would decrease.

Procure additional equipment to provide to units turning in equipment for recapitalization. Army officials said that the Army could buy extra equipment to provide to additional units turning in their equipment for recapitalization if the Army wanted to accelerate the recapitalization pace. At the current pace of recapitalization, the Army has sufficient quantities of major equipment items to ensure that as a Patriot battalion turns in equipment for recapitalization it receives recently recapitalized equipment back on a one-to-one basis and thus is generally not without equipment. This process prevents removing Patriot battalions from operational rotations during the recapitalization period. However, officials stated that if the Army were to recapitalize more than the current one battalion per year, it would need to buy more equipment to ensure that any additional units undergoing recapitalization would not be left without equipment. Army documents indicate that the Army has assessed whether to acquire additional equipment to enable an accelerated pace of recapitalization. However, an official with responsibility for the Patriot capability and senior Army headquarters officials with responsibility for Patriot resourcing and planning told us that the Army instead has prioritized developing a replacement for the Patriot radar. This replacement radar is expected to address capability needs related to radar reliability and range to better defend against advanced threats.
Army documentation indicates that this replacement radar is expected to reach initial operational capability in fiscal year 2025. If the Army decided to reduce the amount of equipment available for ongoing commitments or to buy more equipment, then the Army would also need to make additional investments in depot resources to support accelerating the pace of recapitalization. According to Army documents and officials we interviewed, these include personnel, facilities, and equipment. However, there are a number of challenges related to putting these resources in place.

Personnel. Army documentation shows, and depot officials stated, that the depot would likely hire contractors to meet workload demands and could add shifts if the Army decided to adjust the pace of recapitalization to what it considers an optimal pace. Depot officials also told us they would try to hire contractors with some Patriot experience and place them alongside more experienced personnel in order to preserve work quality, as they have done in response to previous surges in reset work. However, the Army recognizes that Letterkenny faces challenges in expanding its workforce due to a limited pool of available workers in the area around the depot. Developing skilled Patriot maintenance personnel is also difficult. An Army study of the organic industrial base found that 11 of the 15 most critical personnel positions at Letterkenny are directly associated with Patriot maintenance, and officials noted that, due to the complexity of the system, it can take up to 5 years for Patriot maintenance personnel to become proficient.

Facilities and equipment. Depot officials stated that if the Army decided to adjust the pace of recapitalization to what it considers optimal, they would likely need to review, among other things, the tools, equipment, and facilities needed to support such an adjustment, as well as supply availability. They also told us that Letterkenny has already proposed expanding its facilities to meet projected future work, and the depot has planned for the plant equipment it will need to continue maintaining the Patriot system as upgrades are incorporated. However, they noted that it takes a full year to recapitalize the Patriot radar, including 3 months of testing, and that Letterkenny has one of only two radar test sites. Given the time required and Letterkenny's single test site, program officials stated that current conditions probably would not support recapitalizing more than one battalion a year.

Continuing the current pace of recapitalization could introduce other challenges in meeting the Army's long-term goals for the Patriot system, and Army officials stated they are aware of these challenges. Specifically, Army documentation shows, and Army officials told us, that the current pace is not optimal and that it could introduce the possibility of equipment failure as specific items remain in use past the Army's desired timeframe for recapitalizing equipment every 10 years. Additionally, depot officials told us that their biggest concern with continuing recapitalization at its current pace is that conducting recapitalization may become more costly as the system ages. As an example, they stated that there may be increased corrosion issues, adding that they have already seen a significant deterioration in the condition of some trailers.
Also, the Army's decision to continue recapitalizing equipment every 15 years instead of every 10 years provides fewer opportunities to conduct modernization, which is often done in conjunction with recapitalization. Program officials stated that modernizing the system is important because upgrades reduce the number of items that can fail, thereby making field maintenance easier. Moreover, officials from one Patriot brigade stated that their main concern with respect to Patriot is that additional operational commitments could potentially slow modernization progress and affect the Army's capability to meet threats, particularly since the capabilities and sophistication of enemy threats continue to increase. The Army has reviewed its options and the associated challenges related to increasing the pace of recapitalization and has decided that the best path forward, based on its review, is to continue recapitalizing Patriot battalion equipment sets once every 15 years. However, this pace of recapitalization includes some risk—as identified by Army officials—and will likely create challenges in meeting the Army's long-term goals for the system.

Conclusions

Maintaining good equipment condition is particularly important given the current high pace of operations for Patriot units, as well as the potential for a further increase in operational requirements. However, the Army's reset process has often delivered equipment to units late, affecting units' ability to schedule and execute training as they prepare for their next mission. The Army is aware of the challenges in completing maintenance and returning reset equipment to units, and has identified several factors that contribute to delays, but it has not analyzed how much each factor contributes to delays. Unless the Army conducts a comprehensive analysis of the relative importance of the factors affecting Patriot reset timeliness and develops and implements appropriate corrective actions based on the results, it will not be positioned to target those corrective actions where they will be most effective.

Recommendation for Executive Action

We recommend that the Secretary of the Army ensure that Army Materiel Command, in coordination with its subordinate and other Army organizations as appropriate, conducts a comprehensive analysis of the primary factors affecting timeliness to identify their relative importance in the Army's Patriot reset program and develops and implements appropriate corrective actions. (Recommendation 1)

Agency Comments

In written comments on a draft of this report, the Department of the Army concurred with our recommendation. The department stated that it is taking steps to address the recommendation, noting that it will continue analysis between Army Materiel Command, Headquarters Department of the Army, and the Patriot program office to identify and address factors that may affect reset timeliness. The Department of the Army's comments are reprinted in their entirety in appendix II. The department also provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Scope and Methodology

To evaluate the extent to which the Army's reset process supports the timely delivery of Patriot equipment back to units, we analyzed Army documents concerning recapitalization and reset activities. This included analysis of, among other things, documents describing the processes for Patriot battalion equipment transfers to and from Letterkenny Army Depot (Letterkenny), depot activities to recapitalize and reset equipment, and testing to ensure the equipment's proper operation. We also reviewed, among other documents, Army guidance on Patriot equipment status reporting, reset, materiel maintenance, and ensuring the quality of Army programs, as well as planning schedules and documents on backorders and critical items. We evaluated the Army's processes to identify and correct factors causing any reset delays against Army guidance on program performance improvement. Additionally, we analyzed data provided by the Army on Patriot equipment fully mission capable rates and the timeliness of Army Patriot reset activities from fiscal years 2014 through 2017—the most recent data available—to identify any trends. Specifically, we analyzed Patriot unit fully mission capable data as recorded by Army Aviation and Missile Command G-3 (Readiness) based on data submitted by Patriot operational units. We analyzed these data to corroborate statements regarding equipment readiness and the quality of maintenance work made by program and operational unit officials and to compare against the Army's goal for fully mission capable rates. To determine depot timeliness, we analyzed aggregate monthly data provided by the Army on Letterkenny's timeliness in completing Patriot maintenance activities against performance schedules. We also analyzed battalion-specific Army data on reset timeliness in order to determine the frequency with which Letterkenny met the reset timeliness policy. Finally, we reviewed Army data on the time spent re-working and re-inspecting equipment with quality deficiencies found during internal inspections at Letterkenny in order to inform our assessment of the potential effects of addressing quality deficiencies on depot timeliness. We assessed the reliability of these data by reviewing available system documentation, such as user manuals and data dictionaries for each of the automated information systems from which the respective data were drawn. We manually checked the data for obvious errors and missing or outlier values. We administered data reliability questionnaires to officials familiar with the data systems and assessed their responses and answers to follow-up questions, and we interviewed cognizant officials about their data management practices and use of the data. Based on these steps, we found these data to be sufficiently reliable for our purposes, including providing fiscal years 2014 through 2017 Patriot equipment fully mission capable rates, battalion-specific reset timeliness, and the time spent by the depot on correcting quality defects identified during internal inspections. (A simplified illustration of the kinds of screening checks described here appears at the end of this appendix.)

To describe the Army's plans for supporting the long-term viability of the Patriot system through recapitalization and any challenges associated with its plans, we analyzed Army regulations, guidance, and planning documents, as well as Army studies.
These included, among others, the Army's recapitalization management policy; Army documents proposing and approving a recapitalization program for Patriot; Army studies of its depot workforce, worldwide Patriot equipment readiness, and Patriot operational demands in relation to available assets; and Army guidance on materiel maintenance and useful equipment life. We also analyzed, among other documents, the Army's near-term schedule synchronizing Patriot recapitalization, reset, incremental modernization, training, and deployment schedules for fiscal years 2018 through 2022 and a long-term notional schedule for the recapitalization of Patriot equipment, by battalion set, through 2031. We also reviewed depot equipment and personnel planning documents and the Patriot life-cycle management plan, among other planning documents.

For both objectives, we interviewed cognizant Army personnel involved in the planning and conduct of Patriot recapitalization and reset. We visited Letterkenny to speak with officials and observe the facilities and the conduct of Patriot maintenance activities. In addition, we interviewed officials with responsibility for Patriot funding; officials monitoring Patriot unit readiness; officials from two Patriot battalions that recently underwent reset and their brigade headquarters; and officials from one Patriot battalion that recently underwent recapitalization and its brigade headquarters, to identify any challenges with respect to these maintenance processes, such as training or equipment transfer delays or maintenance deficiencies. The organizations and offices we interviewed during the course of our review are listed below.

Assistant Secretary of the Army for Acquisition, Logistics, and Technology, Acquisition Policy and Logistics Group
Program Executive Office, Missiles and Space, Redstone Arsenal, Huntsville, Alabama
Lower Tier Project Office, Redstone Arsenal, Huntsville, Alabama
Headquarters, Department of the Army:
G-3, Readiness Directorate
G-4, Logistics Maintenance Directorate: G-44 (M) Maintenance
G-4, 3/5/7, Current Operations and Strategic Readiness Division
G-8, Programs and Priorities, Fires Division
Army Aviation and Missile Life Cycle Management Command, Redstone Arsenal, Huntsville, Alabama
Army Aviation and Missile Command Logistics Center, Redstone Arsenal, Huntsville, Alabama
Letterkenny Army Depot, Chambersburg, Pennsylvania
32nd Army Air and Missile Defense Command, Fort Bliss, Texas
11th Air Defense Artillery Brigade, Fort Bliss, Texas
3-43 Air Defense Artillery Battalion, 11th Air Defense Artillery Brigade, Fort Bliss, Texas
31st Air Defense Artillery Brigade, Fort Sill, Oklahoma
3-2 Air Defense Artillery Battalion, 31st Air Defense Artillery Brigade, Fort Sill, Oklahoma
4-3 Air Defense Artillery Battalion, 31st Air Defense Artillery Brigade, Fort Sill, Oklahoma
U.S. Army Training and Doctrine Command Fires Center of Excellence, Fort Sill, Oklahoma
Training and Doctrine Command Capability Manager – Army Air and Missile Defense Command, Fort Sill, Oklahoma

We conducted this performance audit from June 2017 to June 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
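The screening steps described in this appendix, checking for obvious errors, missing values, and outliers, can be illustrated with a minimal sketch. The column names, example records, and outlier rule below are hypothetical and are not drawn from the Army systems we reviewed.

```python
# Minimal sketch of the kinds of data screening checks described in this
# appendix. Column names, records, and the outlier rule are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "fiscal_year": [2014, 2015, 2016, 2017, 2017],
    "rework_hours": [950.0, 760.0, 846.0, 1242.0, None],
})

# Flag missing values.
missing = records[records["rework_hours"].isna()]

# Flag outliers: here, values more than 3 standard deviations from the mean.
mean = records["rework_hours"].mean()
std = records["rework_hours"].std()
outliers = records[(records["rework_hours"] - mean).abs() > 3 * std]

print(f"{len(missing)} missing value(s), {len(outliers)} outlier value(s)")
```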
Appendix II: Comments from the Department of the Army

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, individuals who made key contributions to this report include Kevin O'Neill, Assistant Director; Jason Blake, Vincent Buquicchio, Clarice Ransom, Michael Silver, Erik Wilkins-McKee, and Matthew Young.
Why GAO Did This Study

Patriot is a mobile Army surface-to-air missile system deployed worldwide to defend critical assets and forces. The Army plans to extend the life of Patriot equipment until at least 2048 through maintaining and modernizing the system. To achieve this, the Army performs two maintenance processes: restoring equipment returning from combat back to pre-deployment conditions ("reset") and comprehensively overhauling ("recapitalizing") a portion of its equipment annually. The conference report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017 included a provision that GAO assess the Army's Patriot maintenance and recapitalization plans to ensure that operational needs are met. This report (1) evaluates the extent to which the Army's reset process supports the timely delivery of Patriot equipment back to units; and (2) describes the Army's plans for supporting the long-term viability of the Patriot system through recapitalization and any challenges associated with its plans. GAO analyzed Army guidance and equipment and maintenance data; interviewed Army officials; and assessed the Army's recapitalization plans.

What GAO Found

The Army uses reset and recapitalization to extend the life of its Patriot surface-to-air missile system. The reset process—which is intended to repair recently-deployed equipment—has often returned equipment to Patriot units late, which has affected unit training. GAO found that of the seven Patriot battalions that underwent reset from fiscal years 2014 through 2017, only one received its equipment within 180 days, in accordance with Army policy (see figure). Patriot unit officials told GAO that such delays reduced the time available for unit training, creating challenges in meeting training requirements as units prepare for their next mission. The Army has identified and analyzed several factors affecting reset timeliness, ranging from supply chain issues to transportation. However, the Army has not comprehensively analyzed the relative importance of these factors. Such an analysis would better position the Army to target its efforts effectively to ensure units receive equipment back in a timely manner.

Figure: Patriot Equipment Reset Timeliness for Units, Fiscal Years 2014-2017

With respect to recapitalization, the Army has decided to recapitalize each battalion set of Patriot equipment once every 15 years to support the system's long-term viability through 2048, while recognizing that this approach introduces some challenges. The Army would prefer to recapitalize Patriot equipment every 10 years, but Army officials stated this is not feasible for the following reasons:

Reducing the amount of equipment for ongoing operational commitments to increase the pace of recapitalization is not feasible given current commitments and the projected security environment.

Buying extra equipment to provide to additional units undergoing recapitalization is not feasible because the Army has prioritized replacing the Patriot radar to improve its capability to defend against advanced threats.

Army officials told GAO that the current pace of recapitalization is not optimal and could introduce challenges, such as the possibility of equipment failure and increased maintenance costs. However, the Army has concluded that the current pace is the best path forward.
What GAO Recommends

GAO recommends that the Army conduct an analysis of the primary factors affecting the Patriot program's reset timeliness to identify their relative importance and develop and implement appropriate corrective actions. The Department of the Army concurred with GAO's recommendation.
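To illustrate the kind of relative-importance analysis recommended in this report, the sketch below aggregates notional delay records by factor and ranks the factors by their share of total attributed delay. The factor names echo those discussed in the report; all delay figures and their attribution to factors are invented for illustration.

```python
# Minimal sketch of a relative-importance analysis of reset delay factors.
# All delay figures and their attribution to factors are hypothetical.
from collections import defaultdict

# (battalion, factor, delay_days): notional records of attributed delay.
delay_records = [
    ("BN-1", "supply chain",    40), ("BN-1", "transportation", 25),
    ("BN-2", "quality defects", 10), ("BN-2", "supply chain",   55),
    ("BN-3", "transportation",  35), ("BN-3", "supply chain",   20),
]

totals = defaultdict(int)
for _battalion, factor, days in delay_records:
    totals[factor] += days

grand_total = sum(totals.values())
for factor, days in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{factor:15s} {days:3d} days ({days / grand_total:.0%} of attributed delay)")
```

Ranked this way, decision makers could see at a glance which factor accounts for the largest share of delay and target corrective actions accordingly.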
Long-Term Strategic Planning in Acquisitions Enables Better Tradeoff Decisions

Key elements of strategic planning include establishing long-term goals and strategies for how those goals are to be achieved. Specifically for managing Coast Guard acquisitions, we have noted that a long-term plan that includes acquisition implications would enable tradeoffs to be addressed in advance, which leads to better informed choices and makes debate possible before irreversible commitments are made to individual programs. Without this type of plan, decision makers do not have the information they need to understand and address an agency's long-term outlook. Similarly, according to the Office of Management and Budget's capital planning guidance referenced by the Coast Guard's Major Systems Acquisition Manual, each agency is encouraged to have a plan that justifies its long-term capital asset decisions. This plan should include, among other things, (1) an analysis of the portfolio of assets already owned by the agency and in procurement, (2) the performance gap and capability necessary to bridge the old and new assets, and (3) justification for new acquisitions proposed for funding. In June 2014, we found that the Coast Guard—a component within the Department of Homeland Security (DHS)—did not have a long-term fleet modernization plan that identified all acquisitions needed to meet mission needs over the next two decades within available resources. Without such a plan, the Coast Guard repeatedly delayed and reduced its capabilities through its annual budget process and did not know the extent to which it could meet mission needs and achieve desired results. We recommended that the Coast Guard develop a 20-year fleet modernization plan that identifies all acquisitions needed to maintain the current level of service and the fiscal resources necessary to build the identified assets. DHS agreed with our recommendation but has not yet approved a 20-year plan. Further, in July 2018, we found that the Coast Guard continues to manage its acquisitions through its annual budget process and the 5-year Capital Investment Plan, which is congressionally mandated and submitted to Congress annually. Coast Guard officials told us the Capital Investment Plan reflects the highest priorities of the department and that trade-off decisions are made as part of the annual budget process. However, the effects of these trade-off decisions, such as which acquisitions would take on more risk so others can be prioritized and adequately funded, are not communicated in the Capital Investment Plan to key decision makers. Over the years, this approach has left the Coast Guard with a bow wave of near-term unfunded acquisitions, negatively affecting recapitalization efforts and limiting the effectiveness of long-term planning. As a result of this planning process, the Coast Guard has continued to defer planned acquisitions to future years and has left a number of operational capability gaps unaddressed that could affect future operations. We recommended that the annual Capital Investment Plans reflect acquisition trade-off decisions and their effects. DHS concurred with this recommendation and plans to include additional information in future Capital Investment Plans to address how trade-off decisions could affect other major acquisition programs. According to Coast Guard officials, the Coast Guard plans to implement this recommendation by March 2020.
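The "bow wave" dynamic noted above can be shown with a simple notional calculation: when annual funding covers less than the planned acquisition costs, the unfunded balance rolls forward and compounds in later years. All dollar figures below are hypothetical.

```python
# Notional illustration of a "bow wave" of unfunded acquisitions.
# All dollar figures are hypothetical.
planned_per_year = [2.0, 2.2, 2.5, 2.4, 2.6]  # planned acquisition costs, $B
annual_budget = 2.0                            # available funding, $B

backlog = 0.0
for year, planned in enumerate(planned_per_year, start=1):
    need = planned + backlog           # this year's plan plus deferred work
    funded = min(need, annual_budget)
    backlog = need - funded            # the shortfall rolls into future years
    print(f"Year {year}: need ${need:.1f}B, funded ${funded:.1f}B, "
          f"deferred ${backlog:.1f}B")
```

Even modest annual shortfalls accumulate, which is why communicating trade-off decisions and their effects in the Capital Investment Plan matters for long-term planning.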
Examples of other fleet modernization plans include the Navy's annual naval vessel construction plan (also known as the Navy's long range shipbuilding plan), which reflects the quantity and categories of assets that the Navy needs to buy as well as the total number of assets in operation for each year. While we found in March 2006 that the Navy faced challenges associated with its long range shipbuilding plan, we also observed that such a plan is beneficial in that it lays out a strategic approach for decision making. In October 2016, NOAA—which is within the Department of Commerce—approved a fleet plan that is intended to identify an integrated strategy for long-term recapitalization, including acquisition of up to eight new ships. In March 2017, NOAA indicated that long-term recapitalization of the NOAA fleet requires an annual, stable funding profile on the order of its fiscal year 2016 appropriations—about $80 million. NOAA noted that it will continue to proceed on schedule, as laid out in its fleet plan, or make adjustments based on available funding.

Successful Acquisition Programs Begin with Sound Business Cases

Our prior work has repeatedly found that successful acquisition programs start with solid, executable business cases before setting program baselines and committing resources. A sound business case requires balance between the concept selected to satisfy operator requirements and the resources—design knowledge, technologies, funding, and time—needed to transform the concept into a product, such as a ship. At the heart of a business case is a knowledge-based approach—we have found that successful shipbuilding programs build on attaining critical levels of knowledge at key points in the shipbuilding process before significant investments are made (see figure 1). We have previously found that key enablers of a good business case include firm, feasible requirements; plans for a stable design; mature technologies; reliable cost estimates; and realistic schedule targets. Without a sound business case, acquisition programs are at risk of experiencing cost growth, schedule delays, and reduced capabilities. In September 2018, we found the Coast Guard did not have this type of sound business case when it established the cost, schedule, and performance baselines for its polar icebreaker program in March 2018. This was primarily due to risks in four key areas:

Technology. The Coast Guard intends to use proven technologies for the program, but did not conduct a technology readiness assessment to determine the maturity of key technologies—which include the integrated power plant and azimuthing propulsors—prior to setting baselines. As a result, the Coast Guard does not have full insight into whether these technologies, which we believe are critical technologies and merit such an assessment, are mature. Without a technology readiness assessment, the Coast Guard is potentially underrepresenting technical risk and increasing design risk.

Cost. The cost estimate that informed the program's $9.8 billion cost baseline—which includes lifecycle costs for the acquisition, operations, and maintenance of three polar icebreakers—substantially met our best practices for being comprehensive, well-documented, and accurate, but only partially met best practices for being credible. The cost estimate did not quantify the range of possible costs over the entire life of the program, such as the period of operations and support.
As a result, the cost estimate was not fully reliable and may underestimate the total funding needed for the program.

Schedule. The Coast Guard's planned delivery dates of 2023, 2025, and 2026 for the three ships were not informed by a realistic assessment of shipbuilding activities, but rather were primarily driven by the potential gap in icebreaking capabilities once the Coast Guard's only operating heavy polar icebreaker—the Polar Star—reaches the end of its service life (see figure 2). The Polar Star's service life is estimated to end between fiscal years 2020 and 2023. This creates a potential heavy polar icebreaker capability gap of about 3 years, if the Polar Star's service life were to end in 2020 and the lead polar icebreaker were to be delivered by the end of fiscal year 2023 as planned. If the lead ship is delivered later than planned in this scenario, the potential gap could be more than 3 years. The Coast Guard is planning to recapitalize the Polar Star's key systems starting in 2020 to extend the service life of the ship until the planned delivery of the second polar icebreaker (see figure 3). Further, our analysis of selected lead ships for other shipbuilding programs found the icebreaker program's estimated construction time of 3 years is optimistic. An unrealistic schedule puts the Coast Guard at risk of not delivering the icebreakers when promised, and the potential gap in icebreaking capabilities could widen.

Design. The Coast Guard set program baselines before conducting a preliminary design review—a systems engineering event that is intended to verify that the contractor's design meets the requirements of the ship specifications and is producible—which puts the program at risk of having an unstable design, thereby increasing the program's cost and schedule risks. Although the Coast Guard set the program baselines prior to gaining knowledge on the feasibility of the selected shipbuilder's design, it has expressed a commitment to having a stable design prior to the start of lead ship construction. This is consistent with shipbuilding best practices we identified in 2009.

To address these four areas and other risks, we made six recommendations to DHS, the Coast Guard, and the Navy in our September 2018 report. DHS concurred with all six recommendations and identified actions it planned to take to address them. In its October 2016 fleet plan, NOAA indicated the need to construct up to eight new ships by 2028 to maintain its capabilities for at-sea requirements. Ensuring a sound business case for each acquisition will be important as NOAA moves forward.

Leveraging Navy's Shipbuilding Experience May Create Efficiencies

Given the Navy's experience in shipbuilding, agencies have partnered with the Navy to take advantage of its expertise. For example, in April and September 2018, we found examples of how the Coast Guard had leveraged the Navy's resources and acquisition approaches when acquiring the polar icebreakers, including:

Establishing an integrated program office and potentially using funding from both organizations. In 2016, in response to a congressional report, the Navy and the Coast Guard established an integrated program office to acquire the icebreakers for Coast Guard operations. This relationship was officially memorialized through three memorandums in 2017.
Given potential plans to fund the polar icebreaker program with both Navy and Coast Guard appropriations, the Navy and the Coast Guard had a memorandum of agreement with a budgeting and financial management appendix. In September 2018, however, we found that the Coast Guard and the Navy interpreted the meaning of "cost overruns" differently in the context of their agreement. We also found that the agreement itself did not address how the Coast Guard and the Navy plan to handle any cost growth stemming from changes to the scope, terms, and conditions of the detail design and construction contract. We recommended that the Coast Guard, in collaboration with the Navy, revise the agreement to clarify and document how cost growth in the polar icebreaker program, including changes in scope, will be addressed between the two organizations. The Coast Guard concurred with this recommendation and plans to update the agreement by March 2019.

Establishing an integrated ship design team. The ship design team includes Coast Guard and Navy technical experts who develop ship specifications based on the polar icebreaker program's operational requirements document. The ship design team is under the supervision of a Coast Guard ship design manager, who provides all technical oversight for development of the polar icebreaker's design.

Leveraging Navy cost estimating and contracting functions. With input from the integrated program office and ship design team, Navy cost estimators developed the polar icebreaker program's cost estimate, which informed the program's cost baselines and affordability constraints. In addition, the Navy plans to award the polar icebreaker's detail design and construction contract under the Navy's contracting authority and use a tailored DHS acquisition process.

Supplementing the DHS acquisition process with the Navy's gate review process. The Coast Guard and the Navy agreed to manage the polar icebreaker program using a tailored acquisition approach that supplements DHS acquisition decision event reviews with additional "gate" reviews adopted from the Navy's acquisition processes. The gate reviews allow both Coast Guard and Navy leadership to review and approve key documents before proceeding to the acquisition decision events. Each acquisition decision event is also overseen by an acquisition oversight board with members from both the Coast Guard and the Navy (see figure 4).

By collaborating with the Navy, the Coast Guard is leveraging the Navy's experience in ship design, cost estimating, contracting, and other shipbuilding processes. This partnership may allow the Coast Guard to more efficiently manage the polar icebreaker program. In March 2017, NOAA indicated that it had partnered with the Navy through an interagency agreement to leverage the Navy's acquisition expertise for Auxiliary General Purpose Oceanographic Research Vessels, which will be the basis for a new class of NOAA ships. In April 2018, the Navy released the request for proposal for the preliminary contract design of this new class of ships.

Estimated Savings and Requirements Stability Should Be Considered When Selecting Contracting Mechanisms

When acquiring multiple quantities of a product, agencies generally have several options for contracting mechanisms. Annual contracting, which can be considered the typical method, refers to awarding a contract for one year's worth of requirements. Annual contracting allows for the use of options for subsequent requirements.
Options give the agency the unilateral right to purchase additional supplies or services called for by the contract, or to extend the term of the contract. Besides annual contracting with options, agencies may also be able to choose among other contracting mechanisms—multiyear contracting and "block buy" contracting—which are discussed in more detail below.

Multiyear Contracting Requirements and Considerations

Multiyear contracting allows agencies to acquire known requirements for up to 5 years under a single contract award, even though the total funds ultimately to be obligated may not be available at the time of contract award. Before DOD and the Coast Guard can enter into a multiyear contract, certain criteria must be met. Table 1 provides some of the multiyear contracting requirements for DOD and the Coast Guard. Multiyear contracts are expected to achieve lower unit costs compared to annual contracts through one or more of the following sources: (1) purchase of parts and materials in economic order quantities, (2) improved production processes and efficiencies, (3) better utilized industrial facilities, (4) limited engineering changes due to design stability during the multiyear period, and (5) cost avoidance by reducing the burden of placing and administering annual contracts. Multiyear procurement also offers opportunities to enhance the industrial base by providing contractors a longer and more stable time horizon for planning and investing in production and by attracting subcontractors, vendors, and suppliers. However, multiyear procurement entails certain risks that must be balanced against the potential benefits, such as the increased costs to the government should the multiyear contract be changed or canceled and decreased annual budget flexibility for the program and across an agency's portfolio of acquisitions. In February 2008, we found that it is difficult to precisely determine the impact of multiyear contracting on procurement costs. For example, for three multiyear procurements (the Air Force's C-17A Globemaster transport, the Navy's F/A-18E/F Super Hornet fighter, and the Army's Apache Longbow helicopter), we identified unit cost growth ranging from 10 to 30 percent compared to original estimates, due to changes in labor and material costs, requirements and funding, and other factors. In some cases, actual costs for the multiyear procurement were higher than original estimates for annual contracts. We noted that we could not determine how cost growth affected the level of savings achieved, if any, because we did not know how an alternative series of annual contracts would have fared. Although programs using annual contracts also have unit cost growth, it is arguably more problematic when using multiyear contracting because of the up-front investments and the government's exposure to risk over multiple years.

Block Buy Contracting Considerations

Block buy contracting generally refers to special legislative authority that agencies seek on an acquisition-by-acquisition basis to purchase more than one year's worth of requirements, such as purchasing supplies in economic order quantities. Unlike multiyear contracting, block buy contracting does not have permanent statutory criteria and, therefore, can be used in different ways. We have previously analyzed several cases where block buy contracts were considered or used and have not found evidence of savings.
For example:

In September 2018, we found that for the polar icebreaker program, the Navy gave offerors an opportunity to provide the estimated savings that the government could achieve if it were to take a "block buy" approach in purchasing the ships or purchasing supplies in economic order quantities. The Navy told us that it did not receive any formal responses from industry on potential savings from block buys or economic order quantities.

In April 2017, we found that the Navy's Littoral Combat Ship contracts' block buy approach could affect Congress's funding flexibility. Specifically, the block buy contracts provided that a failure to fully fund a purchase in a given year would make the contract subject to renegotiation, which provides a disincentive to the Navy or Congress to take any action that might disrupt the program because of the potential for the government to have to pay more for ships.

In February 2005, we found that the Navy believed that a block-buy contract contributed to increased material costs for the Virginia class submarine. Under this block-buy contract, subcontracts for submarine materials were for single ships spread over several years. According to the Navy, this type of acquisition approach did not take advantage of bulk-buy savings and incurred the risk that funding would not be available in time to order the material when needed.

Based on our prior work, it is important for agencies to consider multiple factors such as estimated savings, the stability of the requirements, quantities required, and potential contract terms and conditions before committing to a contracting mechanism approach. In conclusion, as the Coast Guard and NOAA continue investing taxpayer dollars to modernize their fleets, they could benefit from the lessons learned from prior recapitalization and acquisition efforts. It is important for agencies to develop strategic and comprehensive approaches for managing their respective portfolios so that future requirements and capability gaps can be addressed in a timely manner. For each acquisition within their portfolios, agencies should ensure that they have established a sound business case before committing significant resources. Additionally, leveraging the Navy's resources and expertise in shipbuilding, such as by establishing integrated teams, could help agencies be more efficient. Finally, when it comes to contracting mechanisms, factors such as estimated savings and program risks should be assessed before committing to a particular approach. Chairman Sullivan, Ranking Member Baldwin, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this statement, please contact Marie A. Mak, (202) 512-4841 or makm@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Rick Cederholm, Assistant Director; Peter Anderson; Laurier Fish; Kurt Gurka; Claire Li; and Roxanna Sun. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Both the Coast Guard—a component of the Department of Homeland Security (DHS)—and the Department of Commerce's National Oceanic and Atmospheric Administration (NOAA) are investing significant resources to recapitalize their aging fleets of ships. Ensuring that the Coast Guard and NOAA maintain their ships and address potential capability gaps is vital for protecting national security and scientific interests. This statement summarizes lessons that GAO has identified from its prior reviews of Coast Guard and Navy acquisitions, which can be applied to the Coast Guard's and NOAA's shipbuilding efforts. Specifically, this testimony provides information on, among other things, (1) long-term strategic planning for acquisitions, (2) the need for a sound business case, and (3) the leveraging of the Navy's acquisition resources and shipbuilding expertise. In its prior work, GAO reviewed Coast Guard and Navy programs and interviewed officials. For this testimony, GAO obtained publicly available information on NOAA's ship acquisition efforts.

What GAO Found

GAO has found that acquisition programs can benefit from long-term strategic planning that identifies how tradeoff decisions would affect the future of the acquisition portfolio. In July 2018, GAO found the Coast Guard continues to manage its acquisitions through its annual budget process and the 5-year Capital Investment Plan. As a result of this planning process, the Coast Guard has continued to defer planned acquisitions to future years and left a number of operational capability gaps unaddressed. Incorporating the use of a long-term strategic plan and additional tradeoff discussion into the Capital Investment Plan could lead to more informed choices before irreversible commitments are made. GAO's prior work has also found that acquisition programs should start with solid business cases before setting program baselines and committing resources. At the heart of a business case is a knowledge-based approach—successful shipbuilding programs build on attaining critical levels of knowledge at key points in the shipbuilding process before significant investments are made (see figure). In September 2018, GAO found the Coast Guard did not have this type of sound business case when it established the program baselines for its polar icebreaker program in March 2018 due to risks in technology, design, cost, and schedule. For example, the Coast Guard's planned delivery dates were not informed by a realistic assessment of shipbuilding activities, but rather were primarily driven by the potential gap in icebreaking capabilities once the Coast Guard's only operating heavy polar icebreaker reaches the end of its service life. Agencies have partnered with the Navy to take advantage of its resources and shipbuilding expertise, including the Coast Guard when acquiring the polar icebreakers. For example, in September 2018, GAO found that the Coast Guard and the Navy had established an integrated program office and a ship design team. These teams provided input to Navy cost estimators, who developed the polar icebreaker program's cost estimate.

What GAO Recommends

GAO has previously recommended that the Coast Guard develop a 20-year fleet modernization plan, reflect acquisition trade-off decisions in its annual Capital Investment Plans, and address risks to establish a sound business case for its polar icebreakers acquisition. DHS concurred with these recommendations and is taking steps to implement them.
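As a rough illustration of the savings-versus-risk tradeoff discussed in this statement, the sketch below compares a notional series of annual contracts with a single multiyear contract at an assumed unit-cost savings rate, and notes the exposure if the multiyear contract were canceled early. All quantities, unit costs, and the savings rate are hypothetical, not drawn from any program GAO reviewed.

```python
# Notional comparison of annual vs. multiyear contracting.
# Quantities, unit costs, and the savings rate are hypothetical.
unit_cost = 100.0        # $M per unit under annual contracts
units_per_year = 4
years = 5
assumed_savings = 0.08   # assumed multiyear unit-cost savings rate

annual_total = unit_cost * units_per_year * years
multiyear_total = annual_total * (1 - assumed_savings)
print(f"Annual contracts:   ${annual_total:,.0f}M")
print(f"Multiyear contract: ${multiyear_total:,.0f}M "
      f"(${annual_total - multiyear_total:,.0f}M estimated savings)")

# Risk side: canceling the multiyear contract can leave the government
# liable for up-front investments already made, such as parts bought in
# economic order quantities for out-years (hypothetical figure).
eoq_investment = 60.0
print(f"Potential cancellation exposure: up to ${eoq_investment:,.0f}M")
```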
Background

Although the federal government has undertaken numerous initiatives to better manage the billions of dollars that federal agencies annually invest in IT, these investments too frequently fail or incur cost overruns and schedule slippages, while contributing little to mission-related outcomes. We have previously reported that the federal government has spent billions of dollars on failed IT investments. These investments often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. As a result of these failures, we added Improving the Management of IT Acquisitions and Operations to our biennial high-risk list in 2015. With its enactment in 2014, FITARA was also intended to improve agencies' acquisitions of IT and facilitate Congress' efforts to monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. The act included specific provisions related to seven areas, including the five areas selected for our review:

CIO authority enhancements—Covered agencies' CIOs are required to (1) approve the IT budget requests of their respective agencies, (2) certify that agencies' IT investments are adequately implementing OMB's incremental development guidance, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO (e.g., component agency CIOs).

Enhanced transparency and improved risk management in IT investments—OMB and covered agencies are to make detailed information on federal IT investments publicly available, and department-level CIOs are to categorize their major IT investments by risk. Additionally, in the case of major investments rated as high risk for 4 consecutive quarters, the act required that the department-level CIO and the investment's program manager conduct a review aimed at identifying and addressing the causes of the risk.

Portfolio review—OMB and the CIOs of covered agencies are to implement a process to assist agencies in reviewing their portfolios of IT investments. This review process is intended to, among other things: identify or develop opportunities to consolidate the acquisition and management of IT services; identify potential duplication, waste, and cost savings; and develop a multi-year strategy to identify and reduce duplication and waste within the agencies' portfolios, including component agency investments, and to identify projected cost savings resulting from such a strategy.

Federal data center consolidation initiative—Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing the data centers (to include planned cost savings), and quarterly updates on progress made. The act also requires OMB to develop a goal for how much is to be saved through this initiative and provide annual reports on cost savings achieved.

Government-wide software purchasing program—GSA is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law states that, to the maximum extent practicable, GSA should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user.
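The act's review trigger for persistently high-risk investments, described above, amounts to a simple rule: any major IT investment rated high risk for 4 consecutive quarters requires a review by the department-level CIO and the investment's program manager. A minimal sketch of that rule, using hypothetical investment names and quarterly ratings:

```python
# Minimal sketch of FITARA's review trigger: a major IT investment rated
# high risk for 4 consecutive quarters requires a CIO-led review.
# Investment names and quarterly ratings are hypothetical.
ratings = {
    "Investment A": ["high", "high", "high", "high", "medium"],
    "Investment B": ["medium", "high", "high", "medium", "high"],
}

def needs_review(quarterly_ratings, window=4):
    """Return True if any run of `window` consecutive ratings is 'high'."""
    run = 0
    for rating in quarterly_ratings:
        run = run + 1 if rating == "high" else 0
        if run >= window:
            return True
    return False

for name, history in ratings.items():
    print(f"{name}: review required = {needs_review(history)}")
```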
GAO Has Previously Reported on Agencies' FITARA Implementation and Identified Areas for Improvement

We have issued a number of reports that have identified actions that OMB and federal agencies needed to take to improve their implementation of the FITARA provisions. In reporting on incremental software development in November 2017, we noted that department-level CIOs certified only 62 percent of major IT software development investments as implementing adequate incremental development in fiscal year 2017. Officials from 21 of the 24 agencies in our review reported that challenges had hindered their CIOs' ability to implement incremental development. These challenges included (1) inefficient governance processes, (2) procurement delays, and (3) organizational changes associated with transitioning from a traditional software methodology that takes years to deliver a product to incremental development, which delivers products in shorter time frames. We made recommendations to department-level CIOs to improve reporting accuracy and update or establish certification policies. As of February 2019, agencies had taken steps to address eight of the 19 recommendations. Additionally, our August 2018 report on department-level CIOs noted that none of the 24 agencies had policies that fully addressed the role of their CIOs consistent with federal laws and guidance, including FITARA. In addition, the majority of the agencies had not fully addressed the roles of their CIOs for any of six key areas that we identified. Although officials from most agencies stated that their CIOs were implementing the responsibilities even when not addressed in policy, the 24 CIOs acknowledged in a survey that they were not always very effective in implementing all of their responsibilities. Further, the shortcomings in agencies' policies were attributable, at least in part, to incomplete guidance from OMB. We noted that, until OMB improved its guidance to clearly address all CIO responsibilities, and agencies fully addressed the role of CIOs in their policies, CIOs would be limited in effectively managing IT and addressing long-standing management challenges. We made 27 recommendations for agencies to improve the effectiveness of CIOs' implementation of their responsibilities. Most agencies agreed with the recommendations and described actions they planned to take to address them.

Enhanced transparency and improved risk management

In June 2016, we reported on rating the risk of IT investments and noted that agencies underreported the risk of almost two-thirds of the investments their CIOs reviewed. All 17 selected agencies incorporated at least two of OMB's factors into their risk rating processes, and nine used all of the factors, though agencies interpreted the factors differently and some applied them less often than monthly. Our assessments generally showed more risk than the associated CIO ratings. We also issued a series of reports about the IT Dashboard that noted concerns about the accuracy and reliability of the data on the Dashboard. In total, we have made 25 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the Dashboard and to increase its availability. Most agencies agreed with the recommendations or had no comments. As of February 2019, 11 of these recommendations remained open. In April 2015, we reported on actions needed by 26 federal agencies to ensure portfolio savings were realized and tracked.
We noted that these agencies had decreased their planned PortfolioStat savings by at least 68 percent from what they reported to us in 2013. Specifically, while the agencies initially had planned to save at least $5.8 billion between fiscal years 2013 and 2015, these estimates were decreased to approximately $2 billion. We made recommendations to OMB and the Department of Defense aimed at improving the reporting of achieved savings, documenting how savings are reinvested, and establishing time frames for PortfolioStat action items. As of February 2019, OMB had addressed one of the five recommendations. Our September 2016 report on application inventories noted that most of the 24 agencies in the review fully met at least three of the four practices we identified to determine if agencies had complete software application inventories. Additionally, six of the agencies relied on their investment management processes and, in some cases, supplemental processes to rationalize their applications to varying degrees. However, five of the six agencies acknowledged that their processes did not always allow for collecting or reviewing the information needed to effectively rationalize all their applications. We made recommendations that 20 agencies improve their inventories and that five of the agencies take actions to improve their processes to rationalize their applications more completely. Agencies had addressed four of the 25 recommendations as of February 2019.

Federal data center consolidation initiative

We have reported annually on agencies' efforts to meet FITARA requirements related to the federal data center consolidation initiative. For example, in March 2016 we reported that, as of November 2015, the 24 agencies participating in the initiative had identified a total of 10,584 data centers, of which they reported closing 3,125 through fiscal year 2015. In total, 19 of the 24 agencies reported achieving an estimated $2.8 billion in cost savings and avoidances from fiscal years 2011 to 2015. We recommended that 10 agencies take action to address challenges in establishing, and to complete, planned data center cost savings and avoidance targets. We also recommended that 22 agencies take action to improve optimization progress, including addressing any identified challenges. As of February 2019, agencies had addressed 14 of our 32 recommendations. Our May 2018 report on data center consolidation noted mixed progress toward achieving OMB's goals for closing data centers by September 2018. Over half of the agencies reported that they had either already met, or planned to meet, all of their OMB-assigned goals by the deadline. This was expected to result in the closure of 7,221 of the 12,062 centers that agencies reported in August 2017. However, four agencies reported that they did not have plans to meet all of their assigned goals, and two agencies were working with OMB to establish revised targets. No new recommendations were made to agencies in this report because agencies had yet to fully address our previous recommendations.

In May 2014, we reported on 24 federal agencies' management of software licenses and the potential for achieving significant savings government-wide. Specifically, we found that OMB and the vast majority of the 24 agencies reviewed did not have adequate policies for managing software licenses. We also reported that federal agencies were not adequately managing their software licenses because they generally did not follow leading practices in this area.
Consequently, we could not accurately describe the most widely used software applications across the government, including the extent to which they were over- and under-purchased. We recommended that the 24 agencies improve their policies and practices for managing licenses. Most agencies generally agreed with the recommendations or had no comments. We then reported in September 2014 that the 24 agencies had either provided a plan to address most of the recommendations we made to them, partially disagreed with the report’s prior findings, or did not provide information on their efforts to address the recommendations. As of February 2019, the agencies had addressed 109 of the 136 recommendations. Selected Agencies Identified Practices That Facilitated Effective Implementation of FITARA Provisions The nine selected agencies identified a total of 12 practices that helped them to successfully implement the FITARA provisions considered in our review. Among the practices, a number of the agencies identified four that were overarching—that is, the practices were not unique to a specific provision, but, instead, better positioned agencies to implement the five provisions selected for our review. In addition, agencies identified one practice that helped ensure effective implementation of CIO authority enhancements, one practice that helped ensure enhanced transparency and improved risk management, one practice that ensured effective portfolio review, four practices that facilitated data center consolidation, and one practice that facilitated software purchasing. Figure 1 identifies the 12 practices that the nine agencies used to effectively implement the selected FITARA provisions. In addition, the narrative following the figure provides details on how these agencies implemented the provisions and realized associated IT management improvements or cost savings. Overarching Practices Vital to Implementing FITARA Four of the nine agencies that we reviewed—Commerce, HHS, NASA, and USDA—identified one or more overarching practices that have been vital to their efforts in implementing FITARA: obtain support from senior leadership, treat the implementation of FITARA as a program, establish FITARA performance measures for component agencies, and appoint an executive accountable for FITARA implementation in each component agency. As a result of implementing these practices, each of the agencies was better positioned to implement FITARA. Obtain support from senior leadership Three of the agencies—USDA, NASA, and Commerce—emphasized that the support of senior leadership was essential to implementing requirements in FITARA. This support was demonstrated, for example, by senior officials highlighting the act’s importance during key executive-level meetings and in their key memorandums and other communications to the agencies’ workforce. We have previously reported that having senior leadership support is critical to the success of major programs. According to USDA’s Director of FITARA Operations, the agency made a decision to raise the topic of FITARA implementation at each monthly executive leadership meeting that is attended by the Deputy Secretary, Chief Operating Officer, and Assistant Secretary for Administration, in order to keep attention focused on the act’s implementation.
In addition, the agency’s October 2016 Concept of Operations for the Oversight, Management, and Operations of FITARA document, which is the primary document used by the agency to assist with the implementation and execution of the act, was signed by the Deputy Secretary, CIO, and Deputy CIO. The officials reported that obtaining support from senior leadership had helped ensure buy-in to changes resulting from implementing provisions of the act. NASA officials also highlighted senior leadership support as being essential to their actions to implement FITARA. For example, the NASA Deputy Administrator and Associate Administrator for Mission Support signed and distributed a memorandum in August 2010 that emphasized the agency’s commitment to the data center consolidation effort. The memorandum stated that Mission Directorate Associate Administrators and Center Directors shall direct their staff to cooperate fully and openly with NASA’s data center consolidation plan. An official in the Office of the CIO stated that the memorandum was evidence of the support the agency had from senior leadership to close data centers. Further, a Commerce official stated that FITARA implementation activities at the agency have had support from agency leadership, including the Deputy Secretary and the CIO. For example, according to the official, the Deputy Secretary provided each of the component agency FITARA sponsors with a signed memorandum asking for assistance from the components. This action resulted in increased cooperation throughout the agency when components were asked to respond to FITARA-related requests for information. Treat implementation of FITARA as a program Commerce and USDA reported that treating FITARA implementation as if it were an IT program was important to implementing the requirements of the act. The two agencies demonstrated this practice by assigning staff to manage implementation of FITARA and regularly discussing implementation of the act at meetings with senior-level officials. According to a Commerce lessons learned document, the agency has managed FITARA like a program by reporting regularly on its implementation status to internal agency stakeholders. In addition, the agency has assigned a program manager to assist with implementation of the act and to track progress on implementing the act’s provisions. As a result, Commerce officials reported that the importance of FITARA has been regularly discussed throughout the agency in bi-weekly meetings within the Office of the Secretary. These meetings led to an increased sense of cooperation between different disciplines (e.g., IT, budget, acquisition, legal, and human resources) and reduced the impression that FITARA was solely focused on the department-level CIO office. Further, USDA created the position of Executive Director for FITARA Operations within the department-level CIO office. This position has responsibility for, among other things, establishing the processes and procedures to bring the agency into compliance with the act and IT management controls that meet the FITARA requirements. The Director stated that treating the implementation of FITARA as if it were an IT program has led the agency to develop key documentation that has assisted in the implementation of the act, including its Concept of Operations for the Oversight, Management, and Operations of FITARA and Data Center Closure Process.
Establish FITARA performance measures for component agencies HHS established internal FITARA performance measures for its component agencies that officials believe have led to increased effectiveness in implementing the act. Specifically, the agency undertook an effort to increase its FITARA scorecard grades—called “A by May”—with a goal to attain an ‘A’ on the May 2018 FITARA 6.0 scorecard. As part of this effort, HHS created its own internal scorecard for each of its component agencies that mirrored the agency’s FITARA scorecard. According to an HHS lessons learned document, aligning the FITARA metrics to component agency performance resulted in greater transparency between the department-level CIO and component agency CIOs. The effort to establish internal performance measures received support from senior agency leadership. Specifically, it was endorsed by the Assistant Secretary for Administration and the Principal Deputy for Administration, which agency officials believed was a key factor in the effort’s success. HHS officials also reported that their internal scorecard was helpful because it let component agencies know how well they were doing relative to each other. The officials also believed that establishing FITARA performance measures led to increased cooperation and communication between component agencies and the department-level CIO office. For example, the increased cooperation allowed HHS to more easily collect data required to update the House Committee on Oversight and Government Reform’s FITARA scorecard. At the December 2018 House Committee on Oversight and Government Reform hearing on FITARA, the HHS Acting CIO attributed the agency’s increased scorecard grade—from a ‘D’ on the initial November 2015 scorecard to a ‘B+’ on the December 2018 scorecard—to the “A by May” initiative. According to this official, the measurement of component agencies’ performance had elevated the importance of meeting FITARA objectives and paved the way for agency-wide participation in improvement efforts. Appoint an executive accountable for FITARA implementation in each component agency According to a Commerce memorandum, the Assistant Secretary for Administration asked each component agency to identify a FITARA executive sponsor. The sponsors were assigned responsibility for gathering the necessary information on component agencies’ efforts to implement FITARA and for alerting the agency’s CIO of any issues that needed to be addressed. Once the sponsors were identified, the Commerce Deputy Secretary sent a letter to each sponsor, asking them to help ensure cooperation between their component agencies and the department’s CIO office. A Commerce official reported that having a sponsor in component agencies with responsibility for providing the information needed to report on FITARA results to the department’s CIO office had increased component agencies’ responsiveness to information requests and improved cooperation throughout the agency. CIO Authority Enhancements Commerce and DHS developed policies to explain how the specific authorities that FITARA provided to the agency CIO are to be carried out. The agencies identified the policies as essential to their ability to implement the CIO authority enhancements provision in FITARA. Commerce officials stated, for example, that their agency established a policy to ensure that the CIO certified major IT investments as adequately implementing incremental development.
Specifically, Commerce’s capital planning guidance required component agency CIOs or other accountable officials within the component agencies to certify the adequate implementation of incremental development for these investments. Commerce’s guidance described the role of the CIO in the certification process and how the CIOs’ certification should be documented. The guidance also included definitions of incremental development and time frames for delivering functionality. Officials in Commerce’s Office of the CIO reported that the certification policies assisted them in overseeing the management of IT investments and ensuring the use of incremental development throughout the agency, as called for by FITARA. Also, Commerce changed its personnel policy to require the department-level CIO to approve all senior level IT positions, which addressed the FITARA requirement for the CIO to approve the appointment of other staff with the title of CIO (e.g., component agency CIOs). Specifically, in February 2016, Commerce developed a new human capital policy to give its department-level CIO input into the hiring of all senior level IT positions, including component CIOs. As a result, a Commerce official reported that the policy ensures that the CIOs’ authority has been enhanced to include significant involvement in the hiring of IT leaders throughout the agency. For its part, DHS established a policy to ensure that the department-level CIO certified major IT investments as adequately implementing incremental development. Specifically, DHS’s technical investment review guidance states that the CIO is to conduct a review of each investment using an investment review checklist that includes information provided by project managers as to whether the investments have used incremental development adequately. The CIO is to certify whether the project is implementing incremental delivery at least every 6 months and is to document this certification in the checklist. As a result, officials in DHS’s Office of the CIO said that they can now use information from the incremental certification checklist to improve incremental development processes and to make corrections to projects that were not adequately implementing incremental development. Enhanced Transparency and Improved Risk Management Three agencies—Commerce, DHS, and USDA—identified one practice that was key to their effective implementation of the enhanced transparency and improved risk management provision of FITARA. The practice is to implement a risk rating process for IT investments that incorporates risks (e.g., funding cuts or staffing changes). Commerce’s Office of the CIO implemented a process where this office reviewed at least the top three risks for each investment, verified that these risks were specific to the investment and were appropriately managed and mitigated, and verified that the risk register was updated regularly. In addition, DHS implemented a process that included a review of investment risks and ensured that the risks were current and that risk mitigation plans were in place. Also, in November 2017, USDA updated its risk rating process to incorporate risks. Specifically, it updated its risk management scoring criteria to include an evaluation of the management and risk exposure scores of risks. The actions that Commerce, DHS, and USDA took to incorporate reviews of risks into their risk rating processes better positioned the agencies to provide more detailed and accurate information on their IT investments to the public.
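The review steps that Commerce and DHS describe can be made concrete with a small example. The sketch below is a hypothetical illustration, not either agency's actual tooling: the record fields, the 30-day freshness threshold, and the investment data are all invented. It simply shows the kind of check an Office of the CIO might run to verify that an investment's top risks carry mitigation plans and that its risk register has been updated recently.

```python
from datetime import date

# Hypothetical risk register for one IT investment.
# Field names and the 30-day freshness threshold are illustrative only.
investments = [
    {
        "name": "Case Management Modernization",
        "register_updated": date(2019, 1, 15),
        "top_risks": [
            {"risk": "Funding cut in FY19", "mitigation_plan": True},
            {"risk": "Key staff departures", "mitigation_plan": True},
            {"risk": "Vendor schedule slip", "mitigation_plan": False},
        ],
    },
]

def review(investment, as_of, max_age_days=30):
    """Flag an investment if its register is stale or a top risk lacks a plan."""
    findings = []
    age = (as_of - investment["register_updated"]).days
    if age > max_age_days:
        findings.append(f"register not updated in {age} days")
    for r in investment["top_risks"][:3]:  # review at least the top three risks
        if not r["mitigation_plan"]:
            findings.append(f"no mitigation plan for: {r['risk']}")
    return findings

for inv in investments:
    issues = review(inv, as_of=date(2019, 2, 1))
    status = "needs CIO attention" if issues else "rated as managed"
    print(inv["name"], "-", status, issues)
```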
Portfolio Review Four of the agencies—GSA, Justice, DHS, and USAID—identified performing application rationalization activities as vital to their effective implementation of the portfolio review provision of FITARA. Application rationalization activities can include establishing a software application inventory, collecting information on each application, or evaluating an agency’s portfolio of IT investments to make decisions on applications (e.g., retire, replace, or eliminate). We have previously reported that the principles of application rationalization are consistent with those used to manage investment portfolios. GSA and Justice performed application rationalization by engaging in efforts to establish complete and regularly updated application inventories. To do so, component agencies specified basic application attributes in their inventories (e.g., application name, description, owner, and function supported), and regularly updated the inventories. As we have previously reported, by having an application inventory that is complete and regularly updated, agencies such as GSA and Justice are better positioned to realize cost savings and efficiencies through activities such as consolidating redundant applications. For its part, DHS utilized application rationalization to identify duplicate investments and consolidate systems. Part of the effort included the regular assessment of programs against criteria such as the program’s cost, schedule, and performance relative to established targets. According to the agency, this resulted in the consolidation of site services, including help desk operations. DHS reported that this consolidation resulted in savings that cumulatively accrued to $202 million by fiscal year 2015. In addition, as an application rationalization activity, USAID reviewed its portfolio of IT investments in order to identify systems to potentially retire or decommission—a requirement of the portfolio review provision of FITARA. Specifically, the agency developed an information system decommissioning plan to retire old systems. The plan described USAID’s three-step approach to decommissioning systems: (1) identifying decommissioning candidates, (2) conducting system reviews and decommissioning decisions, and (3) decommissioning planning and execution. As a result of this approach to implementing the portfolio review provision of FITARA, the agency reported in its Information Systems Decommissioning Plan that it has decommissioned 78 old systems and identified additional systems to decommission in future years. Agency officials reported that USAID achieved cost savings of almost $10 million since 2016 as a result of decommissioning systems. Data Center Consolidation GSA, Justice, NASA, USAID, and USDA identified four practices that were essential to their effective implementation of the data center consolidation provision of FITARA and resulted in agencies realizing cost savings or other IT management improvements: conduct site visits to all data centers, transition to a virtual or cloud-based environment, incentivize component agencies to accelerate the pace of data center consolidation, and utilize data centers with excess capacity. Agencies’ actions to implement these practices have led to the retirement of older systems, increased cost savings and future cost avoidance, and a reduction in the number of data centers. In addition, as a result of applying these practices, the agencies were better able to make progress in consolidating and optimizing data centers.
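Progress under the data center provision is typically expressed as closures and savings measured against assigned targets. As a hypothetical illustration only, the sketch below rolls up a small invented inventory the way an agency might when reporting closures and cost savings toward an OMB-assigned goal; the field names, figures, and target are not drawn from any agency's data.

```python
# Hypothetical data center inventory; fields and figures are illustrative.
centers = [
    {"id": "DC-001", "tier": "tiered", "status": "closed", "annual_savings": 1_200_000},
    {"id": "DC-002", "tier": "non-tiered", "status": "closed", "annual_savings": 300_000},
    {"id": "DC-003", "tier": "tiered", "status": "open", "annual_savings": 0},
    {"id": "DC-004", "tier": "non-tiered", "status": "open", "annual_savings": 0},
]
closure_goal = 3  # hypothetical OMB-assigned closure target

closed = [c for c in centers if c["status"] == "closed"]
savings = sum(c["annual_savings"] for c in closed)
print(f"Closed {len(closed)} of {len(centers)} centers "
      f"({len(closed)}/{closure_goal} toward goal); "
      f"annual savings: ${savings:,}")
```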
Conduct site visits to all data centers USDA and Justice conducted site visits to all of their data centers to more effectively address the data center provision of FITARA. Both agencies stated that the site visits had allowed them to more thoroughly document the inventory of applications and IT hardware in each of the data centers and to validate progress made toward closing data centers. USDA officials stated that conducting site visits to their data centers played a pivotal role in the successful implementation of data center consolidation by providing more direct communication with data center staff to address concerns and issues that staff had about consolidation of the centers. Additionally, agency officials reported that they were able to obtain more detailed information necessary to meet the FITARA requirements for reporting to OMB on USDA’s data center inventory and progress made on data center closures as a result of conducting site visits. Further, Justice officials stated that site visits conducted by staff in the CIO’s office who were responsible for data center consolidation played a key role in the closure of many of the agency’s data centers. Specifically, the officials said that conducting site visits in person showed data center staff that data center consolidation was a priority for the agency. The officials added that the site visits also showed data center staff that they were valued as partners in the consolidation effort. Transition to a virtual or cloud-based environment USDA, GSA, NASA, and USAID have taken actions to transition to a virtual or cloud-based environment as a way to effectively implement the data center consolidation provision of the act. The agencies’ actions consisted of moving data from agency-owned data centers to cloud-based environments, which helped the agencies make progress toward meeting the cost savings and data center optimization requirements of FITARA. USDA officials reported that the agency has been successful in having its components use cloud technology to reduce the number of data centers. For example, the USDA Forest Service developed a migration strategy to move all of the Forest Service production systems and applications from its data centers to USDA’s Enterprise Data Center and Cloud Infrastructure as a Service located at the National Information Technology Center in Kansas City, Missouri. As a result of moving its production systems and applications, the Forest Service increased virtualization, resolved many long-term security vulnerabilities, and reduced the number of duplicative and stand-alone applications by 70 percent. The Forest Service reported that it had identified cost savings of up to $6.1 million annually as a result of these efforts. In addition, GSA developed a data center consolidation strategy, which included migrating services from agency-owned data centers to more flexible and optimized cloud computing environments, shared service and co-location centers, and more optimized data centers within its own inventory. For example, the agency migrated numerous systems to provisioned services via cloud computing services. GSA officials reported that their agency has encouraged virtualization and cloud computing as preferred options above new physical implementations. The agency also continues to migrate away from hardware-dependent operating systems and to utilize, build upon, and mature its enterprise service virtualization platform offerings and capabilities.
As a result of these actions, the agency has been able to more effectively retire older systems in order to shift them to newer, virtualized technologies. NASA officials stated that their agency is transitioning to a cloud-based environment to close its data centers. For example, NASA moved all of the data from the Earth Observing System to a new commercial cloud-based model that hosts all the data in one location. The Earth Observing System was designed over a decade ago; its data were held at different partner locations based on science discipline (e.g., land, oceans, and atmosphere) and were used by the public in various capacities. The agency funded data center hardware at each of the locations and transported data between the locations, as necessary, to create integrated data products. According to NASA officials, transitioning to a cloud-based environment has resulted in easier access to NASA data by the public, elimination of recurring capital investments in data center hardware, and improved IT security. USAID reported that it saved money and increased efficiency by consolidating all of its data centers into a single data center in 2012 and then transitioning its single data center to a cloud-based environment. USAID completed the migration of its data center to the cloud in June 2018. According to the agency, moving to the cloud is expected to result in $36 million in future cost avoidance for the agency. Incentivize component agencies to accelerate the pace of data center consolidation Data center consolidation activities can be costly, requiring agencies to use resources to, for example, analyze the need for IT equipment (e.g., servers, processors, networking, and other hardware) and to move such equipment between locations. Our May 2018 report on the results of agencies’ efforts to consolidate data centers noted mixed progress toward achieving OMB’s goals for closing data centers. Justice incentivized a component agency to accelerate its participation in data center consolidation by providing supplemental funding for costs associated with consolidation. For example, the agency’s CIO office provided funding for a component agency to offset the cost to move servers and data center equipment to another location. Justice officials noted that the agency has seen increased cooperation from component agencies as a result of offering supplemental funding to participate in its data center consolidation effort. Utilize data centers with excess capacity A part of GSA’s strategy for consolidating data centers was to move existing data to other government data centers that had the capacity to store its data. To do so, GSA established shared service agreements with the Environmental Protection Agency’s National Computer Center and NASA’s Stennis Space Center data centers. As a result of moving its data to other government data centers with excess capacity, GSA was able to consolidate numerous data centers, resulting in increased efficiency and cost savings. Software Purchasing USDA, VA, GSA, NASA, and USAID identified the practice of centralizing the management of software licenses as essential to their effective implementation of the software purchasing provision of FITARA. These five agencies did this by, for example, establishing a software management team, creating contracts with vendors to centrally manage licenses, and establishing governance processes for software license management.
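Before turning to how individual agencies applied this practice, a small sketch may help show what centralized license analysis involves. The contracts and prices below are invented, but the logic, grouping component-level contracts by product and pricing the combined demand at the lowest negotiated rate, mirrors the consolidation decisions described in the paragraphs that follow.

```python
from collections import defaultdict

# Hypothetical component-level license contracts; all figures are invented.
contracts = [
    {"component": "A", "product": "OfficeSuite", "licenses": 200, "unit_price": 250.00},
    {"component": "B", "product": "OfficeSuite", "licenses": 120, "unit_price": 180.00},
    {"component": "C", "product": "OfficeSuite", "licenses": 43, "unit_price": 15.75},
]

by_product = defaultdict(list)
for c in contracts:
    by_product[c["product"]].append(c)

for product, deals in by_product.items():
    current_cost = sum(d["licenses"] * d["unit_price"] for d in deals)
    best_price = min(d["unit_price"] for d in deals)
    consolidated = sum(d["licenses"] for d in deals) * best_price
    print(f"{product}: consolidating at ${best_price}/license "
          f"saves ${current_cost - consolidated:,.2f} per year")
```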
USDA employed a centralized software license management approach by establishing a Category Management Team. This team was responsible for the oversight of all software license enterprise agreements, which included collecting, reviewing, consolidating, and reporting on all software procurements. The agency also created Enterprise IT Category Management guidance that supported the central oversight authority for managing enterprise software license agreements. Further, according to USDA officials, management has been supportive in ensuring that all organizations and components join existing enterprise contracts that are already in place. USDA’s actions to centralize the management of its software licenses have led to effective agency-wide decisions regarding software purchases that the agency reported have yielded cost savings. For example, the agency identified instances where multiple software contracts at different price points among component agencies could be consolidated into one contract at the lowest price. This resulted in reducing the cost per license for a software product from $250 to $15.75, saving the agency approximately $85,000 between 2016 and 2017, according to USDA documentation. VA established an Enterprise Software License Management Team to centralize the management of its efforts to purchase software. According to officials in VA’s Office of Information and Technology, this team consisted of knowledgeable staff who had experience with software management and development, and was familiar with software that was deployed across the entire agency. These officials also stated that the Enterprise Software License Management Team conducted weekly meetings with GSA to discuss software licensing and category management to ensure they were aware of other opportunities for cost savings. VA also established an Enterprise Software Asset Management Technical Working Group that was formed to define and document a framework that employed a centralized software license management approach. By centralizing the management of its software licenses, VA has been able to make effective agency-wide decisions regarding the purchase of software products and reported that it has realized cost savings. Specifically, VA provided documentation showing that it had implemented a solution to analyze agency-wide software license data, including usage and costs. The agency identified approximately $65 million in cost savings between 2017 and 2020 due to analyzing one of its software licenses. We previously reported that GSA and USAID had centralized the management of their software licenses. We reported that GSA’s server-based and enterprise-wide licenses were managed centrally, whereas non-enterprise-wide workstation software licenses were generally managed regionally. GSA also issued a policy that established procedures for the management of all software licenses, including analyzing software licenses to identify opportunities for consolidation. Centralizing the management of its purchase of software licenses has led GSA to make effective agency-wide decisions regarding its software licenses and avoid future costs, according to agency documentation. For example, in fiscal year 2015, the agency consolidated licenses for one of its software products, saving the agency over $400,000 and avoiding over $3 million in future costs. For its part, USAID had a contract in place with a vendor for centrally managing licenses for all of its operating units.
Further, according to officials within USAID’s Office of the CIO, the agency established a governance process to manage the introduction of new software. As part of this governance process, USAID’s Software and Hardware Approval Request Panel was responsible for reviewing requests to procure new software. USAID’s actions to centralize the management of its software licenses have led to effective agency-wide decisions regarding software purchases that the agency reported have yielded cost savings. For example, USAID identified opportunities to reduce costs on its software licenses through consolidation or elimination of software. This resulted in the agency reporting a cumulative savings from fiscal year 2016 to fiscal year 2018 of over $2.5 million on software licenses. NASA issued a software license management policy that included the roles and responsibilities for central management of the agency’s software licenses. In addition, in May 2017, NASA’s Administrator issued a memorandum requiring component agencies to use the agency’s Enterprise License Management Team to manage software licenses. By employing a centralized software license management approach, NASA made effective agency-wide decisions on software licenses, which the agency reported led to cost avoidance. For example, the agency increased the number of software agreements managed by its enterprise license management team from 24 to 42 in fiscal year 2014, and analyzed its software license data to identify opportunities to reduce costs and make better informed investments moving forward. As a result, NASA reported that it realized cost avoidance of approximately $224 million from fiscal years 2014 through 2018. In summary, as a result of applying the practices identified in this review, the selected agencies were better positioned to implement FITARA provisions and realized IT management improvements and cost savings. Agency Comments and Our Evaluation We requested comments on a draft of this report from each of the nine agencies included in our review, as well as from OMB. In response, one agency—USAID—provided written comments, which are reprinted in appendix I. Another agency—DHS—provided technical comments, which we incorporated in the report, as appropriate. The other seven agencies and OMB did not provide comments on the draft report. In its comments, USAID described actions that it had taken to enhance the authority of its CIO. Specifically, the agency stated that it had proposed that the CIO report directly to the Administrator and had notified the congressional committees of jurisdiction about this intended action. Further, USAID stated that, as of April 2019, the Administrator would be expected to approve revisions to internal policy to clarify and strengthen the authority of the CIO in line with FITARA and our report. We are sending copies of this report to the appropriate congressional committees, the heads of the Departments of Agriculture, Commerce, Health and Human Services, Homeland Security, Justice, and Veterans Affairs; the General Services Administration; the National Aeronautics and Space Administration; the U.S. Agency for International Development; the Director of the Office of Management and Budget; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-4456 or harriscc@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Comments from the U.S. Agency for International Development Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Carol C. Harris, (202) 512-4456 or harriscc@gao.gov. Staff Acknowledgments In addition to the contact named above, Dave Powner (Director), Mark Bird (Assistant Director), Eric Trout (Analyst-in-Charge), Justin Booth, Chris Businsky, Quintin Dorsey, Rebecca Eyler, Dave Hinchman, Valerie Hopkins, Kaelin Kuhn, Sabine Paul, Monica Perez-Nelson, Meredith Raymond, Bradley Roach, Andrew Stavisky, Niti Tandon, Christy Tyson, Adam Vodraska, Kevin Walsh, Jessica Waselkow, and Eric Winter made key contributions to this report.
Why GAO Did This Study Congress has long recognized that IT has the potential to enable federal agencies to accomplish their missions more quickly, effectively, and economically. However, fully exploiting this potential has presented challenges to covered agencies, and the federal government's management of IT has produced mixed results. As part of its effort to reform the government-wide management of IT, in December 2014 Congress enacted FITARA. The law included specific requirements related to enhancing Chief Information Officers' (CIO) authorities, improving the risk management of IT investments, reviewing agencies' portfolio of IT investments, consolidating federal data centers, and purchasing software licenses. GAO has reported numerous times on agencies' effectiveness in implementing the provisions of the law and highlighted agencies that have had success in implementing selected provisions. In this report, GAO identifies practices that agencies have used to effectively implement FITARA. GAO selected five provisions of FITARA to review: (1) CIO authority enhancements; (2) enhanced transparency and improved risk management; (3) portfolio review; (4) data center consolidation; and (5) software purchasing. GAO then selected nine agencies that had success in implementing at least one of the five provisions. GAO compiled practices where at least one agency was better positioned to implement a provision or realized an IT management improvement or cost savings. What GAO Found Nine selected agencies (the Departments of Agriculture, Commerce, Health and Human Services, Homeland Security, Justice, and Veterans Affairs; the Agency for International Development; the National Aeronautics and Space Administration; and the General Services Administration) identified 12 practices that helped them to effectively implement one or more Federal Information Technology Acquisition Reform Act provisions (commonly referred to as FITARA). The following figure identifies the 12 practices, including the four overarching ones, considered vital to implementing all provisions. By applying the overarching practices, covered agencies were better positioned to implement FITARA. In addition, by implementing the practices relative to the five FITARA provisions GAO selected, covered agencies realized information technology (IT) management improvements, such as decommissioning old systems, and cost savings.
Background The RAD program was authorized by Congress and signed into law by the President in November 2011 under the Consolidated and Further Continuing Appropriations Act, 2012 with amendments in 2014, 2015, 2016, and 2017. The RAD program consists of two components. The first component of the RAD program—and the focus of our review—provides PHAs the opportunity to convert units subsidized under HUD’s public housing program and owned by the PHAs to properties with long-term (typically, 15–20 years) project-based voucher (PBV) or project-based rental assistance (PBRA) contracts. These are two forms of Section 8 rental assistance that tie the assistance to the unit to provide subsidized housing to low-income residents. In a RAD conversion, PHA-owned public housing properties can be owned by the PHA, transferred to new public or nonprofit owners, or transferred to private, for-profit owners when necessary to access LIHTC financing, if the PHA preserves its interest in the property in a HUD-approved manner. The second component of RAD converts privately owned properties with expiring subsidies under old rental assistance programs to PBV or PBRA in order to preserve affordability and encourage property rehabilitation. The goals of the RAD program include preserving the affordability of federally assisted rental properties and improving their physical and financial condition. Specifically, postconversion owners (PHAs, nonprofits, or for-profit entities) can leverage the subsidy payments under the newly converted contracts to raise capital through private debt and equity investments, or conventional private debt, to make improvements. The RAD program provides added flexibility for PHAs to access private and public funding sources to supplement public housing funding. These financing sources may include debt financing through public or private lenders; mortgage financing insured by FHA; PHA operating reserves; replacement housing factor funds; seller or take-back financing; deferred developer fees; equity investment generated by the availability of 4 percent and 9 percent LIHTC; or other private or philanthropic sources. PHAs also may pursue various options for their conversions, which often depend on property needs and available financing, including property rehabilitation or new construction. Additionally, PHAs may undertake conversion involving no property rehabilitation or new construction to meet certain financial goals or for future rehabilitation or new construction, as long as the PHA can demonstrate to HUD that the property does not need immediate rehabilitation and can be physically and financially maintained for the term of the Section 8 Housing Assistance Payment contract (HAP contract). The RAD authorizing legislation and RAD Notice also specify requirements for ownership and control of converted properties. That is, converted properties must have public or nonprofit ownership or control, with limited exceptions. The RAD authorizing legislation, RAD Notice, HAP contracts, and RAD Use Agreement also establish procedures to help ensure that public housing remains a public asset should challenges arise, such as default, bankruptcy, or foreclosure. Oversight of RAD conversion and properties is primarily divided among three HUD offices. The Office of Recapitalization is responsible for administering the conversion process but generally does not oversee converted properties. Before conversion, the Office of Public and Indian Housing oversees the properties.
After conversion, oversight remains with Public and Indian Housing for properties that convert to PBV contracts and transfers to the Office of Multifamily Housing Programs for PBRA. The RAD program has been implemented and expanded in phases. Since its authorization, the RAD unit cap gradually increased from 60,000 units in 2011 to 225,000 units in May 2017. The RAD program is currently fully subscribed with all 225,000 units allocated. As of September 30, 2017, 689 conversions were closed that involved a total of 74,709 units (see fig. 1 for a breakdown by fiscal year). Additionally, 706 conversions involving 79,078 units were in the process of structuring conversion plans. The remaining conversions under the cap were allocated to specific properties and in the process of having commitments issued or reserved under multi-phase or portfolio awards, according to HUD officials. RAD conversions begin with the submission of an application by PHAs, after which they are notified of selection. The PHA is then required to submit a financing plan within 180 days or a later deadline based on the nature of the financing proposed. A RAD conversion is considered closed when the HAP contract is signed and financial documents are executed. The properties are considered converted to Section 8 assisted housing on the effective date of the HAP, which is generally the first day of the following month. Once the RAD conversion is closed, the PHA or ownership entity can move forward with its submitted proposals or RAD-related rehabilitation or new construction and is responsible for complying with RAD requirements and associated contracts. In some cases, rehabilitation can take place in advance of conversion closing if public housing funds are being used. Most RAD Conversions Involved Construction and Tax Credits, but HUD’s Leveraging and Construction Metrics Are Limited Most RAD Conversions Involved Property Rehabilitation or New Construction, and Financing Often Included Tax Credits Most Conversions Involved Construction and Many Used Tax Credits Most RAD conversions involved some type of construction. Our analysis of HUD data showed that as of September 30, 2017, 417 of 689 closed conversions (61 percent) involved planned rehabilitation to the property, 86 (12 percent) new construction, and 186 (27 percent) no construction; and 361 of 706 active RAD conversions (51 percent) involved planned rehabilitation, 89 (13 percent) new construction, and 256 (36 percent) no construction. HUD officials stated that they approve conversions that involve no immediate planned rehabilitation or new construction as long as the property has no immediate needs to be addressed. Such conversions allow PHAs to better position themselves to access additional capital to address future rehabilitation or construction plans. Our review of 31 conversion files also showed that the scope of proposed physical changes varied among RAD conversions. For properties that included scope of work narratives, physical changes included renovations to mitigate hazardous materials, aesthetic renovations, code and accessibility compliance, and construction of new buildings, among other changes. Financing for RAD conversions involved multiple public and private sources, but many conversions used LIHTC. Our analysis of HUD data showed that as of September 30, 2017, 173 of 689 closed RAD conversions (25 percent) utilized 4 percent LIHTC, 99 (14 percent) utilized 9 percent LIHTC, and 416 (60 percent) did not use LIHTC.
By dollar amount, major financing sources were 4 percent LIHTC at $2.4 billion; new first mortgages at $1.8 billion; and 9 percent LIHTC at $1.1 billion. Construction costs constituted the highest-dollar use of financing for RAD conversions, but not all conversions incurred construction costs, as discussed earlier. On average, construction costs per closed conversion were $6.4 million (ranging from no construction costs to $236 million) and nearly $60,000 per unit converted to RAD. Construction costs represented the highest-dollar use of financing for closed RAD conversions at $4.4 billion, followed by building and land acquisition costs and developer fees. For more information on financing sources and uses, see appendix II. Stakeholders Cited Various Factors Influencing Financing for RAD Conversions PHA officials and developers we interviewed cited various factors that influence financing sources needed for RAD conversions. For example, property needs assessments help establish the level of rehabilitation or new construction that would address the capital needs of the property. In turn, needs assessments can derive from physical assessment results and incorporate federal, state, or local compliance requirements. For instance, rehabilitation or construction would need to address the accessibility requirements of the Americans with Disabilities Act and local building codes, among other requirements. PHA officials and developers we interviewed also said they had to consider competition or access to financing for RAD conversions. For example, PHAs noted that tax credit applications and other financing had to be competitive. Some PHAs we interviewed also noted that while the 9 percent LIHTC provides more equity to finance low-income units (finances 70 percent of the costs of the units), there is more competition for the 9 percent LIHTC, while the 4 percent LIHTC can be automatically awarded for certain deals involving tax exempt bonds and federally subsidized projects. Thus, while some PHAs and developers might prefer to obtain 9 percent LIHTC, they often apply for 4 percent LIHTC to increase the chances of obtaining some tax credit equity. For example, one particular PHA that had used both 4 percent and 9 percent LIHTCs noted that in one transaction it had to compete against 74 applicants for 25 available awards of 9 percent credits. HUD’s Metric for Financial Outcomes—the RAD Leverage Ratio—May Not Be Accurately Calculated, Partly because Final (Postcompletion) Financial Information Is Not Used The RAD Leverage Ratio Does Not Reflect the Amount of Private-Sector Leveraging The RAD authorizing statute requires HUD to assess and publish findings regarding the amount of private capital leveraged as a result of RAD conversions. A leverage ratio relates the dollars other sources provide to the dollars a program provides to an institution or a project. HUD uses various quantitative, qualitative, and processing and efficiency metrics to measure conversion outcomes. To meet the RAD statutory requirement, HUD published an overall RAD leverage ratio that has fluctuated between 19:1 and 9:1 since 2014. HUD’s most recent leverage ratio in fiscal year 2017 was 19:1, nearly double what the agency reported the prior year. We asked HUD officials why the leverage ratio nearly doubled between 2016 and 2017 and received conflicting information during the course of our audit. Initially, officials noted that the ratio was intended to replicate the methodology used by PD&R in its September 2016 report.
Subsequently, the officials clarified that they did not follow PD&R’s methodology for categorizing financial source data. Specifically, officials did not review or make manual adjustments to the financial data PHAs entered in open source fields to ensure sources actually represented public, private, or other funding categories when calculating the leverage ratio. Finally, they noted that they disagreed with the methodology used in the PD&R September 2016 report and stated that there are various ways to calculate leverage. For the purposes of announcing the most recent leverage ratio in 2017, HUD officials decided that a leverage ratio comparing federally appropriated public housing resources would reflect the amount of financing leveraged had RAD not existed. We found, and officials from HUD acknowledged, three limitations to the RAD leverage calculation. First, HUD generally had data on funding sources and amounts a RAD conversion proposed to use (at the time of its application to HUD and at the time of closing of construction financing) rather than data after construction is completed on funding sources and amounts. HUD officials stated that they were reviewing final closing packages to confirm that the data reflect the latest reported information on sources and uses of funds for each conversion at closing. However, sources and uses of funds and amounts at the time the RAD conversion is closed may differ from amounts upon completion of construction. In October 2017, HUD implemented procedures to verify completion of planned construction activities and costs, which we discuss later in this report. Second, in calculating the leverage ratio published in 2017, HUD did not manually adjust funding source data to accurately account for all sources. Specifically, HUD did not isolate funding sources that were federally appropriated, contributed by the PHA, or contributed by state or local municipalities to calculate leverage. For example, among approximately $2 billion from other financial sources, HUD included Moving to Work (MTW) funding (which may include public housing capital funds, public housing operating funds, and voucher funds) and tax credit equity as leveraged sources. However, these are not necessarily private sources, which we explain later in this report. As a result, HUD’s current calculation does not reflect the amount of private-sector leveraging. HUD calculated and published a RAD leverage ratio in May 2017 using the following formula: Total leverage ratio = (total dollars from all sources – public housing dollars) / public housing dollars To calculate the RAD leverage ratio, HUD uses some but not all financial source data it collects (see app. II for a list of data fields collected by HUD). For example, HUD mistakenly excluded data that capture private funds, reducing the amount of total sources in the numerator. HUD calculates “public housing dollars” by adding data that capture replacement factor funds, public housing operating reserve funds, and prior-year public housing capital funds. HUD considers tax credit equity, new first mortgages, and “other funding” data to be non-public housing dollars (see app. II for a list of fields in HUD’s calculation). PHAs enter a description and amount for other funding sources in “other funding” data fields (see app. II). For example, a PHA may enter a federal financial source in one of the open-entry “other funding” data fields, requiring a manual adjustment to properly account for the financial source.
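To make the calculation concrete, the sketch below applies the published formula to invented figures and shows how an unadjusted federal entry in an "other funding" field inflates the result. Only the formula itself comes from HUD; the dollar amounts are illustrative.

```python
def leverage_ratio(total_sources, public_housing_dollars):
    """HUD's published formula:
    (total dollars from all sources - public housing dollars) / public housing dollars."""
    return (total_sources - public_housing_dollars) / public_housing_dollars

# Hypothetical conversion: $10 million total, of which $2 million is public
# housing money and $1 million sits in an open-entry "other funding" field
# but is actually a federal source (e.g., capital funds entered under "other").
total = 10_000_000
public_housing = 2_000_000
misfiled_public = 1_000_000

print(round(leverage_ratio(total, public_housing), 2))                    # 4.0 (unadjusted)
print(round(leverage_ratio(total, public_housing + misfiled_public), 2))  # 2.33 (after manual recategorization)
```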
According to HUD, additional fields were included in mid-2016 to better differentiate certain sources such as from the HOME Investment Partnerships Program (HOME) and seller take-back financing. Prior to this point, these financial sources were placed into “other” fields, and the standard resource desk report had not been updated until mid-2017 to include all of these fields. Third, HUD does not categorize and report its leveraging by private and public sources. According to HUD officials, informative leverage methodologies could calculate the ratio based on the leveraging of public housing program funds, the leveraging of all federally appropriated funds, or the leveraging of PHA funds (i.e., sources in the transaction that have come from the PHA itself even if not federally appropriated through the public housing program), among other methodologies. The RAD authorizing statute requires HUD to assess and publish findings on the amount of private-sector leveraging. In addition, Standards for Internal Control in the Federal Government require agencies to communicate quality information with external parties, such as other government entities, to make informed decisions and evaluate the entity’s performance in achieving key objectives. HUD also does not use final (postcompletion) funding data in another metric of RAD leveraging. Specifically, in June 2017 HUD publicly reported that RAD “has leveraged more than $4 billion in capital investment in order to make critical repairs and improvements.” HUD calculates this figure by summing the construction costs—a subcomponent of total costs—with data from the time a conversion closes and not upon completion of construction. HUD officials we spoke with clarified that this metric solely reports construction investments and does not reflect any conclusion regarding private leverage of public funds. But HUD publicly characterized this measure in different ways, including as the amount of “public-private investment in distressed public housing,” the amount of “construction achieved under RAD,” and the amount of “new private and public funds leveraged by RAD.” HUD’s 2016 interim report calculated and published multiple leverage ratios, but chose to highlight a RAD leverage ratio that is consistent with ratios used for other HUD programs. However, the ratio does not specifically follow the prescribed ratio language in the authorizing statute because the report states that the ratio represents the amount of private and public external sources invested for every dollar invested by PHAs, but the statutory language only discusses private-sector leveraging. Officials further noted that the statute does not require a particular methodology and HUD relies on PD&R—and its independent contractor—to determine the appropriate methodology for purposes of compliance with the statute. Lastly, the statute does not preclude the use of other leverage metrics for other purposes, such as using the ratio to measure the amount of nonpublic housing funds leveraged in RAD transactions that would not be available to the property absent RAD. As a result, HUD’s leverage metrics announced in May 2017 do not accurately reflect the amount of private-sector leveraging achieved through RAD, include public funding as private sources, and inconsistently measure sources that were federally appropriated or contributed by PHAs, potentially under- or over-reporting the program’s performance.
Additionally, in October 2017, HUD began implementing procedures to collect data after construction is completed and is not yet able to calculate a leverage metric using final (postcompletion) financial sources rather than the financial sources collected at closing. The lack of a consistent metric for private leveraging could also lead to inconsistent reporting of the leverage ratio, as has occurred in prior years. Recalculations, Including of Funding Sources, Can Increase Accuracy of the RAD Leverage Ratio We recalculated RAD leverage ratios in a number of different ways, including to correct errors we identified during our review. For example, HUD’s 2016 interim report noted that data on closed transactions do not provide a detailed description of “other sources,” requiring a crosswalk between applications and closed transactions to develop estimates for the allocation of “other sources” across financial source categories. Abbreviated descriptions are provided in the form of notes that are not always clear and consistent; therefore, public housing sources may include federally appropriated sources, as well as state, city, or county sources. Through our estimates, we found that the overall leverage ratio could range from 7.44:1 for a ratio recalculating HUD’s leverage ratio to 1.23:1 for a ratio estimating private-sector leveraging. Recalculation with HUD methodology and financial source recategorization. As discussed previously, HUD’s methodology does not account for all financial data collected by HUD and includes “other” funding sources erroneously considered as leveraged funds. Thus, we manually adjusted RAD funding source data and found that nearly $1.2 billion were erroneously considered leveraged funds because they are not private funds. For example, HUD included MTW funds; public housing operating reserves; public housing capital funds; replacement housing factor funds; other federal funds; other state, local, or county funds; and take-back financing funds as leveraged financial sources. For more information, see appendix II. We obtained documentation from HUD to replicate its methodology and recategorized financial sources that corrected errors in the data, and found that the RAD leverage ratio was less than half of HUD’s most recently publicly reported leverage ratio (19:1), approximately 7.44:1 (see app. II). Recalculation to exclude LIHTC and other federal sources. We previously reported that LIHTCs are considered a federal source because tax credit equity represents foregone federal tax revenue and, therefore, is a direct cost to the government. Accordingly, we recalculated the RAD leverage ratio by excluding all federal funding sources and obtained a ratio of approximately 1.43:1 (see app. II). Recalculation of private-sector leveraging. Lastly, the RAD authorizing statute requires HUD to assess and publish findings on the amount of private-sector leveraging, but HUD’s current calculation does not present the amount of private-sector leveraging and does not include all available data (for example, the “Other Private” funds collected by HUD). We estimated the amount of private-sector leveraging by grouping public housing sources, other public sources, and private sources, resulting in a leverage ratio of approximately 1.23:1 (see app. II).
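The three recalculations can be expressed compactly. In the sketch below, the source groupings follow the categories described above, but the dollar amounts are invented and the base of each ratio is assumed to be public housing dollars, so the results are illustrative and will not match the 7.44:1, 1.43:1, and 1.23:1 figures derived from the actual RAD data.

```python
# Hypothetical sources for one conversion, grouped per the recalculations above.
sources = {
    "public_housing": 1_500_000,  # capital funds, operating reserves, replacement housing factor
    "other_federal": 2_500_000,   # LIHTC equity and other federal programs (e.g., HOME)
    "other_public": 500_000,      # state, county, or city funds
    "private": 3_000_000,         # first mortgages and other private funds
}
total = sum(sources.values())

# NOTE: the base (denominator) GAO used for each recalculation is detailed in
# app. II; public housing dollars is assumed here for illustration.
base = sources["public_housing"]

# 1. HUD-style ratio after recategorizing sources: all non-public-housing dollars.
hud_style = (total - base) / base

# 2. Excluding LIHTC and other federal sources from the leveraged amount.
non_federal = (sources["other_public"] + sources["private"]) / base

# 3. Private-sector leveraging only.
private_only = sources["private"] / base

for name, ratio in [("recategorized", hud_style), ("non-federal", non_federal), ("private-only", private_only)]:
    print(f"{name}: {ratio:.2f}:1")
```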
HUD Implemented Procedures to Verify Completion of Planned Construction Activities and Costs in October 2017, but Does Not Collect Final Comprehensive Financial Data In October 2017, HUD implemented procedures to certify completion after developers finish RAD-approved rehabilitation or construction. Previously, HUD had a limited ability to monitor and evaluate final (postcompletion) physical and financial changes in RAD projects with existing data. According to HUD officials, HUD did not implement completion certification procedures before October 2017 because it had been addressing what it considered to be the highest risks first (such as clarifying requirements for RAD participants, resident safeguards, and other procedural and administrative requirements). HUD’s October 2017 completion certification procedures include instructions for owners to report final construction costs and documentation on completion of repairs or construction within 45 days of the completion date recorded in the RAD Conversion Commitment. More specifically, HUD requires owners to list a final construction cost amount—a subcomponent of total costs—in the RAD resource desk, describe variances from the approved construction cost amount in a comment box, and describe how increases in costs were addressed. Additionally, a third party must certify that the repairs in the scope of work were completed by providing an attestation to HUD. However, HUD’s procedures do not require documentation from the owners to support the final total cost figures, which include not only construction costs but also building and land acquisition costs, and developer fees, among others, as noted earlier in this report. These procedures also do not require a certification from owners on all financing sources and costs recorded in the RAD Conversion Commitment. Standards for Internal Control in the Federal Government require that management implement control activities through documented policies and procedures to provide reasonable assurance that the objectives of the agency will be achieved, and also communicate quality information with external parties to make informed decisions and evaluate the entity’s performance in achieving key objectives. While HUD now has certification completion procedures in place, this process provides the agency limited financial information from owners. As a result, HUD is unable to report metrics that reflect final (postcompletion) RAD financial outcomes after construction is completed. Furthermore, HUD is limited in its ability to effectively oversee conversion budget and cost variances, and expenditures that require HUD approval. Lastly, the RAD authorizing statute requires that the Secretary of HUD demonstrate the feasibility of the RAD conversion model to recapitalize and operate public housing properties under various situations and by leveraging other sources of funding to recapitalize properties. Without metrics that reflect the final (postcompletion) financial outcomes of RAD after construction is completed, HUD and congressional decisionmakers are unable to make informed decisions concerning the RAD program.
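The completion certification described above lends itself to a simple variance check. In the sketch below, the 45-day reporting window comes from HUD's procedures, while the record fields and the decision to flag any unexplained variance are assumptions made for illustration.

```python
from datetime import date, timedelta

# Hypothetical completion certification record; field names are illustrative.
cert = {
    "approved_construction_cost": 6_400_000,  # from the RAD Conversion Commitment
    "final_construction_cost": 7_100_000,     # reported by the owner at completion
    "completion_date": date(2017, 11, 1),
    "submitted": date(2017, 12, 10),
    "variance_explanation": "",               # owner must describe cost increases
}

deadline = cert["completion_date"] + timedelta(days=45)  # report due within 45 days
variance = cert["final_construction_cost"] - cert["approved_construction_cost"]

issues = []
if cert["submitted"] > deadline:
    issues.append("certification submitted after the 45-day deadline")
if variance != 0 and not cert["variance_explanation"]:
    issues.append(f"variance of ${variance:,} lacks an explanation")
print(issues or "certification complete")
```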
HUD Has Not Systematically Analyzed Household-Level Data on Residents in RAD Conversions or Monitored Implementation of Some Resident Safeguards
HUD has not systematically tracked or analyzed household data on residents in RAD-converted units that are available from its public housing or Section 8 databases or from PHAs or other postconversion owners—the main sources of resident data for the RAD program. In addition, HUD has not yet developed monitoring procedures for all the resident safeguards in the RAD program. Finally, residents told us of some concerns about information they received on RAD conversions, communications opportunities, and the relocation process.
HUD Has Not Systematically Analyzed Household-Level Data on the Effects of RAD Conversion on Residents
HUD officials told us that the agency does not systematically track or analyze household-level data on residents in RAD-converted units across existing program databases (HUD maintains household data for the public housing and Section 8 rental assistance programs in two databases). In particular, HUD does not track changes in household characteristics before and after conversion, such as changes in rent, as well as relocations or displacement of individual households. However, according to HUD officials, the databases are not designed to track the impact of RAD conversion on residents, and the agency is unable to electronically link household information submitted before RAD conversion to information submitted after conversion. Once a property is converted, the property and corresponding household information are removed from the public housing database. Owners of converted properties are to use software to manually enter household information into the databases for the Section 8 program when submitting tenant certifications and information for assistance payments. This procedure is the standard for administration of all project-based Section 8 properties. HUD officials stated that they have explored the possibility of transferring household data from one system to another at the time of a property's conversion. While HUD has not systematically analyzed household information from its public housing and Section 8 databases, we were able to perform a limited analysis. We requested and received data from HUD on the households affected by RAD. Using the data provided, which were current as of June 2017, we were able to identify about 26,000 households that lived in units that were converted to a PBV subsidy, but we were unable to identify the total number of households converted to a PBRA subsidy. Based on our analysis of the 26,000 PBV households, we found that about 2,700 households (about 11 percent) were headed by an elderly individual; about 6,800 households (about 26 percent) were headed by an individual with a disability; about 2,700 households (about 10 percent) were headed by an elderly person who also had a disability; over half (about 14,000, or 54 percent) of the households were headed by an individual identified as black; close to 11,000 households (about 41 percent) were identified as white; and about 1,000 households (about 4 percent) were identified as Asian.
In addition, close to 3,100 households (about 12 percent) were headed by an individual identified as Hispanic; about half (about 49 percent) of the PBV households were single-person households; the median annual income of PBV households both before and after RAD conversion was about $10,000; and about 5,300 households (about 20 percent) were paying a flat rent rather than income-based rent before RAD conversion. However, the data on PBV households were not comprehensive. For example, while about 10,000 residents (about 57 percent) experienced a rent increase following RAD conversion under PBV, we could not determine if the rent increase was the result of an increase in resident income. We also could not determine changes in location among the PBV households following RAD conversion. Rather than relying on the public housing and Section 8 databases for tracking household information during conversion, HUD officials indicated that the agency will rely on locally maintained resident logs, which contain household information collected by property owners, as the starting point when HUD determines a compliance review is warranted. The logs will be the primary way the agency collects household information for compliance reviews under the RAD program, according to HUD officials. In November 2016, HUD issued a notice that requires the PHA or other postconversion owner to maintain a log about every household at a converting project, including information on race and ethnicity, household size, and disability. The notice also requires owners to track residence status throughout the relocation process, including whether the resident returned, moved elsewhere, or was permanently relocated or evicted; relocation dates; and details on any temporary housing and moving assistance provided. Owners are required to make the information available to HUD upon request for audits and other purposes. According to HUD officials, the agency expects the information in the resident logs to be more robust than what it would collect through the public housing and Section 8 databases, which do not track residents while they are relocated. HUD officials stated that the agency plans to review selected resident logs as part of an ongoing limited compliance review of about 90 RAD conversion projects. HUD officials told us they are developing procedures for performing compliance reviews—such as developing a mechanism to review a sample of logs on a periodic basis—but they have not yet done so because they have been focusing on developing procedures for activities that present a high risk to the program, as described in the following section. HUD has not established a time frame for developing these procedures. However, HUD officials indicated that they plan to select resident logs for review based on risk of noncompliance and do not plan to analyze program-wide information currently collected in the public housing and Section 8 databases for program monitoring. HUD officials also noted that PD&R is planning to track a sample of residents through its evaluation of the program, which we previously mentioned. While HUD has decided to rely on resident logs because of the difficulty of tracking household information across its program databases, using resident logs to assess the effects of the RAD program on residents has limitations.
First, while the resident logs would contain detailed household information, they were not required prior to November 2016 and may not contain information on households converted before that date (RAD conversions started in 2013); HUD's public housing and Section 8 databases do contain information on such households. Second, as previously mentioned, HUD plans to review resident logs only when there is a risk of noncompliance, whereas its databases collect household information on a rolling basis. Standards for Internal Control in the Federal Government require agencies to use quality information to achieve their objectives, and obtain and evaluate relevant and reliable data in a timely manner for use in effective monitoring. Without a comprehensive review of household information—one based on information in HUD data systems as well as resident logs—HUD cannot reasonably assess the effects of ongoing and completed RAD conversions on residents and compliance with resident safeguards, as discussed in the next section.
HUD Has Been Developing Procedures to Monitor Some RAD Resident Safeguards
HUD has not yet developed monitoring procedures for certain resident safeguards under the RAD program. RAD requirements include those intended to ensure that residents whose units are converted through RAD are informed about the conversion process; can continue to live in a converted property following RAD conversion; are afforded certain protections carried over from the public housing program; and are afforded a phase-in of any rent increases under Section 8 program requirements. Currently, based on HUD notice requirements, PHAs must document compliance with three safeguards (PHA plan amendments, resident notification, and procedural rights) in their RAD application and other conversion paperwork. For example, PHAs must submit with their RAD application comprehensive written responses to resident comments received in connection with the required resident meetings. For one safeguard, PHAs are not required to report to HUD but must retain documentation of compliance to be made available to HUD as part of the monitoring for the program. For others, the HUD notice does not specify reporting and monitoring requirements. Based on our review of files for selected conversions, which we previously discussed, we found PHAs generally submitted documentation of their efforts to inform residents about RAD conversion, such as providing evidence to HUD of meetings with residents and written responses to resident questions as required. However, the specific documents for these requirements were not available from HUD in all cases. HUD's review of amendments to PHA plans was documented in all but one of the conversions we reviewed. Documentation requirements for resident relocations have changed since RAD was introduced, which made the documentation more difficult to assess. HUD developed and started implementing procedures in October 2017 that require owners to certify and provide data supporting compliance with the resident right-to-return requirements. For example, owners must certify the number of residents who exercised their right to return to a converted property compared with the number of residents who did not return. HUD is also developing standard operating procedures to review each conversion for compliance with RAD relocation provisions.
Specifically, the procedures would describe the review steps required at different stages of the conversion process, a process for identifying risks, and how to address instances of noncompliance with RAD requirements. Additionally, HUD noted that it has two compliance reviews under way: one involving a set of HUD requirements that affect relocations of more than 1 year, and the limited compliance review of about 90 projects that we previously described. HUD officials noted that they are developing additional guidance in other areas. First, HUD officials indicated that, as part of an overall update of RAD standard operating procedures, they are developing additional protocols on resident notification and how residents' comments are addressed through conversion planning. Second, the agency had not been consistently collecting required documentation on "house rules," which describe the conditions and procedures for evicting residents and terminating assistance at RAD PBRA properties, so it developed and implemented additional legal review procedures as part of implementing RAD resident eviction and grievance procedural rights requirements. According to HUD officials, they have been focusing primarily on right-to-return and relocation requirements because these represent areas of highest risk. HUD has not developed separate monitoring procedures for other resident safeguards—the phase-in of tenant rent increases, resident representation through tenant organizations, and choice mobility requirements. However, HUD officials told us that they plan to assess how administrative data can be used to monitor choice mobility as part of the planning for a separate PD&R evaluation of this safeguard. HUD officials also indicated that there are procedures for residents to report complaints to HUD if resident representation and organization requirements are not met. Standards for Internal Control in the Federal Government require agencies to implement control activities through documented policies and procedures to provide reasonable assurance that agency objectives will be achieved. These standards also require agencies to design procedures to achieve goals and objectives, and to identify, analyze, and respond to risks related to achieving the defined objectives. Table 1 includes a description and information on implementation of resident safeguards that most directly affect residents' experience with the conversion process and ability to live at the property following conversion. Appendix III describes these and other RAD resident safeguards. HUD officials indicated that the safeguards for the phase-in of tenant rent increases, resident representation, procedural rights, and choice mobility presented a lower risk than the right-to-return requirements, so they were a lower priority and in some cases were addressed through general monitoring of the Section 8 program. For choice mobility options, HUD indicated that its data systems are not designed to track whether residents are able to exercise these options, such as whether residents left a property to exercise choice mobility or for other reasons. All but two of the resident safeguards do not take effect until after a property has been converted and is part of the Section 8 program. For example, residents are only eligible to use vouchers through choice mobility after they have lived in the converted property for 1 or 2 years, depending on the assistance contract involved (PBV or PBRA).
Moreover, certain RAD safeguards are not typically available for Section 8 residents. For example, RAD establishes resident representation provisions and procedural rights that are more in line with public housing rather than Section 8 requirements. While HUD has indicated that the Section 8 program has experience administering different types of assistance contracts, RAD nonetheless creates separate requirements for certain provisions from the public housing and Section 8 programs. As previously mentioned, RAD conversions have been completed at an increasing pace in the last 5 years. However, because HUD has not yet developed separate monitoring procedures for certain requirements—the phase-in of tenant rent increases, resident representation through tenant organizations, and choice mobility requirements, many of which take effect after a conversion—and without using all available household data, the agency will not be able to reasonably ensure that these safeguards are implemented.
Residents Described Mixed Experiences during the RAD Conversion Process
Residents who participated in our focus groups expressed some concerns about information they received on RAD conversions, communications opportunities, and the relocation process. Residents indicated that they were notified about RAD conversion in a variety of ways. Residents in 5 of 14 focus groups found the information presented to them on RAD to be helpful. Residents in 7 of 14 focus groups indicated that the information they received was not helpful. Across these focus groups, a range of concerns was expressed, including that the information provided was not always clear or reflective of the final changes resulting from RAD conversion, and that the PHA and management were not always forthcoming with information about the RAD changes. Residents in some focus groups also indicated that they were not involved in the RAD conversion. Residents in 5 of 14 groups indicated that they were not given the opportunity to provide input into the RAD changes, while residents in 6 of 14 groups indicated that their concerns were not addressed and their suggestions were not incorporated. Residents also described problems with relocations. Some of the concerns expressed by resident focus groups on relocation related to the location of the temporary units (3 of 14 focus groups), the timing of relocation or amount of notice given (7 of 14 focus groups), and moving issues (such as items damaged during moves). Residents were asked to describe ways in which RAD conversion improved or harmed their living conditions. Residents in several focus groups indicated that RAD improved their living conditions, including both the condition (7 of 14 focus groups) and appearance of their units or the property in which they lived (6 of 14 focus groups). Some of the changes residents liked included the installation of new appliances, mold and pest removal, and safety and energy efficiency improvements. However, residents in several of the focus groups identified problems with their living conditions following RAD conversion. The problems residents identified included security concerns (10 of 14 focus groups); renovations that were of poor quality (6 of 14 focus groups); other problems with the units (10 of 14 focus groups), such as pest problems; decreased amenities (8 of 14 focus groups), such as the removal of common areas or in-unit washing machines; and issues with property management (11 of 14 focus groups).
For example, in several instances, residents stated that new managers or owners in place following RAD conversion were not responsive to their needs or concerns. During our site visits, residents described other experiences with RAD conversion. Residents in all of the groups recalled being notified about RAD. Residents in 9 of 14 focus groups indicated that their rent was the same following RAD conversion. Residents in a few focus groups indicated that their rent had increased because of changes in their income or conversion from a flat rent. However, residents in a few focus groups experienced challenges in how their income was certified for the purpose of calculating rents, such as problems with requests for information (2 of 14 focus groups) and other issues with the process (4 of 14 focus groups). For example, residents reported having to provide the same paperwork multiple times. Residents reported no instances of permanent involuntary displacement. One resident organization expressed concerns about fewer eviction protections and reduced resident representation after RAD conversion.
PHAs Identified Benefits and Challenges of RAD Participation
We spoke with 18 PHAs; some cited benefits as well as several challenges of RAD participation, and some noted HUD's responsiveness to their circumstances and concerns. According to many of the PHAs we spoke with, benefits of participating in the RAD program included reduced administrative requirements in Section 8 programs and access to additional sources of funding. In particular, many of the PHAs noted that RAD allowed them to access tax credit equity and other funding to complete the bulk of their repairs and renovations at once. Over half of the PHAs we spoke with also found HUD to be flexible and responsive to individual PHA circumstances. The majority of PHAs we spoke with indicated that remaining in the public housing program was not tenable because funding for the public housing program was not enough to meet their long-term capital needs. PHAs we contacted also noted several challenges of participating in RAD: financing constraints, timing challenges, and evolving requirements. Financing constraints. Some PHAs noted that program rent requirements can limit PHA participation in RAD. Each year, HUD calculates a contract rent—the total rent for a unit, including operating subsidy and resident contribution. PHAs must use the contract rent to calculate Section 8 subsidies for properties converting under RAD. According to HUD and several PHAs, contract rents for RAD-converted Section 8 units are lower than rents in traditional Section 8 assisted units, and are almost always lower than market-rate rents. Several PHAs and HUD officials have described the difficulty of converting units from the public housing program with this rent limitation. For example, when the cost of needed rehabilitation or construction is high, low allowable contract rents might not be sufficient to access appropriate capital for the conversion. In certain localities, PHAs have found solutions to augment rents and have used RAD flexibilities to allow them to convert and plan for operating expenses.
For example, the PHA in Tacoma, Washington, used Moving to Work program flexibilities to increase contract rents, and housing officials in San Francisco used an allowable procedure to transfer RAD assistance from converted buildings to properties throughout the city's portfolio (each a blend of traditional project-based vouchers with higher contract rents and RAD assistance). In Montgomery County, Maryland, the PHA similarly included RAD assistance in some mixed-finance properties that contain other high-rent subsidies and market-rate rents. Timing challenges. Some PHAs said they faced major challenges in coordinating RAD timelines with HUD, lenders, or other parties, or with the requirements of the LIHTC process. HUD officials acknowledged that PHAs with more complex transactions, including those involved in the LIHTC process, struggle to implement their conversion plans within RAD time frames. HUD officials noted that because there is a statutory cap on the number of units that can be converted under RAD, they have established time frames to stay under the cap and ensure that PHAs that are planning to convert are ready to participate in the program. Additionally, according to HUD, it has made technical assistance available to all PHAs that receive a Commitment to enter into a Housing Assistance Payment contract during the RAD process to help ensure their readiness for RAD closing and to meet remaining conversion deadlines. On the other hand, some PHAs expressed concern to us about delays in the conversion process that put them at risk of missing state LIHTC deadlines. HUD officials described putting conversions on a fast track on a case-by-case basis to meet LIHTC deadlines. For example, in one case a PHA relocated residents before closing and without HUD approval. HUD required the PHA to fund an escrow account until it was able to determine any payments that might need to be made to residents and any other necessary corrective action. This was done so that HUD could look into the issue while mitigating additional harm to the residents and continuing to move the PHA toward a closing aligned with tax credit application deadlines. The timing of conversion can also create gaps in the payment of Section 8 funds to PHAs. Section 8 funding should begin in January of the year following conversion. PHAs rely on annual public housing subsidies for the conversion year—public housing program funds are paid to PHAs annually and are not recaptured by HUD following RAD conversion. However, according to some PHAs we interviewed, Section 8 funding did not begin on time. For example, in Baltimore, Maryland, subsidy flow after conversion had not begun as of June of the following year. HUD officials told us that inadequate guidance from HUD and confusion among PHAs regarding the necessary steps to request payment in a timely manner have been the major causes of the problems. HUD has tried to remedy delays and updated its notice to provide clearer guidance on the timing of subsidy flow around the time of conversion to Section 8. Moreover, HUD officials indicated that there has been confusion among PHAs on how to request funds, so HUD is revising and updating the guidance on steps PHAs must take to request payment under the PBRA program. HUD officials also indicated that the agency has begun monitoring whether new participants are taking the steps needed well before their first request for funding.
Some PHAs we contacted also mentioned difficulty in coordinating with HUD on fulfilling internal RAD requirements and reviews. According to some, the different offices involved in RAD conversions within HUD were not well aligned and had different interpretations of the rules. For example, some RAD conversions require a civil rights review by HUD's Office of Fair Housing and Equal Opportunity, including those transactions that involve new construction or resident relocations. Some PHAs indicated that such reviews occurred too late in the conversion process, even after other HUD offices had approved the conversion. HUD officials acknowledged that different HUD offices have different objectives in the RAD process. HUD officials indicated that the agency is trying to coordinate more effectively among these offices and streamline the conversion process as much as possible. Evolving requirements. While the majority of PHAs with which we spoke said that HUD provided clear, sufficient, and timely information, some PHAs noted that it also was challenging to adapt to evolving requirements. Some PHAs noted that as HUD identified problems in the early years of the program, it would change the guidance in response. For example, HUD officials explained that HUD had clarified fair housing review requirements in response to PHA concerns that the fair housing review occurred too late in the process and could affect successful conversion of projects. The most recent RAD notice (effective January 2017) is the third version since 2013, and revisions have involved substantial changes. For example, this notice provided PHAs with greater flexibility on the funding sources they can use to raise initial contract rents and the ways they can demonstrate ownership and control of a converted property. In addition, HUD introduced a notice in November 2016 to strengthen resident protections. Some PHAs told us they found the pace or timing of the evolving requirements difficult to manage and also noted confusion about conversion instructions and guidance due to changing requirements. For example, one PHA indicated that it had problems reporting information into a new RAD data field in HUD's Voucher Management System because there was no guidance at the time on how to complete this field. However, HUD has since included additional instructions in the user's manual that became effective in April 2017.
Strength of Protections Intended to Preserve Affordability Is Unknown and HUD Does Not Have Procedures to Address Preservation Risks
RAD Provisions and Use Agreements Have Not Been Tested
The Committee has included language to establish procedures that will ensure that public housing remains a public asset in the event that a project experiences problems, such as default or foreclosure. In each RAD conversion, HUD and the property owner execute a use agreement, which specifies affordability and use restrictions for the property. The use agreement generally exists concurrently with the HAP contract, which is executed to govern the provision of either the PBRA or PBV subsidy for the unit. The use agreement must be recorded in a superior position to new or existing financing or other encumbrances on the converted property. Under a Section 8 HAP contract, residents pay 30 percent of adjusted household income.
In the absence of the HAP contract, the use agreement is set up to control the amount paid: If the HAP contract is removed due to breach, noncompliance, or insufficiency of appropriations, under the use agreement new households in all units previously covered under the HAP contract must have incomes at or below 80 percent of the area median income for households of the size occupying an appropriately sized unit at the time of admission, and rents may not exceed 30 percent of 80 percent of area median income for the remainder of the term of the use agreement. For new residents at or below 80 percent of the area median income, the resident rent contribution under the use agreement without a HAP contract generally would be higher than that paid under a HAP contract, which is based on household income rather than the universally determined area median income. Although the use agreement maintains some level of affordability, the owner receives no subsidy under PBRA or PBV without a HAP contract, and the resident rent contribution is not tied to individual household income but rather is based on a universal area income calculation (see fig. 3). According to HUD officials, other program requirements support the goal of long-term preservation: HAP contracts are executed for 20 years for PBRA or 15–20 years for PBV properties, and compliance with all affordability requirements in the HAP contract and in the statute and regulations governing the PBRA and PBV programs must be maintained while the contract is in force. According to the authorizing statute, PHAs (for PBV contracts) and HUD (for PBRA contracts) shall offer, and project owners shall accept, a renewal contract at the expiration of the initial HAP contract and at each subsequent renewal. Each renewal contract will be subject to a RAD use agreement governing the use of the property consistent with HUD requirements. According to the RAD notice, the project owner also is to establish and maintain a replacement reserve to aid in funding extraordinary maintenance and repair and replacement of capital items. The reserve account must be built up to and maintained at a level determined by HUD to be sufficient to meet projected requirements. According to HUD officials, during the conversion, HUD staff review each capital needs assessment to try to determine whether a property's capital needs can be addressed over the forthcoming 20-year period. We reviewed 31 completed conversion files, the set of documentation required by HUD to enable a PHA to convert units from public housing to a Section 8 subsidy, and associated RAD contracts. In each file, key contractual protections appeared consistent with program requirements. Specifically, in all cases executed use agreements (which included requirements to limit residency eligibility to households making less than 80 percent of area median income) were included and not altered from the HUD template. In most files we reviewed, we found that foreclosure riders were included and stated that use agreements would survive foreclosure, meaning that any new owners would take ownership subject to the agreements. Executed HAP contracts, requiring that residents' contributions be set at 30 percent of adjusted household income, also were present in all files we reviewed.
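To make the two rent rules described above concrete, the following minimal sketch compares a resident's monthly rent contribution under a HAP contract with the rent cap under the use agreement alone. The area median income figure is hypothetical; the household income matches the median PBV household income reported earlier, and the calculation simplifies actual program rules (for example, it ignores income adjustments and utility allowances).

```python
# Illustrative sketch of the two rent rules; the AMI figure is hypothetical,
# and the calculation simplifies actual program requirements.

def rent_with_hap(adjusted_annual_income: float) -> float:
    """Under a HAP contract: 30 percent of adjusted household income."""
    return 0.30 * adjusted_annual_income / 12  # monthly

def rent_cap_without_hap(area_median_income: float) -> float:
    """Under the use agreement alone: rent may not exceed
    30 percent of 80 percent of area median income."""
    return 0.30 * (0.80 * area_median_income) / 12  # monthly

ami = 60_000     # hypothetical area median income
income = 10_000  # median PBV household income reported earlier

print(f"With HAP contract: ${rent_with_hap(income):,.0f}/month")      # $250
print(f"Use-agreement cap: ${rent_cap_without_hap(ami):,.0f}/month")  # $1,200
```

Under these assumptions, the use-agreement cap is several times the income-based contribution, which is why the report notes that rent for new residents generally would be higher without a HAP contract.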
According to HUD officials, PHAs, and two housing groups we spoke with, provisions in the RAD use agreement to keep units affordable appear to be strong, with use and affordability protections designed to survive foreclosure, but the strength of the provisions cannot yet be fully determined because they have not been tested in foreclosure proceedings or in courts. According to HUD officials, as of October 2017 no RAD properties had entered foreclosure. The RAD authorizing statute requires that ownership be transferred to a capable public entity or, if no such entity can be found, to a capable entity as determined by HUD, or, if necessary to fulfill LIHTC requirements for the property, to a HUD-approved for-profit entity (provided the PHA retains sufficient interest in the property). HUD also subjects any subsequent transfer of the property to HUD review and requires the successor ownership to meet these same requirements. As stated in the use agreement, a lien holder must give HUD notice prior to declaring a default and provide HUD concurrent notice with any written filing of foreclosure (provided that the foreclosure sale may not occur sooner than 60 days after the notice), but the use agreement does not prohibit a lien holder from foreclosing on the lien or accepting a deed in lieu of foreclosure. The RAD use agreement, which is recorded superior to other liens and places use and affordability restrictions on the property, survives foreclosure. With or without a HAP contract in place, the lender or new owner must maintain the units for low-income households according to the terms of the use agreement. Therefore, according to HUD officials, the lender or new owner has an incentive to identify an appropriate owner and secure HUD approval to avoid a default under the HAP contract, which provides a Section 8 subsidy to the owner. That is, if no HAP contract were in place, the owner would collect only the tenant rent contribution (30 percent of 80 percent of area median income), rather than the tenant rent contribution plus the subsidy. HUD has discretion to enforce or waive certain use and affordability protections. According to the authorizing statute, in the case of foreclosure, bankruptcy, or termination and transfer of assistance for material violation or substantial default, priority for ownership or control must be provided to a capable public entity or, if no such entity can be found, to a capable entity as determined by the Secretary of HUD. Additionally, the statute allows the transfer of property to for-profit entities to facilitate the use of LIHTC financing, with requirements to maintain the PHA's interest, as discussed above. As of September 30, 2017, about 40 percent of RAD conversions involved LIHTC financing. According to the RAD notice, in the event of a default of a property's use agreement or HAP contract, HUD may terminate the HAP contract and transfer assistance to another location to retain affordable units. HUD will determine the appropriate location and owner entity for the transferred assistance consistent with statutory goals and requirements for RAD. The RAD use agreement will remain in effect even in the case of abatement or termination of the HAP contract for the term the contract would have run, unless HUD agrees differently in writing. In this case, the RAD notice limits HUD's discretion to terminate the use agreement to cases involving a transfer of assistance to another property.
HUD Does Not Have Procedures in Place to Identify and Respond to Preservation Risks
HUD has not yet developed procedures to monitor RAD projects for risks to long-term affordability of units, including default or foreclosure. HUD officials described an ongoing effort to develop the oversight procedures the agency would need to reasonably ensure compliance with RAD agreements and avoid risks to long-term affordability once conversions closed and units moved to Section 8, but, as previously discussed, the agency has not yet completed this effort or fully implemented a monitoring system. HUD officials told us they also planned to develop protocols to more closely monitor properties at risk of foreclosure, including developing indicators, procedures, roles, and responsibilities within HUD, but they have not finalized the design of these procedures or fully implemented them. To develop protocols, HUD created an asset management working group in September 2016. The officials also stressed that no one can take possession of or foreclose on a property without HUD involvement and approval. For example, HUD officials said they expect few foreclosures among RAD-converted properties because lenders tend to communicate with the agency early so that it can become involved to prevent foreclosure. HUD officials pointed to a robust structure to oversee properties in the PBRA program, but stated that PBV property oversight continues to be developed by the Office of Public and Indian Housing. According to Standards for Internal Control in the Federal Government, agencies should design procedures to achieve goals and objectives, such as the preservation of unit affordability, and respond to risks, in this case the risk of default, foreclosure, or noncompliance with program requirements. Additionally, management should identify, analyze, and respond to risks related to achieving its goals and objectives. According to HUD officials, the agency had not yet fully developed and implemented oversight procedures for postconversion monitoring because, since 2012, the agency has focused on RAD start-up and on review and oversight procedures for the conversion process. HUD officials also said that many projects would receive ongoing monitoring from other parties, which also could serve as a safeguard for unit affordability and help ensure the appropriate financial and physical condition of the property after RAD conversion. For example, just under half of all RAD properties use LIHTC financing as part of financing packages, which can also include local and state bonds. According to HUD officials, oversight by tax credit allocating agencies, investors, and lenders, while not alone sufficient, helps secure affordable units in a property for the long term. However, tax credit allocating agencies, investors, and lenders are not signatories to the HAP contract or use agreement and have no formal role in reasonably ensuring that properties meet requirements exclusive to RAD. Although other entities may exercise some oversight of properties, by not developing and implementing procedures for ongoing oversight, HUD, in its role as program administrator, will not be able to reasonably ensure that properties adhere to requirements or meet basic program goals. Furthermore, without such monitoring, HUD would be limited in its ability to identify and assist properties at risk of foreclosure.
Conclusions
RAD was created to demonstrate the feasibility of converting public housing units to other rental assistance programs to help preserve affordable rental units and address the significant backlog of capital needs in the public housing program. However, demonstrating the feasibility of RAD conversion is contingent on collecting and assessing quality information about the conversion projects. HUD has an opportunity to improve the demonstration's metrics. For instance, implementing robust postclosing oversight and collecting information on financial outcomes upon completion of construction would not only improve HUD's oversight capabilities but also allow it to report quality information. Moreover, limitations in HUD's methodology for calculating leverage ratios for RAD may obscure the effect of funding sources used to help fund RAD conversions, potentially under- or over-reporting the program's capital leveraging. By collecting comprehensive information on final (postcompletion) financing sources and costs and developing quality metrics, HUD would be better positioned to more accurately report the results of the demonstration program. Additionally, a focus on the conversion process itself (and less on its results) and limitations in HUD's data have contributed to limited monitoring by HUD in other areas. Specifically, by not developing and implementing monitoring procedures to assess the effect of RAD on residents, HUD cannot ensure compliance with resident safeguards. Further, HUD collects and maintains household data for the public housing and Section 8 programs, yet it does not systematically use this information to ensure that resident safeguards are in place. Finally, HUD could benefit from additional procedures to assess RAD properties for risks to long-term preservation so that it can respond to property default or foreclosure.
Recommendations for Executive Action
We are making the following five recommendations to HUD: HUD's Assistant Secretary for Housing should include provisions in its postclosing monitoring procedures to collect comprehensive, high-quality data on financial outcomes upon completion of construction, which could include requiring third-party certification of, and collecting supporting documentation for, all financing sources and costs. (Recommendation 1) HUD's Assistant Secretary for Housing should improve the accuracy of RAD leverage metrics—such as by better selecting inputs to the leverage ratio calculation and clearly identifying what the leverage ratio measures—and calculate a private-sector leverage ratio. (Recommendation 2) HUD's Assistant Secretary for Housing should prioritize the development and implementation of monitoring procedures to ensure that resident safeguards are implemented. (Recommendation 3) HUD's Assistant Secretary for Housing should determine how it can use available program-wide data from public housing and Section 8 databases, in addition to resident logs, for analysis of the use and enforcement of RAD resident protections. (Recommendation 4) HUD's Assistant Secretary for Housing should prioritize the development and implementation of procedures to assess risks to the preservation of unit affordability. (Recommendation 5)
Agency Comments and Our Evaluation
We provided a draft of this report to HUD for comment. HUD provided written comments on the draft report, which are summarized below and reproduced in appendix IV. HUD also provided technical comments, which we incorporated as appropriate.
In its comment letter, HUD stated that it agreed with our findings that HUD can improve metrics used to assess program impact and build on existing oversight structures. HUD described actions it intends to take to implement our recommendations to the extent possible and consistent with resource limitations. More specifically, HUD agreed with our first recommendation to ensure it collects comprehensive quality data on financial outcomes in its postclosing monitoring procedures (which could include supporting documentation for all financing sources and costs). HUD agreed it should routinely collect an updated list of funding sources and uses and related documentation when projects had cost overruns or other significant changes. HUD intends to review and revise, as appropriate, required postcompletion certifications. HUD added that in most cases, funding sources and uses do not materially change between closing and construction completion. HUD stated that securing the postclosing information in such cases might be of minimal benefit relative to the additional reporting burden. However, it is not clear how HUD would determine whether projects had significant changes in costs or uses because HUD lacks the postcompletion information that would show the magnitude of changes. With respect to reporting burden, HUD already implemented procedures in October 2017 to collect limited financial information following the completion of construction. We believe any additional reporting would not be disproportionate to the benefits of improving HUD's oversight capabilities through project completion and enhancing its reporting to more accurately reflect the results of the demonstration program. For our second recommendation to improve the accuracy of RAD leverage metrics and calculate a private-sector leverage ratio, HUD agreed that RAD leverage metrics can be improved. HUD will ensure that the private-sector leverage ratio required by statute is clearly identified and included in its RAD evaluation. HUD also intends to identify a small number of relevant leverage ratios with distinct methodologies and will routinely publish these ratios with clear identification and explanations. In relation to our finding of misidentified funding sources, HUD plans to reexamine its chart of accounts and review prior transaction records to address errors and properly classify transaction sources. In response to our third recommendation to prioritize the development and implementation of monitoring procedures for resident safeguards, HUD agreed that it is important to better document and expedite development and implementation of monitoring procedures. HUD also agreed that additional monitoring was needed to ensure the right of residents to request and move with a tenant-based voucher after a period of residency (choice mobility). HUD noted that its Office of Policy Development and Research is seeking funding for additional research on RAD with a focus on the use and effect of choice mobility options, which would inform HUD's monitoring efforts. Finally, while HUD said that we did not find the safeguards to be weak or inadequate, we did not perform an audit designed to assess the safeguards and therefore cannot opine on their adequacy. However, on the basis of our findings, HUD's implementation of these safeguards could be strengthened.
Regarding our fourth recommendation that HUD determine how it can use available program-wide data and resident logs for analysis of RAD resident protections, HUD agreed to examine how it could use its existing data systems to further enhance its monitoring efforts. HUD added that the systems have limitations, so the agency also uses other mechanisms to track and monitor implementation of resident protections. For our fifth recommendation to prioritize the development and implementation of procedures to assess risks to the preservation of unit affordability, HUD agreed that it is important to assess and mitigate risks to unit affordability. HUD stated that it employs robust underwriting standards prior to permitting conversion and relies on existing procedures to conduct ongoing oversight of Project-Based Rental Assistance (PBRA) properties, which we discussed in the draft report. However, as we noted, HUD has not yet developed procedures to more closely monitor RAD properties at risk of foreclosure, though it plans to establish indicators of foreclosure risk and oversight roles and responsibilities within HUD. HUD said that since the summer of 2017, it has been evaluating what additional oversight procedures might be needed for RAD Project-Based Voucher properties. HUD also described plans to augment its existing oversight procedures to preserve affordable units in the event of foreclosure by developing protocols in the following areas: transfer of property ownership to a capable entity, transfer of the rental assistance to another site, and protection of residents in the event a Housing Assistance Payment contract is terminated. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Housing and Urban Development and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology
This report examines aspects of the Department of Housing and Urban Development's (HUD) Rental Assistance Demonstration (RAD) program. More specifically, this report addresses (1) HUD's assessment of the physical and financial outcomes of RAD conversion to date; (2) how RAD conversions affected residents and what safeguards were in place to protect them, including while temporarily relocated and during conversion; (3) what challenges, if any, public housing agencies (PHA) faced in implementing RAD; and (4) the extent to which RAD provisions are designed to help preserve the long-term affordability of units. To address all four objectives, we analyzed agency documentation and interviewed officials from HUD. The documentation we reviewed included policies and procedures for RAD; manuals describing HUD data systems; draft policies and procedures for implementing postclosing oversight; and reports on RAD performance. We interviewed HUD headquarters officials from the Office of Recapitalization within the Office of Housing, which oversees the administration of RAD, and the Office of Policy Development and Research (PD&R).
We also interviewed PHA officials and developers involved in RAD transactions, as well as selected experts and other stakeholders, to obtain their perspectives on RAD. Additionally, we conducted a literature search to identify publications related to RAD. We visited a nonprobability sample of eight PHAs in Maricopa County, Arizona; Alameda County, California; Montgomery County, Maryland; and the cities of San Francisco, California; Baltimore, Maryland; New Bern, North Carolina; El Paso, Texas; and Tacoma, Washington, to observe housing units before, during, or after renovation when possible, as well as common areas that had planned or undergone renovation. We selected sites to include varying PHA sizes, RAD subsidy types, planned rehabilitation and resident relocation, numbers and sizes of RAD transactions, closing dates, construction costs, and geographic locations across the United States. At each site, we conducted semistructured interviews with PHA officials and, when available, developers (5 sites). We also conducted one or two focus-group interviews with groups of 6–15 residents who lived at the converted properties to obtain their perspectives and experiences. In each location, we asked the PHAs to invite residents to participate in the focus groups based on their availability. We also met with the Resident Advisory Board in each location that had one. For 7 of 8 site visits, we selected two RAD properties to conduct resident focus groups (in Alameda County, California, we held one focus group). We conducted a content analysis of the resident focus group interviews to describe resident experiences and the RAD program's effects on residents. Using the selection criteria noted above, we conducted semistructured telephone interviews with an additional nonprobability sample of 10 PHAs in Fresno, California; Fort Collins, Colorado; Dekalb County, Georgia; Chicago, Illinois; Ypsilanti, Michigan; Cuyahoga County, Ohio; Philadelphia, Pennsylvania; Spartanburg, South Carolina; McKinney, Texas; and Yakima, Washington. Because we selected a nonprobability sample of PHAs to visit and interview, the information we obtained cannot be generalized more broadly to all PHAs. However, it provides context on RAD, particularly on implementation challenges and perspectives on physical and financial impacts, long-term affordability, and resident protections. We also selected the following 11 individuals and organizations as experts and stakeholders:
1. Council of Large Public Housing Authorities
2. National Association of Housing and Redevelopment Officials
3. Center on Budget and Policy Priorities
4. Public Housing Authorities Directors Association
5. National Housing Law Project
6. Community Legal Services of Philadelphia
7. Maryland Legal Aid
8. Disability Rights Maryland
9. Jaime Alison Lee, Associate Professor of Law and Director, Community Development Clinic, University of Baltimore School of Law
10. Yumiko Aratani, Assistant Professor, Columbia University Mailman School of Public Health
11. University of California, Berkeley, Terner Center for Housing Innovation
We interviewed experts and stakeholders on resident impacts and implementation challenges associated with RAD. The entities may not represent all views on these topics, but their views provide insights on RAD. To select these individuals and groups, we met with three major PHA associations and two resident advocacy groups, and asked for referrals for organizations or individuals with expertise in RAD.
We also selected a nonprobability, random sample of 31 RAD conversion files to review. Using HUD RAD Resource Desk data, we randomly selected 31 RAD files for properties that had closed conversion as of June 30, 2017, and that planned to incur construction costs. We used the files to help us determine physical changes to RAD conversions and the impacts of RAD on residents through, for example, relocation. We excluded RAD conversions with no construction costs from the random sample because they would have no physical changes and no resident relocation would occur before or during our review. To address our first objective on the physical and financial outcomes of RAD conversion to date and how HUD measured these outcomes, we first obtained and analyzed HUD data on RAD conversions since RAD's authorization (from fiscal years 2013 through 2017). We assessed the reliability of these data by reviewing system documentation, interviewing knowledgeable officials about system controls, and conducting electronic testing. We determined that the data were sufficiently reliable for the purposes of describing rehabilitation and new construction in RAD projects and evaluating RAD leveraging metrics. We included in our analysis all RAD conversions that were active or closed. We used these data to determine the number of closed RAD conversions, associated financial sources and uses, subsidy types, and type of construction (rehabilitation, new construction, and no rehabilitation or new construction). In addition, during our interviews with PHAs and developers, we obtained their perspectives on potential contributing factors to financial decisions and the type of construction pursued through RAD conversion. As noted earlier, we also reviewed 31 randomly selected files of converted properties with construction costs to describe physical changes to properties in RAD conversions. Furthermore, we reviewed HUD documents, such as HUD and PD&R evaluations, publications, and policies and procedures, to gain additional context for how HUD measures RAD outcomes. We also interviewed HUD officials, including PD&R and Office of Recapitalization officials, on RAD data and metrics, as well as other performance monitoring activities. We further analyzed data from the HUD RAD Resource Desk to determine how these data support HUD's metrics and performance monitoring activities. As previously mentioned, we determined that these HUD data were sufficiently reliable for the purposes of this report. Specifically, we assessed and recalculated the RAD leverage ratio and construction activity. We assessed HUD's performance monitoring activities and reporting against the RAD authorizing statute and Standards for Internal Control in the Federal Government. To recalculate estimates of the RAD leverage metric, we obtained documentation from the Office of Recapitalization to review the methodology used to calculate its most recent leverage ratio. We aligned the methodology it provided with RAD Resource Desk Transaction Log data downloaded on August 7, 2017. We replicated HUD's methodology and matched the data utilized with the descriptors from the Transaction Log. To isolate financial sources and manually adjust the "other source" data, we compiled matched descriptors and funding amounts and categorized each observation, based on the funding source description, as a federal source; a state, county, or city source; or a PHA source, among others. For additional information and results, see appendix II.
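The recategorization step described above can be sketched as a simple keyword-matching pass over the free-text source descriptions. The keyword rules and sample notes below are hypothetical; in practice, Transaction Log notes vary, and ambiguous descriptions required analyst judgment rather than automated rules.

```python
# Sketch of the recategorization step: assigning free-text "other source"
# descriptions to broader source categories. Keyword lists and sample
# notes are hypothetical placeholders, not the actual Transaction Log.

RULES = [
    ("federal",           ["hud", "federal", "cdbg", "replacement housing factor"]),
    ("state_county_city", ["state", "county", "city", "municipal"]),
    ("pha",               ["pha", "operating reserve", "public housing"]),
    ("private",           ["bank", "loan", "developer", "equity"]),
]

def categorize(note: str) -> str:
    """Return the first category whose keywords match the description."""
    text = note.lower()
    for category, keywords in RULES:
        if any(k in text for k in keywords):
            return category
    return "unresolved"  # flag for manual review

sample_notes = [
    "CDBG grant from HUD",
    "County HOME funds",
    "PHA operating reserves",
    "Construction loan - First Bank",
    "Misc. source",
]
for note in sample_notes:
    print(f"{note!r} -> {categorize(note)}")
```

Because the rules are applied in order, descriptions matching more than one category are assigned to the first match; anything left unresolved would be reviewed manually, mirroring the judgment applied to unclear notes in the actual data.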
To determine how RAD affected residents in converted units, we analyzed HUD public housing and Section 8 household data before and after conversion (demographic characteristics of residents and changes in rent, income, and location). Specifically, we examined data from 2013—when the first transactions closed—through June 30, 2017. HUD compiled and provided custom extracts of data on households in RAD-converted properties from the Inventory Management System/Public and Indian Housing Information Center (IMS/PIC) (public housing and Section 8 PBV) and the Tenant Rental Assistance Certification System (Section 8 PBRA). We assessed the reliability of the data extracts provided by HUD by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined the data on PBV households were sufficiently reliable for the purposes of our reporting objectives, but that the data on PBRA households were not sufficiently reliable for purposes of describing some characteristics of RAD households. For example, in trying to determine participation in the RAD program by year, we received several thousand PBRA entries that preceded the establishment of the RAD program. Moreover, as we previously mentioned, the postconversion household data for PBRA conversions are in a separate data system, so some variables, such as those related to race, ethnicity, rent, and income, differ from the other household data for that program. Because of these limitations, the data for PBRA households were not reliable for purposes of comparing RAD household characteristics before and after conversion as we had intended. To describe safeguards for residents and help ascertain how HUD implemented protections, we reviewed legal protections and requirements in HUD notices, reviewed selected conversion files, and interviewed HUD officials about monitoring and compliance processes. Finally, as previously described, we held focus groups with residents to better understand any effects on their living conditions and quality of life. To determine challenges PHAs faced in implementing RAD, we reviewed HUD guidance and related documents for PHAs in the program. We also interviewed eight PHAs during our site visits and spoke with another 10 PHAs by telephone about the benefits and challenges of participating in the RAD program. To examine provisions designed to help preserve the long-term affordability of units, we reviewed the RAD authorizing statute and amendments and HUD notices, and interviewed HUD staff to verify our understanding of agency affordability protections. For a sample of 31 randomly selected properties, we examined templates for contractual agreements for RAD closings and analyzed closing documents and contracts to determine if agreements matched program requirements. We interviewed HUD staff and staff of 18 PHAs to obtain viewpoints on the potential strengths or weaknesses of preservation in the case of default or foreclosure. We conducted this performance audit from February 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: HUD’s Reported Leverage Ratios and Our Recalculation Estimates The Department of Housing and Urban Development’s (HUD) Office of Recapitalization collects financial sources and use data from Rental Assistance Demonstration (RAD) participants. Table 2 lists the financial source fields collected by HUD. Table 3 lists the financial cost fields collected by HUD. Table 4 provides additional financial source detail pertaining to HUD’s leverage ratio calculation. Table 5 and Table 6 show the total financial source amounts collected by HUD. Specifically, Table 5 shows total financial source amounts prior to recategorization, while Table 6 shows total financial source amounts after manual adjustments. Manual adjustments included isolating funding source observations in “other funding” fields 1-6 and incorporating them into existing fields, as appropriate. Table 7 replicates HUD’s methodology for calculating the RAD leverage metrics after manual adjustments to HUD data. See Table 4, above, to compare changes in each category. Table 8 recalculates the leverage ratio by deducting federal sources from leveraged sources. Table 9 recalculates the leverage ratio by deducting public sources from leveraged sources (compare to Table 8, above). Appendix III: RAD Resident Safeguard and Monitoring Requirements The Rental Assistance Demonstration (RAD) program has numerous requirements intended to ensure residents whose units are converted through RAD receive certain protections. The following is a description of these safeguards and their reporting and monitoring requirements. Appendix IV: Comments from the Department of Housing and Urban Development
Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Paul Schmidt (Assistant Director), Julie Trinder-Clements (Analyst in Charge), Meghana Acharya, Enyinnaya David Aja, Alyssia Borsella, Juan J. Garcia, Ron La Due Lake, Amanda Miller, Marc Molino, Barbara Roesmann, Jessica Sandler, MaryLynn Sergent, Rachel Stoiko, and William Woods made major contributions to this report.
Why GAO Did This Study HUD administers the Public Housing program, which provides federally assisted rental units to low-income households through public housing agencies (PHAs). In 2010, HUD estimated its aging public housing stock had $25.6 billion in unmet capital needs. To help address these needs, the RAD program was authorized in fiscal year 2012. RAD allows PHAs to move (convert) properties in the public housing program to Section 8 rental assistance programs, and retain property ownership or transfer it to other entities. The conversion enables PHAs to access additional funding, including investor equity, generally not available for public housing properties. GAO was asked to review public housing conversions under RAD and any impact on residents. This report addresses, among other objectives, HUD's (1) assessment of conversion outcomes; (2) oversight of resident safeguards; and (3) provisions to help preserve the long-term affordability of units. GAO analyzed data on RAD conversions through fiscal year 2017; reviewed a sample of randomly selected, nongeneralizable RAD property files; and interviewed HUD officials, PHAs, developers, academics, and affected residents. What GAO Found The Department of Housing and Urban Development (HUD) put procedures in place to evaluate and monitor the impact of conversion of public housing properties under the Rental Assistance Demonstration (RAD) program. RAD's authorizing legislation requires HUD to assess and publish findings about the amount of private-sector leveraging. HUD uses a variety of metrics to measure conversion outcomes. However, the metric HUD uses to measure private-sector leveraging—the share of private versus public funding for construction or rehabilitation of assisted housing—has limitations. For example, HUD's leveraging ratio counts some public resources as leveraged private-sector investment and does not use final (post-completion) data. As a result, HUD's ability to accurately assess private-sector leveraging is limited. HUD does not systematically use its data systems to track effects of RAD conversions on resident households (such as changes in rent and income, or relocation) or monitor use of all resident safeguards. Rather, since 2016, HUD has required PHAs or other post-conversion owners to maintain resident logs and collect such information. But the resident logs do not contain historical program information. HUD has not developed a process for systematically reviewing information from its data systems and resident logs on an ongoing basis. HUD has been developing procedures to monitor compliance with some resident safeguards—such as the right to return to a converted property—and has begun a limited review of compliance with these safeguards. However, HUD has not yet developed a process for monitoring other safeguards—such as access to other housing voucher options. Federal internal control standards require agencies to use quality information to achieve objectives, and obtain and evaluate relevant and reliable data in a timely manner for use in effective monitoring. Without a comprehensive review of household information and procedures for fully monitoring all resident safeguards, HUD cannot fully assess the effects of RAD on residents. RAD authorizing legislation and the program's use agreements (contracts with property owners) contain provisions intended to help ensure the long-term availability of affordable units, but the provisions have not been tested in situations such as foreclosure.
For example, use agreements between HUD and property owners specify affordability and use restrictions that, according to the contract, would survive a default or foreclosure. HUD officials stated that HUD intends to develop procedures to identify and respond to risks to long-term affordability, including default or foreclosure in RAD properties. However, HUD has not yet done so. According to federal internal control standards, agencies should identify, analyze, and respond to risks related to achieving goals and objectives. Procedures that address oversight of affordability requirements would better position HUD to help ensure RAD conversions comply with program requirements, detect potential foreclosure and other risks, and take corrective actions. What GAO Recommends GAO makes five recommendations to HUD intended to improve leveraging metrics, monitoring of the use and enforcement of resident safeguards, and compliance with RAD requirements. HUD agreed with GAO's recommendations to improve metrics and build on existing oversight.
Background The exchanges (including the FFE and those operated by individual states) provide a seamless, single point of access for eligible individuals to enroll in qualified health plans. For the FFE, CMS established a website—Healthcare.gov—as the public portal through which individuals may apply for coverage and select and enroll in health plans, which are offered at different levels of coverage, or “metal tiers”—bronze, silver, gold, and platinum—that reflect the percentage of covered medical expenses estimated to be paid by the insurer. The data that individuals provide in their application are stored in the FFE’s centralized enrollment system, which is maintained by CMS. Although CMS oversees the centralized enrollment system, both CMS and issuers have shared responsibility for enrollment and coverage functions once individuals apply for coverage: CMS is responsible for determining an individual’s eligibility for coverage and income-based federal subsidies, enrolling the individual, and processing subsequent coverage changes or terminations. For example, individuals may change their existing coverage by signing up under an SEP due to the birth of a child or relocation, or they may voluntarily terminate their coverage, or CMS may terminate coverage if the agency is unable to verify key information such as citizenship status. CMS is also responsible for making payments for APTCs and determining whether an enrollee is eligible for any cost-sharing reductions that lower enrollees’ out-of-pocket costs for expenses, such as deductibles and copayments. Issuers are responsible for, among other things, collecting premiums from enrollees, arranging for coverage through provider networks, and paying claims. Issuers are also responsible for processing, and notifying CMS of, terminations related to nonpayment of premiums or fraud. As a result of this shared responsibility, CMS and issuers notify each other of coverage updates by transferring data back and forth through electronic files known as “transaction files.” It is critical that both issuers and CMS have consistent, accurate, and current information on enrollees, because monthly APTC payments are based on enrollment data in CMS’s centralized system. Federal regulations require CMS to reconcile enrollment information with issuers on at least a monthly basis. Accordingly, CMS and issuers reconcile certain key data elements on a monthly basis through an automated enrollment reconciliation process, in which issuer and CMS data are compared and discrepancies are resolved. Through this process, APTC amounts and their effective dates are compared and reconciled. CMS’s data system is considered to be correct when there are discrepancies between issuer and CMS data in overall enrollment counts or in key data elements, such as coverage start and end dates. Therefore, CMS will not change the APTC payments based on issuers’ data that may differ from CMS’s data unless there are significant discrepancies. There are several specific steps involved in transferring data between CMS and issuers for initial enrollment, subsequent updates, and reconciliation (see fig. 1 for a high-level overview of the data transfer process):
Initial enrollment: CMS forwards an outbound electronic transaction file to the issuer with information on the applicant, the plan selection, the premium, and the APTC amount. Once the issuer receives the initial premium payment, the issuer sends an inbound electronic transaction file back to CMS to confirm the enrollment.
Issuers may not refuse to issue coverage to an individual CMS has deemed eligible once that individual has made the initial premium payment. Transaction files are transmitted electronically on a daily basis.
Subsequent changes/terminations: Subsequent changes to the individual’s coverage may be initiated by enrollees, CMS, or issuers. For example, enrollees may request changes to their coverage through the portal if they experience a change in circumstance (such as needing to enroll under an SEP due to the birth of a child, or to terminate their coverage if they move to a different state); CMS may terminate coverage if the agency cannot verify key eligibility information (such as citizenship status); or issuers may terminate coverage if enrollees fail to pay their premiums. If CMS initiates changes in coverage, it notifies issuers through subsequent outbound transaction files, and similarly, if issuers initiate changes they notify CMS through subsequent inbound transaction files.
Monthly reconciliation: CMS sends issuers a snapshot of key elements of the enrollment data in its centralized enrollment system in an outbound reconciliation file. Issuers compare the data from the file to their enrollment systems and identify missing enrollments or other discrepancies. Issuers make updates as necessary and send CMS an inbound reconciliation file with information about current enrollees, cancellations, and terminations in their systems. CMS then performs an automated comparison of the data in the inbound reconciliation files with its centralized enrollment system and identifies any further discrepancies that may need to be resolved either by CMS or issuers. If necessary, CMS makes further updates to its data.
In an April 2017 final rule, CMS implemented several actions that, in part, responded to issuer concerns about special enrollment periods and stability of enrollment. Specifically, CMS stated that the agency would require documentation from all individuals applying to enroll in coverage under an SEP to verify their eligibility for the SEP prior to enrollment. CMS also stated that, starting in June 2017, it would allow issuers, subject to state law, to apply a new premium payment to an individual’s past due payments before applying that premium towards a new enrollment. CMS stated that issuers would be allowed to refuse to provide coverage to an enrollee applying under an SEP due to loss of existing coverage if the issuer had previously terminated the enrollee’s coverage for nonpayment of premiums, unless the enrollee paid the past due premiums. CMS further stated that this provision was intended to encourage individuals to maintain continuous coverage rather than start and stop coverage (and thereby avoid incurring past due premiums). Just over Half of FFE Enrollees Maintained Continuous Coverage throughout 2015; Length of Coverage Varied by Enrollee Characteristics Approximately 4.9 million enrollees (53 percent of the 9.2 million FFE enrollees in 2015) maintained continuous coverage throughout the year—that is, their coverage began between January 1 and March 1, 2015, and lasted through December 31, 2015. These individuals therefore had 10 or more months of continuous coverage, with an average length of coverage of 11.6 months. Most of these enrollees (83 percent) re-enrolled in coverage by June 2016. The remaining 4.3 million enrollees (47 percent of the FFE enrollees in 2015) did not maintain continuous FFE coverage throughout the year, as defined above.
The average length of coverage for these enrollees was about 5.0 months and, for most (72 percent), coverage ended prior to the end of the year. (See fig. 2 for information on enrollee length of coverage.) Of the 4.3 million enrollees, 38 percent re-enrolled in exchange coverage for 2016, although enrollees who held coverage through the end of the year—regardless of their length of coverage—were far more likely to have re-enrolled than enrollees whose coverage ended prior to the year’s end. In general, we did not find notable differences in attributes of enrollees’ coverage (for example, by benefit level of selected plan or monthly premium after APTC) or enrollee demographics when comparing the two groups of enrollees—those who maintained continuous coverage throughout 2015, and those who did not. (For data on coverage and demographics of FFE enrollees who did maintain continuous coverage throughout 2015 and those who did not, see app. I.) However, in examining the demographic and coverage characteristics of all FFE enrollees, we found that enrollees with certain characteristics tended to remain covered for a longer period of time in 2015 compared to other enrollees. For example:
Enrollment period. Enrollees who enrolled during the open enrollment period had a higher average length of coverage than enrollees who enrolled through an SEP—9.1 months compared to 5.2 months (see fig. 3). However, more individuals who enrolled through an SEP remained enrolled through December 31, 2015, compared to individuals who enrolled during open enrollment—72 percent compared to 64 percent.
Age. Enrollees aged 55 or older had the highest average length of coverage, while those aged 25 to 34 had the lowest—9.2 months compared to 7.8 months.
Reported household income. APTC-eligible enrollees who reported having a household income between 301 and 400 percent of the federal poverty level had the highest average length of coverage, while those who reported having a household income less than, or equal to, 100 percent of the federal poverty level had the lowest—8.9 months compared to 8.0 months.
Eligibility for APTC. Enrollees who were eligible for APTC had a higher average length of coverage than enrollees who were not eligible for APTC—8.6 months compared to 7.8 months.
Benefit level of selected plan. Enrollees who selected higher-benefit gold plans had the highest average length of coverage, while enrollees who selected lower-benefit catastrophic plans had the lowest—8.8 months compared to 6.7 months. Enrollees who selected silver plans—the most common plan selection—had an average length of coverage of 8.6 months.
State of residence. Enrollees residing in Maine had the highest average length of coverage, while enrollees residing in Mississippi had the lowest—9.4 months compared to 8.0 months.
See appendix II for additional data on the average length of coverage for enrollees by various characteristics. CMS Lacks Complete and Transparent Data on Terminations of Enrollee Coverage for Nonpayment of Premiums CMS’s data on terminations of enrollee coverage due to nonpayment of premiums are not complete and accurate. CMS officials told us that they collect some information from issuers on their terminations of enrollee coverage for nonpayment of premiums. When issuers terminate policies, the inbound transaction files they send to CMS must include, among other elements, a revised coverage end date taking the termination into account. CMS uploads these data into its centralized FFE enrollment system.
However, while issuers may also include codes that designate the reasons for the terminations, there is no requirement for them to consistently do so. Data on termination codes may therefore not be consistently reported by issuers. CMS officials told us that data on reasons for termination are not tracked because they are not critical to ensure the accuracy of APTC payments—which is the main function of the reconciliation process. Officials stated that the key variables that CMS does track are whether coverage is effectuated (that is, whether the first premium payment has been made), whether the enrollee is eligible for APTC payments, and whether coverage was terminated. In addition, when issuers do report termination reason codes, these data are not always accurate. Specifically, CMS told us that, historically, issuers may have incorrectly used the nonpayment termination code for other types of terminations, and two issuers we interviewed acknowledged having done so. We compared data on terminations for nonpayment from CMS’s centralized enrollment system with data we obtained from three issuers for a small selection of enrollees. We found that for one large issuer operating in multiple states, the CMS data indicated that coverage for 18 of the 26 enrollees we examined had been terminated for nonpayment of premiums, while the issuer data indicated that coverage had been terminated for other reasons, in most cases because it had expired at the end of the year. The issuer indicated that it likely reported these year-end terminations to CMS incorrectly as terminations for nonpayment of premiums. CMS has recently taken actions that may improve the reliability of data on terminations for nonpayment, but these actions do not ensure the data are consistently reported and recorded by CMS. Specifically, in July 2017, CMS indicated that it would add new codes to the transaction files for issuers to use to help prevent inaccurate reporting of the nonpayment termination code. CMS told us that it expects issuers to begin using the new codes in 2018. CMS’s data on terminations for nonpayment therefore may be more reliable beginning in 2018. However, CMS has not required issuers to report the termination reasons in the transaction files because, according to CMS officials, these data are not essential to tracking the accuracy of APTC payments. The agency also does not have plans in the near future to use the data in tracking trends in enrollment and termination of enrollee coverage in the FFE to assess the overall stability of the exchange. Further, CMS does not have a transparent, systematic process for issuers to ensure that data on terminations they initiate due to nonpayment are complete and accurate in the CMS system. Issuers we interviewed told us that they are unable to ascertain whether CMS is correctly updating the FFE enrollment system with the termination reason codes issuers provide when policies are terminated. While issuers can determine from the monthly reconciliation files whether CMS has updated certain issuer data for enrollees whose coverage was terminated (for example, the revised coverage end date), the files do not capture data on reasons for termination. Therefore, issuers are unable to determine if the CMS FFE data on termination reason codes match theirs and make corrections where necessary. Some issuers told us they had requested that CMS add a variable to capture data on termination reasons in the monthly reconciliation files sent to issuers.
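The comparison we performed for the small selection of enrollees is, in essence, a join of issuer and CMS records on an enrollee identifier with a check for mismatched termination reasons. Below is a minimal sketch of that kind of check; the identifiers, field layout, and code values are hypothetical and do not reflect actual transaction file formats.

```python
# Illustrative sketch: compare termination reason codes between a CMS
# extract and an issuer extract. Identifiers, field layout, and code values
# are hypothetical and do not reflect actual transaction file formats.

cms_records = {
    "enrollee-001": "nonpayment",
    "enrollee-002": "nonpayment",
    "enrollee-003": "voluntary",
}
issuer_records = {
    "enrollee-001": "end-of-year expiration",
    "enrollee-002": "nonpayment",
    "enrollee-003": "voluntary",
}

# Flag enrollees whose termination reason differs between the two systems.
mismatches = {
    enrollee: (cms_reason, issuer_records[enrollee])
    for enrollee, cms_reason in cms_records.items()
    if enrollee in issuer_records and issuer_records[enrollee] != cms_reason
}

for enrollee, (cms_reason, issuer_reason) in mismatches.items():
    print(f"{enrollee}: CMS={cms_reason!r}, issuer={issuer_reason!r}")
```

A termination-reason variable in the monthly reconciliation file would allow both parties to run this kind of comparison routinely.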
CMS officials stated that the agency is in the initial stages of exploring whether this would be feasible for CMS and issuers, but that it will require significant resources and time to develop. Although CMS’s recent changes may improve its data, they do not ensure the agency will have complete and transparent data on terminations for nonpayment of premiums. According to federal internal control standards, federal agencies should obtain and use relevant, reliable data to achieve their objectives. Without complete and accurate data, CMS may be allowing enrollees who lost exchange coverage for nonpayment of premiums to re-enroll under SEPs although, under federal regulations, these individuals are ineligible to do so. Issuers reported that this had occurred. CMS officials told us that the agency is exploring options to have its system automatically prevent certain enrollees with prior terminations for nonpayment from enrolling in coverage under an SEP for loss of existing coverage, but noted that this functionality would depend on receiving reliable data on terminations for nonpayment from issuers. Further, without reliable data, CMS may not be able to assess the effects of its April 2017 policy allowing issuers to apply enrollees’ new premium payments toward unpaid premiums over the past 12 months. This is because the agency lacks the complete and accurate data that would be necessary to ensure that issuers are correctly identifying enrollees terminated for nonpayment. Conclusions In its role as administrator of the FFE, it is important for CMS to assess the overall stability of the exchange by, among other things, tracking trends in enrollment and termination of enrollee coverage and addressing issuers’ concerns, where appropriate, to ensure their continued participation in the exchange. Issuers have raised concerns that the SEP regulations potentially allow individuals to enroll in coverage despite having their coverage terminated for nonpayment of premiums. However, CMS does not have the data needed to determine the extent of these problems. While CMS has made some efforts to improve the accuracy of the agency’s data on terminations for nonpayment, it has not indicated whether the agency will require issuers to consistently and accurately report these data. Moreover, CMS has no way to ensure the reliability and transparency of the data, because the existing process—the exchange of monthly reconciliation files between CMS and issuers—does not have a place to capture these data. CMS could capitalize on this existing process, already familiar to issuers, by adding a variable that captures data on termination reasons to the monthly reconciliation file and tracking its accuracy. By taking this step, in addition to requiring issuers to report these data, CMS could help ensure it has reliable and transparent data on terminations of enrollee coverage for nonpayment of premiums, and it could use these data to assess the effects of CMS policies and the overall stability of the exchange. Recommendations We are making the following two recommendations to CMS: The Administrator of CMS should ensure that CMS has complete data on terminations of enrollee coverage for nonpayment of premiums by requiring issuers to report these data. (Recommendation 1) The Administrator of CMS should provide a transparent process for issuers and CMS to systematically reconcile discrepancies in their data on terminations of enrollee coverage for nonpayment of premiums. 
(Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to HHS. HHS provided written comments, which are reprinted in appendix III. HHS concurred with our first recommendation to require issuers to report data on terminations of enrollee coverage for nonpayment of premiums. HHS noted that it currently collects information on termination reasons on enrollment transactions with issuers, and that it would review the requirements for collection of these data to identify possible improvements. HHS also concurred with our second recommendation to ensure a transparent process for issuers and CMS to systematically reconcile discrepancies in their data on terminations of enrollee coverage for nonpayment of premiums. HHS stated that it would consider how to incorporate reconciliation of these data into its existing monthly data reconciliation process with issuers, balancing issuer and agency burdens against the benefits of doing so. As agreed with your office, unless you publicly announce the contents of the report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Demographic and Coverage Characteristics of Federally Facilitated Exchange Enrollees, 2015 Table 1 provides information on demographic characteristics for federally facilitated exchange (FFE) enrollees who maintained continuous coverage throughout 2015—defined as beginning coverage by March 1, 2015, and maintaining it without any gaps through December 31, 2015—and for all other 2015 FFE enrollees. Table 2 provides information on the characteristics of these enrollees’ coverage. Table 3 provides the extent to which enrollees maintained continuous coverage throughout 2015 by their state of residence. Appendix II: Average Length of Coverage for Federally Facilitated Exchange Enrollees, 2015 Table 4 provides information on average length of coverage for all 9.2 million federally facilitated exchange enrollees in 2015 by various demographic characteristics. Table 5 provides information on average length of coverage for these enrollees by characteristics of the enrollees’ coverage. Table 6 provides information on average length of coverage for enrollees by their state of residence. Appendix III: Comments from the Department of Health and Human Services Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, William Hadley (Assistant Director), Iola D’Souza (Analyst in Charge), Richard Lipinski, Peter Mann-King, and Priyanka Sethi Bansal made key contributions to this report. Also contributing were Muriel Brown, Laurie Pachter, and Emily Wilson.
Why GAO Did This Study CMS has noted that it is important for enrollees to maintain continuous health insurance coverage to ensure the stability of the FFE. Certain rules allow for enrollment flexibilities—such as special enrollment periods and a 3-month grace period before coverage is terminated for recipients of federal income-based subsidies who default on their premiums. However, some issuers have stated that these rules could be misused, resulting in non-continuous coverage. There are limited data on the extent to which enrollees maintain continuous coverage during a year and, more specifically, on the extent to which coverage is terminated for nonpayment of premiums. GAO examined (1) the extent to which FFE enrollees maintained coverage in 2015 and (2) the extent to which CMS has reliable data on termination of enrollees' coverage for nonpayment of premiums. GAO analyzed CMS's 2015 FFE enrollment data (the most recent year of available data); interviewed CMS officials and selected issuers; and reviewed applicable laws and guidance from CMS. What GAO Found In 2015, 9.2 million individuals enrolled in the federal health insurance exchange in 37 states. Eligible individuals (e.g., U.S. citizens or those lawfully present in the United States) are able to enroll in health coverage during the annual open enrollment period. Outside of open enrollment, eligible individuals may enroll in coverage or change their coverage selection during special enrollment periods. Individuals may enroll under a special enrollment period if, for example, they lost their coverage from another source, such as Medicaid or an employer, or due to relocation. Under federal regulations, enrollees may not sign up for coverage under a special enrollment period citing loss of coverage if the coverage was lost due to nonpayment of premiums. About half (53 percent) of the 2015 federally facilitated exchange (FFE) enrollees maintained continuous health insurance coverage throughout the year—that is, they began coverage between January 1 and March 1, 2015, and maintained it through December 31, 2015. These individuals had an average of 11.6 months of coverage. The remaining 47 percent of FFE enrollees started their coverage later or ended it during the year; they averaged 5.0 months of coverage. Enrollees could have voluntarily ended coverage—due to gaining other coverage, for example—or have had it terminated by the Centers for Medicare & Medicaid Services (CMS) or the issuers of coverage for valid reasons, including losing eligibility for exchange coverage or for nonpayment of premiums. CMS does not have reliable data on issuer-generated terminations of coverage for enrollees' nonpayment of premiums. Although CMS and issuers share data on the terminations each generates and reconcile their data on a monthly basis to ensure data accuracy, the agency does not require issuers to consistently report data on the reasons for terminations. Officials told GAO they do not track these data because they are not critical to ensure the accuracy of the federal subsidy amounts—which is the main function of the monthly reconciliation process. Further, CMS lacks a transparent process to ensure the accuracy of these data, as the monthly reconciliation files transmitted between CMS and issuers do not include a place to capture data on termination reasons.
Issuers said that they are therefore unable to ascertain whether data they provide on the reasons for termination match CMS's data, and thus they cannot make corrections where necessary. The agency's lack of reliable data on terminations for nonpayment limits its ability to effectively oversee compliance with certain federal regulations. For example, because CMS is not systematically tracking these data, it cannot tell whether enrollees applying for coverage under a special enrollment period had lost their coverage for nonpayment of premiums—in which case they would be ineligible for the special enrollment period per federal regulations. CMS could capitalize on its existing process, already familiar to issuers, by adding a variable that captures data on termination reasons to the monthly reconciliation file. By taking this step, in addition to requiring issuers to report these data, CMS could help ensure it has reliable and transparent data on terminations of enrollee coverage for nonpayment of premiums, and it could use these data to assess the effects of CMS policies and the overall stability of the exchange. What GAO Recommends GAO recommends that CMS ensure it has (1) complete data on terminations of coverage for nonpayment of premiums; and (2) a transparent process to reconcile discrepancies and ensure the accuracy of these data. The Department of Health and Human Services concurred with both recommendations.
Coast Guard Faces Challenges in Effectively Managing Its Acquisition Portfolio Short-term Prioritization through the Annual Budget Process and the 5-Year Capital Investment Plan Limit Effective Planning We found in September 2012, and in our July 2018 review, that the Coast Guard’s approach of relying on the annual budget process and the 5-year CIP to manage portfolio affordability does not provide the best basis for making decisions to develop a more balanced and affordable portfolio in the long term. Further, in June 2014, we found that there is no evidence that short-term budget decisions will result in a good long-term strategy, and the Coast Guard’s annual budget-driven trade-off approach creates constant churn as program baselines must continually re-align with budget realities instead of budgets being formulated to support program baselines. This situation results in trade-off decisions between capability and cost being pushed into the future. For example, since 2010, the Coast Guard has had a stated requirement for three medium polar icebreakers, but it has only one operational medium icebreaker, the Healy, which has an expected end of service life—the total period for which an asset is designed to operate—in 2029. Despite the requirement for three medium polar icebreakers, Coast Guard officials said they are not currently assessing acquisition of the medium polar icebreakers because they are focusing on the heavy icebreaker acquisition and plan to assess the costs and benefits of acquiring medium polar icebreakers at a later time. As required by statute, the Coast Guard has, since 2012, prepared a 5-year CIP that it is required to update and submit annually with the administration’s budget request. The 5-year CIP is the Coast Guard’s key acquisition portfolio planning tool. However, in our July 2018 review, we found shortcomings in that plan that limit its effectiveness. Specifically, we found that the Coast Guard’s 5-year CIPs continue to demonstrate a pattern of certain ineffective planning practices, such as not identifying priorities or trade-offs between acquisition programs and not providing information about the effect of current decisions on the overall affordability of the acquisition portfolio. These shortcomings limit the Coast Guard’s ability to manage the affordability of its acquisition portfolio. Coast Guard officials said the CIP reflects the highest priorities of the department within the given top funding level and that prioritization and trade-off decisions are made as part of the annual budget cycle. However, the reasoning behind these decisions, and the resulting impacts on affected programs, are not articulated in the CIPs. While the Coast Guard is not required under statute to identify the effects of trade-off decisions in the CIP, failing to show which acquisitions would take on more risk—such as delays to certain recapitalization efforts—so other acquisitions can be prioritized and adequately funded within budget parameters also makes it difficult for Congress and other stakeholders, such as the Department of Homeland Security (DHS) and the Office of Management and Budget (OMB), to understand any other options the Coast Guard considered. GAO’s Cost Estimating and Assessment Guide states that comparative analyses showing facts and supporting details among competing alternatives, such as budget priorities, should consider trade-offs needed to identify solutions and manage risk.
In the report we issued today, we recommended that the Coast Guard work with Congress to include in the Coast Guard’s annual 5-year CIP a discussion of the acquisition programs it prioritized and a description of how trade-off decisions could affect other acquisition programs. DHS agreed with our recommendation and plans to include additional information in future CIP reports to address how trade-off decisions could affect other major acquisition programs. The Coast Guard plans to implement this recommendation by March 2020. In June 2014, we found that the Coast Guard needed to take a more strategic approach in managing its acquisition portfolio. We recommended that the Coast Guard develop a 20-year fleet modernization plan that would identify all acquisitions necessary for maintaining at least its current level of service and the fiscal resources necessary to build these assets. DHS concurred with this recommendation, and the Coast Guard is in the process of developing a 20-year Long-Term Major Acquisitions Plan to guide and manage the affordability of its acquisition portfolio, but DHS has not yet approved the plan. Such an analysis would facilitate a fuller understanding of the affordability challenges facing the Coast Guard while it builds the Offshore Patrol Cutter, among other major acquisitions. The lack of a long-term plan, combined with continuing to determine priorities and make trade-off decisions based on the annual budget, has rendered the Coast Guard’s acquisition planning reactive. We found that reactive planning and the Coast Guard’s constrained budget environment have created a bow wave of near-term unfunded acquisitions, negatively affecting future acquisition efforts and potentially affecting future operations. This bow wave consists of new acquisition programs and recapitalization efforts, as well as high-cost maintenance projects that use the same acquisition construction and improvements account, which continue to put pressure on available resources. These projects include some that are not currently identified in the 5-year CIP. For instance, the Coast Guard’s 87-foot patrol boats are forecast to require recapitalization beginning in 2023. Additionally, the ocean-going 175-foot coastal buoy tenders are past the point in their service lives when a midlife maintenance availability would normally have been conducted. In July 2018, we found that the Coast Guard has historically operated vessels well past their expected end of service life, and it will likely need to do so with these assets given limited available acquisition funding. Executive Oversight Council Has Not Conducted Annual Reviews of All Acquisitions Collectively The Coast Guard has a management body—the Executive Oversight Council—in place to conduct oversight of its major acquisition programs; however, this management body has not conducted oversight across the entire acquisition portfolio using a comprehensive, collective approach. Among the Coast Guard’s three cross-directorate groups that have roles in the acquisition process, we found in July 2018 that the Executive Oversight Council is best positioned to oversee the portfolio collectively and has the potential to implement key portfolio-wide management practices, including conducting formal reviews and issuing reports. This council has cross-directorate senior-level management representation, access to information on acquisition programs, and support from the other two cross-directorate groups (the Systems Integration Team and the Resource Councils).
However, this council has not carried out these portfolio-wide practices. In 2014, the Coast Guard updated the Executive Oversight Council’s charter, in response to our September 2012 recommendation, adding the responsibility for portfolio-wide oversight to include conducting an annual review to assess and oversee acquisitions collectively. However, in our July 2018 review, we found that the Coast Guard revised the council’s charter in June 2017, removing this responsibility. According to Executive Oversight Council officials, this responsibility was removed from the 2017 charter because the council did not conduct these annual reviews. Instead, Executive Oversight Council officials indicated that the council facilitates a balanced and affordable portfolio of acquisition programs through the individual program-level reviews. Best practices state that successful organizations assess product investments in aggregate, rather than as independent projects, products, or programs. For example, considering the requirements, acquisition, and budget processes collectively helps organizations prioritize their product investments. Further, we found that the Executive Oversight Council has not engaged in overseeing or reporting on the acquisition portfolio collectively and annually. OMB’s 2017 Capital Programming Guide outlines a capital programming process, including how agencies should effectively and collectively manage a portfolio of capital assets. This OMB guidance states that a senior-level executive review committee should be responsible for reviewing the agency’s entire capital asset portfolio on a periodic basis and for making decisions and setting priorities on the proper composition of agency assets needed to achieve strategic goals and objectives within the budget limits. In the case of the Coast Guard, only the Executive Oversight Council has members at the senior executive level and has the responsibility for oversight of its major acquisition programs. Without conducting comprehensive, collective portfolio reviews at the senior management level, the Coast Guard does not have sufficient cross-directorate information to determine needed trade-offs in the major acquisitions realm, considering budget realities. It is also limiting its ability to make strategic decisions on future requirements and capability gaps in a timely manner within the acquisition portfolio. In our July 2018 report on Coast Guard recapitalization efforts, we recommended that the Commandant of the Coast Guard require the Executive Oversight Council, in its role to facilitate a balanced and affordable acquisition portfolio, to annually review the acquisition portfolio collectively, specifically for long-term affordability. DHS disagreed with our recommendation, stating that other bodies within the Coast Guard—such as the Investment Board, Deputies Council, and Investment Review Board—are responsible for making decisions regarding out-year funding, while the Executive Oversight Council works outside of the annual budget process. DHS also stated that, to meet the spirit of our recommendation, the Coast Guard will update the Executive Oversight Council’s charter to require a review of the collective acquisition portfolio, specifically evaluating long-term planning. We believe that updating the Executive Oversight Council’s charter to include long-term planning is a positive step.
However, we continue to believe that, in addition to long-term planning, the Executive Oversight Council should also include in its reviews the budget realities facing the Coast Guard's major acquisition portfolio, that is, long-term affordability. If the planning accounts for long-term funding considerations to achieve the Coast Guard’s acquisition goals and objectives, we believe the intent of our recommendation would be met. Coast Guard’s Heavy Polar Icebreaker Program’s Optimistic Schedule Is Driven by Capability Gap Rather Than Knowledge-Based Analysis The Coast Guard’s short-term planning focus has, in part, driven the acquisition of its heavy polar icebreaker program to its current situation—trying to meet a highly optimistic schedule. The heavy polar icebreaker program is intended to field three new icebreakers to replace the Coast Guard’s sole operational heavy polar icebreaker, the Polar Star. The Polar Star is expected to reach the end of its service life between 2020 and 2023, while the first heavy polar icebreaker is expected to be delivered in fiscal year 2023, with the second and third icebreakers expected to be delivered in 2025 and 2026, respectively. Figure 1 shows the potential icebreaking capability gap. We are currently conducting a review of the heavy polar icebreaker acquisition, and, preliminarily, we have found that the Coast Guard set an optimistic schedule baseline for the delivery dates for new polar icebreakers based on the ice-breaking capability gap rather than an analysis of what is realistic and feasible. Rather than building a schedule based on knowledge—such as determining realistic schedule targets, analyzing how much time to include in the schedule to buffer against potential delays, and comprehensively assessing schedule risks—the Coast Guard used the estimated end date of the Polar Star’s service life as the primary driver to set the lead icebreaker’s objective (or target) delivery date of September 2023 and threshold (latest acceptable) delivery date of March 2024. Design study information provided by several shipbuilders estimated that it could take up to 3.5 years to build the lead icebreaker, but the Coast Guard is planning for a more optimistic estimate of 2.5 years for the delivery date. Our best practices for developing project schedules state that estimating how long an activity takes should be based on the effort required to complete the activity and the resources available and not driven by a specific completion date. In addition, preliminary findings indicate the Coast Guard did not conduct analysis to identify a reasonable amount of margin or time to include in the program schedule baseline to account for any delays in the program. The current heavy polar icebreaker’s schedule includes only 6 months of margin between the Coast Guard’s target and latest acceptable delivery dates. However, our analysis of recent shipbuilding acquisitions shows that longer schedule delays, whether they are in the program’s control or not, should be expected. For example, among the 12 selected shipbuilding acquisition programs active in the last 10 years that we analyzed, the Navy and the Coast Guard have delayed delivery of all but one lead ship from their original planned delivery dates, with delays ranging from 9 to 75 months.
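To make the margin comparison concrete, the sketch below contrasts a 6-month target-to-threshold margin with a set of historical lead-ship delays of the kind we analyzed. The individual delay values are placeholders spanning the reported 9-to-75-month range (with one on-time delivery), not the actual data for the 12 programs.

```python
# Illustrative sketch: compare a 6-month schedule margin against historical
# lead-ship delivery delays. The delay values are placeholders spanning the
# reported 9-to-75-month range, not the actual data for the 12 programs.

margin_months = 6  # margin between target and latest acceptable delivery
historical_delays = [0, 9, 12, 18, 24, 30, 38, 45, 52, 60, 68, 75]  # months

exceeding = sum(1 for delay in historical_delays if delay > margin_months)
share = exceeding / len(historical_delays)
print(f"{exceeding} of {len(historical_delays)} past lead ships "
      f"({share:.0%}) were delayed beyond a {margin_months}-month margin")
```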
We have found in our past shipbuilding work that delays have resulted from a number of issues, including redesign work to address discoveries during pre-delivery testing, key system integration problems, and design quality issues, among others. However, Coast Guard officials told us such risks are not accounted for in the heavy polar icebreaker schedule. We plan to issue a report on the Coast Guard’s heavy polar icebreaker acquisition this summer. In addition, we will continue to review this program in our annual assessment of major acquisition programs. Coast Guard Faces Sustainment Challenges for the Polar Star and 270-foot Medium Endurance Cutters We found in July 2018 that the Coast Guard’s heavy polar icebreaker Polar Star and the Medium Endurance Cutters are currently either approaching or operating beyond the end of their design service lives. These cutters are in need of major maintenance overhauls—or Service Life Extension Projects (SLEP)—in order to continue providing capabilities to operators. According to Coast Guard officials, SLEPs are necessary because the Coast Guard does not have the funds available to initiate a new major acquisition program to recapitalize these assets in the short term, or because a significant amount of maintenance work is required to keep these assets operational until replacements are fielded. These planned SLEPs involve several risks, including scheduling and funding. Heavy Icebreaker Polar Star Has Required More Maintenance than Planned to Remain Operational After the Polar Star was placed in a nonoperational status in 2006 due to equipment problems, the Coast Guard conducted reactivation work on the cutter from 2010 to 2013, and the icebreaker resumed its primary mission for the annual deployment to the National Science Foundation’s McMurdo Research Facility in Antarctica in 2014. Further, our July 2018 review indicated that the Coast Guard is planning a SLEP on the Polar Star to keep it operational until the first and second new heavy polar icebreakers are delivered in order to bridge a potential operational gap. This approach, according to Coast Guard officials, would allow the Coast Guard to operate a minimum of two heavy icebreakers once the first polar icebreaker is delivered and provide the Coast Guard with a self-rescue capability—the ability for one icebreaker to rescue the other if it became incapacitated while performing icebreaking operations. However, we found that the Coast Guard’s plans to conduct this SLEP during its annual depot-level maintenance periods—that is, maintenance that is beyond the capability of the crew of a cutter or other asset—may not be feasible given the amount of maintenance already required on the cutter. Specifically, the Polar Star’s mission capable rating (an asset’s availability to conduct operations) has been decreasing in recent years and reached a low point of 29 percent—well below the target of 41 percent—from October 2016 to September 2017. Based on mission capable data, we found this was mostly due to additional time spent in depot-level maintenance, which has increased in recent years from about 6 months in 2015 to more than 8 months in 2017. Additionally, the Polar Star has required extensions of about 3 months for its annual dry dock periods—the period of time when a cutter is removed from the water so that maintenance can be conducted—in 2016 and 2017 to complete required maintenance activities. These dry docks were originally planned to last between 2 1/2 months and 4 months.
We found in July 2018 that these delays and extensions are likely to continue as the cutter ages. According to Coast Guard officials, the Polar Star’s SLEP work will be conducted during the annual dry dock periods by adding an additional 1 or 2 months to the annual dry docks. However, if the work cannot be completed during this time frame, it could force the Coast Guard to miss its commitment to conduct its annual Antarctica mission. Coast Guard maintenance officials stated that until the Polar Star completes the SLEP, its repairs will likely continue to get more expensive and time-consuming. As we found in July 2017, the Polar Star SLEP effort has a rough-order cost estimate of $75 million, which is based on the reactivation work completed in 2013. However, we found this estimate may be unrealistic based on assumptions the Coast Guard used, such as that it would continue to use parts from the Coast Guard’s other heavy polar icebreaker, the Polar Sea, which has been inactive since 2010. The Coast Guard’s recent assessment of the Polar Star’s material condition—the physical condition of the cutter, which includes the hull structure, habitability, major equipment systems, and spare parts availability—was completed in January 2018. The material assessment stated that many of the available parts from the Polar Sea have already been removed and installed on the Polar Star. As a result of the finite supply of parts available from the Polar Sea, the Coast Guard may have to acquire new parts for the Polar Star that could increase the $75 million SLEP estimate. The Polar Star’s recent material assessment will form the basis for determining which systems will be overhauled during the SLEP and for developing a more detailed cost estimate. The Coast Guard expects the Polar Star SLEP to begin by June 2020, at which time the Polar Star could reach the end of its current useful service life (currently projected to be between 2020 and 2023). This timeline contains risk that the Polar Star could be rendered inoperable before the cutter is able to undergo a SLEP. We will continue to monitor the Polar Star’s SLEP through our annual review of DHS programs. Coast Guard Is Developing Plans to Extend Medium Endurance Cutters’ Service Lives The Coast Guard operates two fleets of Medium Endurance Cutters (270-foot and 210-foot cutters), and both are either approaching or have exceeded their design service lives. According to Coast Guard officials, there are no plans to extend the service lives of the 210-foot Medium Endurance Cutters due to the age of the vessels (some of the cutters will be over 60 years old when they are expected to be removed from service). However, we found in July 2018 that, according to Coast Guard maintenance officials, the primary problem facing the 270-foot Medium Endurance Cutters is obsolescence of parts. The cutters have several systems that are no longer manufactured, and in many cases the original manufacturer no longer makes parts for the systems, such as the generators, fire pumps, and main diesel engines. To sustain the 270-foot Medium Endurance Cutters until the replacement cutters—the Offshore Patrol Cutters—are delivered, the Coast Guard is planning to conduct a SLEP. Coast Guard officials stated they are evaluating how many of the 13 270-foot cutters will undergo the SLEP. According to Coast Guard officials, the Offshore Patrol Cutter acquisition program is on track to meet its cost and schedule goals.
The Coast Guard is in the process of completing the design of the cutter before starting construction, which is in line with GAO-identified shipbuilding best practices. In addition, Coast Guard officials stated that the program is using state-of-the-market technology that has been proved on other ships as opposed to state-of-the-art technology, which lowers the risk of the program. The Coast Guard expects to start construction of the first Offshore Patrol Cutter in fiscal year 2019 and procure a total of 25 ships, with plans to initially fund one cutter per year and eventually two cutters per year until all 25 cutters are delivered. Further, Coast Guard officials have stated that if the Offshore Patrol Cutter program experiences any delays, it will likely decrease the Coast Guard’s operational capacity because the legacy Medium Endurance Cutters will require increased downtime for maintenance and other issues, reducing their availability. As we indicated earlier, short-term planning limits the Coast Guard’s ability to identify and consider trade-offs within its acquisition portfolio. The Coast Guard is evaluating how long the 270-foot Medium Endurance Cutters should remain in service. According to Coast Guard officials, this decision is at least partially dependent on the delivery of the Offshore Patrol Cutters—specifically the shipbuilder’s ability to deliver 2 cutters per year, which is expected to start in fiscal year 2024 with the 4th and 5th cutters. Officials stated that the Coast Guard does not plan to operate any Medium Endurance Cutters once all 25 Offshore Patrol Cutters are operational, yet the fiscal year 2018 through 2022 CIP report indicates that 7 of the 270-foot Medium Endurance Cutters will still be in service when all 25 Offshore Patrol Cutters are delivered and operational. Officials said this is a contingency plan in case not all Offshore Patrol Cutters are delivered on time. Figure 2 shows the planned delivery dates for the Offshore Patrol Cutters and the proposed decommissioning dates for the legacy Medium Endurance Cutters. The fiscal year 2018 through 2022 CIP shows that there is little, if any, gap between when the 210-foot and 270-foot Medium Endurance Cutters will be removed from service and when the Offshore Patrol Cutters will be operational. However, both Medium Endurance Cutter classes will be well past their end of service lives by the time they are decommissioned. For instance, in our July 2012 report, we found that the 210-foot Medium Endurance Cutter Dependable reached its end of service life in 2006. Nevertheless, based on the fiscal year 2018 through 2022 CIP, we found that the Coast Guard plans for the cutter to operate for an additional 23 years (until 2029) without any major sustainment work to extend its service life. While it is not unusual for the Coast Guard to operate cutters for longer than originally planned, the lack of a more comprehensive, collective portfolio management approach, in part, will result in some of the Medium Endurance Cutters operating over 60 years, which is 30 years beyond their original design service lives. In addition, the Coast Guard’s own assessments indicate likely challenges. For instance, in the Coast Guard’s February 2017 Sustainability Assessment of the 210-foot Medium Endurance Cutters, the Coast Guard rated 5 of the 14 cutters as high risk for sustainability, which reflects either poor material condition or high maintenance costs.
Moreover, the most recent material condition assessments for the Medium Endurance Cutters, completed in 2015, found that

- 210-foot Medium Endurance Cutters cannot be expected to meet operational requirements at normal depot-level maintenance funding levels, due to the time required to complete maintenance and the increased maintenance costs in recent years; and

- mission effectiveness of the 270-foot Medium Endurance Cutters will continue to degrade without a near-continuous recapitalization of older sub-systems.

In July 2012, we found that as assets age beyond their design service lives, they can negatively affect the Coast Guard's operational capacity to meet mission requirements as the cutters require more maintenance. We will continue to monitor the Medium Endurance Cutters' SLEP and the Offshore Patrol Cutter acquisition in our annual review of major acquisition programs.

In conclusion, as the Coast Guard continues modernizing its fleet and sustaining existing assets for longer than planned, it is important that it develop a more strategic and comprehensive approach for managing its portfolio so that future requirements and capability gaps can be addressed in a timely manner. The Coast Guard has a history of using its annual budgets to plan its acquisition portfolio, which leads to ever-changing priorities and creates deferred acquisitions and a bow wave of future funding requirements. This bow wave has already begun, and the Coast Guard will continue to add to it until it adopts a longer-term focus, such as the 20-year Long Term Major Acquisition Plan that we recommended in 2014. With this plan, the Coast Guard has an opportunity to lay the foundation for the success of its future acquisition portfolio by showing what assets are needed and how much they are expected to cost, and it would position itself to provide decision makers with the critical knowledge needed to prioritize its constrained acquisition funding. In the meantime, the Coast Guard would benefit from describing in the 5-year CIP how the annual trade-off decisions that are made could affect other acquisition programs. This would help decision makers understand the needs of the Coast Guard so that they can better allocate taxpayer dollars as they invest in new, more capable Coast Guard assets.

Chairman Hunter, Ranking Member Garamendi, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this statement, please contact Marie A. Mak, (202) 512-4841 or makm@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Rick Cederholm, Assistant Director; Peter W. Anderson; John Crawford; Claire Li; Roxanna Sun; and David Wishard.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The Coast Guard, a component within DHS, is spending billions of dollars to acquire assets, such as cutters and aircraft. This portfolio of major acquisition programs is intended to help the Coast Guard accomplish its missions—including interdicting illegal drugs and search and rescue missions. GAO's extensive prior work on Coast Guard acquisitions has found that the Coast Guard's reliance on its annual budget process to manage its portfolio is a major management challenge. In the report issued today, GAO discusses particular challenges with the Coast Guard's approach to managing its acquisition portfolio, such as not performing a collective assessment of the portfolio to ensure affordability.

This statement addresses the challenges the Coast Guard faces in (1) managing its overall acquisition portfolio, and (2) sustaining aging assets. This statement is based on GAO's extensive body of published and ongoing work examining the Coast Guard's acquisition efforts over several years.

What GAO Found

The Coast Guard's approach of relying on the annual budget process and the 5-year Capital Investment Plan (CIP) to manage its acquisition portfolio does not provide the best basis for making decisions to develop a more balanced and affordable portfolio in the long term. Specifically, the Coast Guard's annual budget-driven trade-off approach creates constant churn as program baselines must continually re-align with budget realities instead of budgets being formulated to support program baselines. Further, Coast Guard officials have told GAO the CIP reflects trade-off decisions made as part of the annual budget process, but it does not describe the effects of those trade-offs because including such information is not statutorily required. This short-term approach has also left the Coast Guard with a bow wave of near-term unfunded acquisition programs, putting future missions at risk. Until these trade-offs are transparent to all stakeholders and decision makers, the effectiveness of the Coast Guard's long-term acquisition portfolio planning is limited.

Until new assets being acquired become available, the Coast Guard plans to rely on aging assets, many of which are already past their intended service lives—the time an asset is expected to operate. For example, the Coast Guard plans to replace the Medium Endurance Cutters (see figure) with the Offshore Patrol Cutters beginning in 2023, but the Medium Endurance Cutters exhausted their intended service lives in 2014. The Coast Guard plans to extend service lives for some of the Medium Endurance Cutters to keep them operating longer; however, maintenance for these vessels is becoming more expensive, and some systems are obsolete. GAO will continue to monitor the maintenance effort for the Medium Endurance Cutters and the Offshore Patrol Cutter acquisition in an annual review of Department of Homeland Security (DHS) major acquisition programs.

What GAO Recommends

The report on which this statement is primarily based (GAO-18-454) recommends that the Coast Guard work with Congress to include in its annual CIP a discussion of how trade-off decisions could affect other acquisition programs. DHS agreed with this recommendation. GAO has also made other recommendations in this area in the past, as discussed in this testimony.
Background

U.S.-bound Air Cargo and the Air Cargo Supply Chain

In fiscal year 2017, about 13 billion pounds of cargo was transported on aircraft to the United States—over 5 billion pounds was transported on passenger aircraft (e.g., Delta and United Airlines), and about 8 billion pounds was transported on all-cargo aircraft (e.g., FedEx and United Parcel Service)—from over 300 foreign airports, according to our analysis of Bureau of Transportation Statistics data. U.S.-bound air cargo can vary widely in size and include such disparate items as electronic equipment, automobile parts, clothing, medical supplies, fresh produce, and cut flowers.

The international air cargo shipping process involves a complex network of business entities that includes individual shippers, manufacturers, transportation companies, freight forwarders, warehouses, and air carriers. Entities within the supply chain may provide all services (warehousing, consolidation, and loading of air cargo, for example) or only certain services. The standards set by the International Civil Aviation Organization (ICAO) focus on four primary types of entities: known and unknown consignors (i.e., individual shippers, manufacturers, other shipping entities), regulated agents (i.e., freight forwarders, handling agents), and commercial air carriers. Various other air cargo supply chain entities also have responsibilities for applying specific types of security controls in accordance with the international standards. Figure 1 shows an illustrative example of the flow of U.S.-bound air cargo and where in the supply chain the cargo can be secured.

TSA and Air Carrier Responsibilities for Ensuring the Security of U.S.-Bound Air Cargo

The Aviation and Transportation Security Act (ATSA), enacted into law shortly after the September 11, 2001 terrorist attacks, established TSA and gave it responsibility for securing all modes of transportation, including the nation's civil aviation system, which includes U.S.- and foreign-flagged air carrier operations to, from, within, or overflying the United States, as well as the foreign point-to-point operations of U.S.-flagged carriers. Among other things, ATSA requires, in general, that TSA provide for the screening of all passengers and property, including cargo transported by air carriers. ATSA further requires that a system be in operation to screen, inspect, or otherwise ensure the security of the cargo transported by all-cargo aircraft to, from, and within the United States, but it did not establish a firm deadline for the implementation of such a system.

Further, to help enhance civil aviation security, the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act) mandated that DHS establish a system within 3 years of enactment (enacted August 3, 2007) to screen 100 percent of air cargo transported on all passenger aircraft operated by an air carrier traveling to, from, within, or overflying the United States. TSA reported that it met the mandate to screen 100 percent of domestic air cargo transported on passenger aircraft in August 2010 and U.S.-bound air cargo transported on passenger aircraft from foreign airports in August 2013. There is no comparable 100 percent screening requirement in statute for cargo transported to the United States on all-cargo air carriers. However, TSA requires that all cargo transported on U.S.-bound flights be screened or subjected to security controls that prevent the introduction of explosives, incendiaries, or other destructive devices.
If the cargo comes from known consignors or regulated agents, TSA's all-cargo security program does not require any additional screening unless the cargo piece exceeds a certain weight. On the other hand, all-cargo air carriers must screen all cargo that they accept from unknown consignors or nonregulated agents.

Air carriers are responsible for implementing TSA security requirements, predominantly through TSA-approved security programs that describe the security policies, procedures, and systems the air carriers are to implement and maintain in order to comply with TSA security requirements. These requirements include measures related to the acceptance, handling, and screening of cargo; training of employees in security and cargo screening procedures; testing employee proficiency in cargo screening; and access to cargo areas and aircraft. If threat information or events indicate that additional security measures are needed to better secure the aviation sector, TSA may issue revised or new security requirements in the form of security directives or emergency amendments when more immediate action on behalf of air carriers is necessary. Air carriers must implement the requirements set forth in applicable security directives or emergency amendments (unless otherwise approved by TSA to implement alternative security measures) in addition to requirements already imposed and enforced by TSA in order to remain compliant with their respective security programs.

Under TSA regulations, air carriers are responsible for ensuring the security of the air cargo they transport, and TSA requirements specify methods and technologies that may be used to secure U.S.-bound air cargo through screening procedures. Specific screening methods outlined in the 9/11 Commission Act, for example, include X-ray systems, explosives detection systems (EDS), explosives trace detection (ETD), explosives detection canine teams certified by TSA, and physical search together with manifest verification. The 9/11 Commission Act, however, requires that screening involve a physical examination or nonintrusive method of assessing whether cargo poses a threat to transportation security, and not solely a review of information about cargo contents or verification of the identity of the cargo's shipper when not performed in conjunction with the screening methods outlined above.

Air Carrier Inspections and Foreign Airport Assessments

To assess whether air carriers properly implement security regulations, TSA conducts regulatory compliance inspections of U.S.- and foreign-flagged air carriers at all foreign airports with U.S.-bound flights. During these inspections, a TSA inspection team is to examine air carriers' implementation of applicable security requirements, including their TSA-approved security programs, any amendments or alternative procedures to these security programs, and applicable security directives or emergency amendments. In general, following a risk-informed approach, TSA attempts to inspect all air carriers with TSA-approved security programs at each foreign airport where they operate flights to the United States either annually or semiannually, depending on the risk level of the airport. Compliance inspections can include reviews of documentation, such as screening logs; interviews of air carrier personnel; and direct observations of air cargo operations.
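The all-cargo acceptance rules described at the start of this section amount to a simple decision rule. The short Python sketch below is one way to express it; the weight threshold value and the function name are hypothetical placeholders for illustration, since the actual limit is not stated in this report.

```python
# Minimal illustration of the all-cargo acceptance rule described above.
# The weight threshold is a placeholder; the report does not state the
# actual limit.
WEIGHT_THRESHOLD_LBS = 100.0  # hypothetical threshold for illustration only

def requires_screening(source_vetted: bool, weight_lbs: float) -> bool:
    """Return True if an all-cargo carrier must screen this piece of cargo.

    source_vetted: True if the cargo came from a known consignor or a
    regulated agent; False for unknown consignors or nonregulated agents.
    """
    if not source_vetted:
        return True  # cargo from unvetted sources must always be screened
    # Vetted-source cargo needs additional screening only above the threshold.
    return weight_lbs > WEIGHT_THRESHOLD_LBS

# Example: a 250-lb piece from a regulated agent still requires screening.
assert requires_screening(source_vetted=True, weight_lbs=250.0)
assert not requires_screening(source_vetted=True, weight_lbs=40.0)
assert requires_screening(source_vetted=False, weight_lbs=40.0)
```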
Consistent with ATSA, and in accordance with existing statutory requirements, TSA also assesses the effectiveness of security measures at foreign airports using select ICAO security standards and recommended practices. These standards and recommended practices include ensuring that passengers and cargo are properly screened and that unauthorized individuals do not have access to restricted areas of the airport. TSA uses a risk-informed approach to schedule foreign airport assessments, generally every 1 to 3 years, with high risk airports assessed more frequently than medium and low risk airports. Although TSA is authorized under U.S. law to conduct foreign airport assessments at intervals it considers necessary, it may not perform an assessment of security measures at a foreign airport without permission from the host government. TSA also does not have authority to impose or otherwise enforce security requirements at foreign airports. Instead, TSA must work with host government civil aviation officials to schedule airport visits to conduct airport assessments (as well as air carrier inspections) and improve upon existing conditions when deficiencies are identified. Table 1 highlights the roles and responsibilities of certain TSA positions within Global Strategies that are responsible for implementing the air carrier inspection and foreign airport assessment programs.

NCSP Recognition

In addition to conducting air carrier inspections and foreign airport assessments, TSA has developed the NCSP Recognition Program, under which TSA compares and assesses foreign air cargo security programs and standards to determine if those programs provide a level of security that is commensurate with TSA's air cargo security standards. The NCSP recognition process involves comparing foreign countries' air cargo security program requirements to TSA air cargo security requirements and conducting visits to the foreign countries to observe the security programs in operation and determine if they can be validated as commensurate with TSA's. The recognition decision is based on whether the other country's NCSP is commensurate in six pillars of cargo supply chain security that TSA has identified, which are:

Facility Security. Procedures and mechanisms to prevent unauthorized entry to facilities where cargo is screened, prepared, and stored.

Chain of Custody/Transit Procedures. Methods or procedures to prevent and deter unauthorized access to cargo while stored or in transit between facilities prior to loading onboard aircraft.

Screening. Screening of cargo through the application of technical or other means that are intended to identify weapons or explosives.

Personnel Security. Processes to vet individuals with unescorted access to air cargo at any point in the air cargo supply chain.

Training. Training of personnel who screen, handle screened cargo, or perform other duties related to air cargo screening, preparation, or storage.

Compliance and Oversight Activities. Clearly established requirements that regulated entities must satisfy in order to participate in the security program, and routine audits of such entities for compliance by appropriate authorities.

TSA first approved the NCSP recognition process for passenger aircraft operations in fiscal year 2011 and made subsequent changes to the process in fiscal year 2013.
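As a minimal illustration of the commensurability decision just described, the Python sketch below checks per-pillar findings against the six pillars. The PillarFinding structure and decide() helper are hypothetical constructs for illustration only, not TSA's actual evaluation tooling.

```python
# Illustrative sketch only: a favorable recognition decision requires that a
# country's program be commensurate with TSA's across all six pillars.
from dataclasses import dataclass

PILLARS = [
    "Facility Security",
    "Chain of Custody/Transit Procedures",
    "Screening",
    "Personnel Security",
    "Training",
    "Compliance and Oversight Activities",
]

@dataclass
class PillarFinding:
    pillar: str
    commensurate: bool  # provides security commensurate with TSA requirements
    notes: str = ""     # e.g., a slight variation that may warrant a caveat

def decide(findings: list[PillarFinding]) -> str:
    """Summarize whether findings support recognizing a country's NCSP."""
    assessed = {f.pillar for f in findings}
    missing = [p for p in PILLARS if p not in assessed]
    if missing:
        return "Incomplete review; no finding for: " + ", ".join(missing)
    shortfalls = [f.pillar for f in findings if not f.commensurate]
    if shortfalls:
        return "Not commensurate; recommend improvements in: " + ", ".join(shortfalls)
    return "Commensurate across all six pillars; recognition supported"
```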
According to TSA, the NCSP Recognition Program increases its visibility into recognized governments' air cargo security requirements and air cargo supply chains, facilitates the identification of air cargo industry vulnerabilities, and is a key component of TSA's efforts to achieve 100 percent screening of U.S.-bound air cargo and enhance global supply chain security. Within Global Strategies, the Mitigation Plans and Programs Directorate is responsible for the NCSP Recognition Program.

GAO's 2012 Air Cargo Security Review

Air Cargo Advance Screening (ACAS)

The Department of Homeland Security's (DHS) U.S. Customs and Border Protection (CBP) and the Transportation Security Administration (TSA) initiated the ACAS pilot in December 2010 to more readily identify high risk cargo for additional screening prior to all-cargo and passenger aircraft departing from foreign airports to the United States. Unlike TSA, which focuses on aviation security, to include the security of air cargo prior to loading on aircraft at last point of departure airports, CBP focuses on identifying persons and cargo that may violate U.S. law and are, therefore, prohibited from entry into the United States. The aim of the pilot was to determine whether it was feasible for air carriers to submit air cargo manifest data to CBP prior to departure from all foreign last point of departure airports. This would allow CBP to analyze and target cargo and, if needed, for DHS to issue instructions to air carriers to provide additional cargo information or take additional security measures before such cargo is loaded onto U.S.-bound aircraft. DHS determined that the pilot was successful.

In 2012, we reported on the actions TSA took to enhance the security of U.S.-bound air cargo after the October 2010 discovery of explosive devices in packages on all-cargo aircraft bound for the United States from Yemen. We recommended, among other things, that DHS assess the costs and benefits of requiring all-cargo carriers to report U.S.-bound air cargo screening data. DHS agreed with our recommendation, and TSA reported that, although all-cargo air carriers submit data to TSA as part of the Air Cargo Advance Screening (ACAS) pilot, the all-cargo air carriers do not need to report on the number of shipments screened for explosives. Nevertheless, TSA reported that it will be able to utilize ACAS data to determine the percentage of shipments transported to the United States on all-cargo aircraft that carriers must screen for explosives.

TSA Conducts Overseas Inspections and Assessments to Help Ensure Screening of U.S.-Bound Air Cargo and Compliance with Security Requirements

To help ensure compliance with cargo security requirements and international standards, TSA inspects air carriers and assesses certain known consignors and regulated agents. TSA also inspects cargo security procedures during foreign airport assessments. Further, DHS has implemented requirements to obtain advance information on air cargo shipments through ACAS that it uses to perform targeted risk assessments.

TSA Inspects Air Carriers and Assesses Other Supply Chain Entities to Help Ensure Compliance with Cargo Security Requirements

TSA inspects air carriers and assesses certain known consignors and regulated agents to help ensure compliance with cargo security requirements. However, certain factors can limit TSA's ability to conduct inspections or observe various security measures, including cargo screening.
TSA Inspects Air Carriers

TSA uses a multistep process to plan, conduct, and record air carrier cargo inspections. To plan inspections, TSA develops an annual Master Work Plan that regional operations centers use to schedule air carrier inspections each fiscal year. Based on our review of TSA work plans for fiscal years 2012 through 2018 and discussions with TSA officials at all six regional operations centers, TSA separately plans for passenger inspections and cargo inspections of both all-cargo air carriers as well as passenger air carriers that transport cargo bound for the United States from foreign airports.

To conduct air cargo inspections, TSA inspectors are to use standardized, cargo-specific job aids that assess air carriers against security program requirements in all six pillars of supply chain security. According to TSA officials, they update the cargo inspection job aids, as needed, to ensure they reflect changes to TSA requirements and the current threat environment. For example, the cargo inspection job aids prompt TSA inspectors to inquire about the transportation of cargo from certain high risk countries. TSA inspectors we spoke with at all six regional operations centers stated that they use the cargo inspection job aids, and inspectors we spoke with at five regional operations centers stated that they are helpful. We observed 17 air carrier cargo inspections at airports in two different countries and found that TSA inspectors consistently used the cargo inspection job aids to assess the air carriers against TSA requirements. These inspectors observed air carriers' implementation of security measures (such as cargo screening), interviewed security officials, and reviewed air carrier records (including cargo screening and training logs). Officials at all six regional operations centers and the air carriers we met with confirmed these methods are routine practices. Further, officials representing 10 of the 11 air carriers we met with confirmed that TSA regularly inspects their cargo operations at foreign airports to ensure compliance with screening and other security requirements.

After completing an air carrier inspection, TSA inspectors are to enter air carrier cargo inspection results into PARIS. TSA supervisors and managers are to review the inspection reports for quality and track their completion. TSA officials we interviewed at TSA headquarters and all six regional operations centers confirmed the quality review process is in place and that they use it. In addition, TSA headquarters cargo experts are to review a sample of air carrier cargo inspections.

Based on our analysis of PARIS data, TSA conducted close to 5,000 air carrier cargo inspections (including both passenger air carriers and all-cargo air carriers) from fiscal year 2012 through fiscal year 2017 and found air carriers in full compliance with applicable security requirements in 84 percent of these inspections. TSA reported at least one instance of noncompliance, or violation, for the remaining 16 percent of cargo inspections. Based on the TSA data, the percentage of inspections with violations has generally trended downward during this time period.
TSA officials attributed this downward trend to a number of factors including: (1) TSA's emphasis on assisting air carriers (through its international industry representative) in implementing new air cargo security requirements after the 2010 printer ink cartridge plot; (2) increases in the number of TSA inspectors to ensure compliance; (3) TSA's outreach to foreign governments for improved cargo security under the NCSP Recognition Program; and (4) TSA efforts to engage with air carriers, including regional industry summits that included a cargo security focus.

According to TSA officials, if a TSA inspector finds that an air carrier is not in compliance with any applicable security requirements, additional steps are to be taken to correct and record those specific violations, which can include providing on-the-spot counseling for minor violations or opening an investigation if the violation is potentially more serious. Upon conclusion of the investigation, TSA is to make a determination whether to issue a warning notice, letter of correction, or notice of proposed civil penalty. For example, based on TSA data, we determined that TSA inspectors provided counseling (specific guidance) in certain instances when they found that an air carrier had failed to obtain multiple views of cargo screened using an X-ray machine. According to the TSA data, the air carrier took immediate corrective actions and implemented the correct procedures on-the-spot. From the data provided by TSA, we also identified potentially more serious violations. Examples of such violations included instances in which TSA inspectors initiated an investigation when they found that an air carrier was not screening 100 percent of the cargo as required under its approved security program.

According to TSA officials, TSA relies on a system of progressive enforcement and carefully considers whether a civil penalty is warranted based, in part, on the history of an air carrier's inspections. TSA officials added that they may consider options other than civil penalties, since their objective is to encourage compliance through capacity-building efforts with air carriers, not to generate revenue. For example, TSA will sometimes settle a civil penalty by allowing the air carrier responsible for the violation to invest the agreed upon penalty into improved security measures or screening processes.

According to TSA data, TSA inspectors identified 1,128 air carrier cargo security violations during fiscal years 2012 through 2017 for the 16 percent (781) of air carrier inspections where they found at least one violation. For these violations, TSA took the following actions:

- TSA inspectors resolved 580 of the violations (approximately half) through counseling and referred the remaining 548 violations for investigation since they were each potentially serious enough to warrant an enforcement action.

- TSA conducted investigations covering the 548 potentially more serious violations, which resulted in about 220 administrative actions, nearly 50 civil penalties, and over 30 instances where no action was taken. According to TSA, TSA inspectors recommended total civil penalties of approximately $23.5 million, $22.2 million of which consisted of penalties proposed for one air carrier.

TSA Assesses Known Consignors and Regulated Agents in Recognized Countries

During air carrier inspection visits, the TSA inspection team may also conduct assessments of known consignors and regulated agents in countries with recognized NCSPs.
According to TSA data, TSA conducted assessments of 38 known consignors and regulated agents in fiscal year 2017. While conducting a site visit to a foreign airport in an NCSP country in March 2018, we observed TSA inspectors conduct assessments of two regulated agents, and the inspectors covered all of the required questions. The assessments were primarily interviews along with some observations that included warehouse security and limited cargo screening. Record reviews were not part of the assessment because that is the purview of the foreign government's civil aviation authority, according to the TSA inspectors. Foreign government civil aviation authority officials attended the assessments of the two regulated agents to observe and take notes of the visit and discussions.

According to the TSA inspectors who conducted the assessments in the NCSP country we visited, meeting with regulated agents is invaluable because regulated agents, not air carriers or their authorized representatives, conduct almost all air cargo screening in that country. The inspectors added that having the opportunity to meet with regulated agents during foreign site visits provides them with insights regarding the extent to which screening of U.S.-bound cargo is being conducted at foreign last point of departure airports. In countries without a recognized NCSP, air carriers are required under their TSA-approved security programs to screen all cargo at the airport.

Certain Factors Can Limit TSA's Ability to Conduct Inspections or Observe Cargo Screening

TSA inspectors are not always able to observe certain security measures during air carrier cargo inspections or airport assessments because of foreign government sovereignty and air cargo logistics. For example, regional operations center officials told us that they are not always able to observe cargo screening because of restrictions placed on them by foreign governments, such as the number of days they are given to complete an inspection or assessment, the hours they are allowed to work, or the size of the TSA inspection team. TSA officials also stated that the transportation of air cargo occurs at all hours of the day and night, and TSA inspectors must sometimes choose which security measures to observe. For example, the TSA officials stated that screening may occur many hours prior to the loading of that cargo on an aircraft. At both foreign airports we visited, we observed TSA inspectors working late night or early morning hours to observe air carriers' cargo operations.

Out of the 17 air carrier cargo inspections we observed at the two foreign airports we visited, TSA inspectors were not able to observe cargo acceptance procedures for 11 air carriers and cargo screening for 9 air carriers because these carriers did not receive or screen cargo during the time of the inspections or the inspectors were busy conducting other inspections. Because regulated agents screen the vast majority of the cargo before transporting it to the airport in the NCSP country we visited, TSA did not observe cargo screening in eight of the nine air carrier cargo inspections they conducted at that airport. For inspections where TSA inspectors cannot observe security measures, we observed (and TSA inspectors confirmed) that they rely on interviews with officials responsible for cargo security and screening and on document reviews (such as reviewing cargo screening logs) to determine whether air carriers are complying with TSA air cargo security requirements.
At the request of TSA, air carriers must provide evidence of compliance with applicable security requirements and their security programs, including copies of records. TSA inspectors also do not inspect air carriers at all foreign airports from which air carriers transport U.S.-bound cargo. As we reported in May 2018, challenges prevent TSA from completing 100 percent of required air carrier inspections in Cuba at the frequency established in its standard operating procedures, including external factors, such as foreign government requests to reschedule TSA inspections, and limitations in the data TSA uses to schedule inspections. Further, TSA officials stated that most all-cargo carriers do not have scheduled flights. Instead, they wait until they have sufficient cargo to ship and then complete their routes, which can make it difficult for TSA to schedule inspections—planned 3 months in advance—during times that the carrier will be flying cargo to the United States. According to the vice president of security at one all-cargo carrier, TSA does not always inspect all last point of departure routes used by the airline.

TSA is taking steps to better understand air carriers' schedules. For example, in response to our 2018 review addressing TSA's efforts to ensure the security of air carrier operations between the United States and Cuba, TSA reported that it began developing a tool in August 2017 that is designed to analyze aggregate flight data and validate or identify last point of departure service to the United States from international locations.

TSA Inspectors Assess Foreign Airports from which U.S.-Bound Cargo is Shipped to Help Ensure Proper Cargo Security Procedures Are in Place

In addition to conducting air carrier cargo inspections, TSA inspection teams conduct assessments of foreign airports that provide passenger and/or cargo service to the United States to determine if these airports are maintaining and carrying out effective security measures. TSA inspectors generally use the same process to plan, conduct, and record airport assessments as air carrier inspections, according to TSA headquarters and regional operations centers officials. Specifically, TSA inspection teams assess the foreign airports using 44 ICAO standards and recommended practices, including nine standards or practices that are specific to the transport of cargo and mail. These standards include measures for the acceptance, screening, and protection of air cargo. At the end of each foreign airport assessment, TSA inspectors are to prepare a report detailing findings on the airport's overall security posture and security measures, which may also contain recommendations for corrective actions.

We observed TSA inspectors conducting the cargo portion of an airport assessment at one airport we visited and confirmed their use of this process. Inspectors used the results of the air carrier cargo inspections conducted earlier in the site visit to inform the cargo portion of the airport assessment and complete the associated job aid. The TSA inspectors obtained additional information specific to the assessment during an interview with airport officials and an international mail facility in the country we visited. The inspectors stated that they corroborated the information obtained during interviews with documentation provided by airport officials and the foreign government in advance of the visit.
TSA conducted about 570 assessments of foreign airports with U.S.-bound cargo shipments from fiscal year 2012 through fiscal year 2017, and TSA inspectors determined that the airports were fully compliant with the cargo-related ICAO standards and recommended practices in about 430 of these assessments (75 percent), according to our analysis of TSA data. However, TSA inspectors found at least one instance of cargo noncompliance in about 140 airport assessments (25 percent). Based on TSA data, the percentage of airport assessments in which TSA inspectors identified cargo noncompliance issues generally trended upward during fiscal years 2012 through 2017. TSA officials attributed this upward trend to the introduction of a new ICAO standard in 2014 for ensuring that all cargo shipments designated as higher-risk undergo enhanced screening.

TSA assigns a vulnerability score to each ICAO standard and recommended practice assessed using a rating system, ranging from a category "1," which represents full compliance with ICAO standards and recommended practices, to a "5," which involves the most serious or egregious issues. For example, in a fiscal year 2017 foreign airport assessment, TSA inspectors recorded an instance of noncompliance with ICAO standard 4.6.3 (which requires protection of cargo from the point of screening until departure of the aircraft) as a "3" when they identified holes in a facility perimeter barrier allowing direct access to secured cargo. Further, during a 2014 airport assessment, TSA inspectors assessed an instance of noncompliance with the same standard as a "5" when they observed two unescorted individuals in a security restricted area without airport identification. Based on the results of TSA's foreign airport assessments conducted during fiscal years 2012 through 2017, TSA inspectors assessed most noncompliance issues identified as a "2" or "3."

As of December 2017, TSA officials reported that certain foreign airports took corrective actions to address noncompliance issues. As a result, TSA closed out approximately 40 percent of the fiscal year 2012 through 2017 deficiencies identified in its assessments. According to our analysis of TSA data, for the remaining 60 percent of noncompliance issues, the airports have not yet taken sufficient action to fully address TSA's concerns, or TSA inspectors have not yet verified whether the actions foreign airports reported that they have taken are sufficient for addressing the noncompliance issues. The majority of unaddressed noncompliance issues pertain to issues identified in fiscal year 2016 or 2017 assessments.

In our 2017 review of TSA's foreign airport assessments, we reported that TSA assists foreign airports in addressing identified noncompliance issues (security deficiencies) in various ways, but noted that TSA could enhance data management. As part of assisting foreign airports, TSA inspectors educate foreign airport officials on how to mitigate identified airport security deficiencies. Specifically, TSA provides on-the-spot counseling, training, technical assistance, security consultations, and security equipment. In addition, TSA representatives—the primary liaisons between the U.S. government and foreign governments on transportation security issues—are responsible for monitoring the progress made by foreign officials in addressing security deficiencies identified during TSA airport assessments.
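To make the rating-and-closeout bookkeeping above concrete, the following is a minimal, hypothetical sketch of how such findings could be tallied. The Finding fields and the summarize() helper are illustrative assumptions, not TSA's actual tracking database.

```python
# Illustrative sketch: tracking assessment findings on the 1-5 vulnerability
# rating scale described above, and computing the share closed out.
from dataclasses import dataclass

@dataclass
class Finding:
    icao_standard: str   # e.g., "4.6.3" (protection of cargo until departure)
    rating: int          # 1 = full compliance ... 5 = most serious issues
    cargo_related: bool
    resolved: bool       # corrective action verified as sufficient

def summarize(findings: list[Finding]) -> dict:
    """Tally noncompliance findings (rating above 1) by resolution status."""
    noncompliant = [f for f in findings if f.rating > 1]
    closed = [f for f in noncompliant if f.resolved]
    pct_closed = 100 * len(closed) / len(noncompliant) if noncompliant else 0.0
    return {
        "noncompliance_findings": len(noncompliant),
        "percent_closed": pct_closed,
        "serious_open": [
            f.icao_standard for f in noncompliant
            if f.rating >= 4 and not f.resolved
        ],
    }
```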
Our 2017 review found, however, that TSA representatives did not always update key information in TSA's database for tracking the resolution status of security deficiencies, including the security deficiencies' root causes and corrective actions. To help strengthen TSA's analysis and decision making, we recommended that TSA fully capture and specifically categorize data on the root causes of security deficiencies and the status of corrective actions to be taken. TSA concurred with our recommendations and is taking steps to address them, as discussed below. In addition to working with foreign airports to address deficiencies, TSA sometimes requires air carriers to adopt security procedures through security directives or emergency amendments to compensate for serious vulnerabilities that TSA identified during the foreign airport assessment. For example, at one airport in Africa, passenger air carriers must hold all cargo for 24 hours prior to transport.

In response to our 2017 recommendations, TSA officials told us that they are in the process of developing a vulnerability resolution tool to capture the vulnerabilities associated with a specific location, such as a foreign country or airport. According to TSA officials, the tool will be used to identify and categorize root causes of vulnerabilities identified during air carrier inspections and foreign airport assessments, as well as incorporate other country-specific information. TSA officials added that, once completed, TSA hopes to be able to use the tool to develop vulnerability mitigation options to, among other things, address security vulnerabilities identified during air carrier inspections and foreign airport assessments. For example, if TSA inspectors identify a cargo screening vulnerability during an air carrier inspection or airport assessment, they may determine that the root cause is a lack of national-level training courses. In an example such as this, although TSA does not have the authority to require a foreign government to take corrective actions, TSA officials may develop a training curriculum that foreign governments could deploy, if they choose, to address the identified vulnerability. According to TSA officials, TSA inspectors and TSA representatives would subsequently determine whether the training resolved the vulnerability and, if necessary, consider what additional measures may be appropriate. TSA expects to have the tool in place and staff trained to use it by the beginning of fiscal year 2019.

DHS Has Taken Steps to Obtain Advance Air Cargo Information to Perform Targeted Risk Assessments of U.S.-Bound Flights

DHS has taken steps to require advance information on air cargo shipments in order to conduct targeted risk assessments and help ensure the cargo is secure before air carriers transport it to the United States. As previously discussed, in December 2010, U.S. Customs and Border Protection (CBP) began collecting cargo data from certain air carriers before they loaded U.S.-bound cargo as part of the voluntary ACAS pilot program. In response to a terrorist plot in July 2017, TSA issued security directives and emergency amendments in September 2017 requiring air carriers transporting cargo to the United States from last point of departure airports in Turkey to submit advance cargo data to CBP. Further, in January 2018, TSA imposed similar requirements for foreign air carriers operating out of certain high risk countries in the Middle East.
DHS subsequently published the ACAS interim final rule, which requires all air carriers to submit advance air cargo information as of June 12, 2018. TSA and CBP identify high risk cargo based on, among other things, the advance information air carriers submit, and they may require carriers to take additional actions before loading the cargo onto U.S.-bound flights. Before implementation of the ACAS interim final rule, air carriers not participating in the ACAS pilot were required to submit manifest data to CBP no later than 4 hours before the flight's arrival in the United States, or no later than the time of departure from locations in North America, the Caribbean, Central America, and parts of South America north of the Equator. However, under ACAS, a subset of the manifest data must be provided prior to loading the cargo onto U.S.-bound aircraft. After reviewing the data, DHS can mandate that an air carrier (1) provide additional information on a particular cargo shipment, (2) perform enhanced screening before loading the cargo, or (3) not transport the cargo to the United States.

TSA officials are beginning to track whether air carriers have conducted the required ACAS screening as a part of their international compliance activities. TSA officials stated that inspectors review air carrier screening and manifest logs during air carrier cargo inspections at foreign airports to verify compliance with ACAS. In addition, TSA plans to fully develop the process of assessing air carrier compliance with ACAS requirements, according to TSA officials.

TSA Has Recognized the Air Cargo Security Programs of the European Union and 12 Other Countries and Monitors Their Implementation

TSA Has Increased the Number of Countries Recognized, as well as the Scope of Its Recognition Program

As of June 2018, TSA has recognized the passenger air cargo security programs of the European Union, which covers the 28 European Union member states, and 12 other countries. NCSP recognition is a voluntary agreement between TSA and a foreign government. TSA's NCSP recognition process involves three phases: (1) a technical review and comparison of a foreign country's air cargo security program requirements with TSA requirements to determine if the programs align on basic principles; (2) validation visits to the foreign country to determine if the air cargo security program aligns with TSA practices; and (3) a decision on whether to recognize the foreign government's air cargo security program as commensurate with TSA's air cargo security requirements. The recognition decision is based on whether the foreign government's NCSP is commensurate with TSA requirements across TSA's six pillars of cargo supply chain security, and the potential outcomes are as follows:

Recognition with no caveats. TSA may determine that the foreign government's NCSP is fully commensurate with all of TSA's air cargo security requirements across all six supply chain security pillars, or TSA may find there are slight variations in air cargo security requirements that nonetheless provide a commensurate level of security and grant the country's NCSP recognition with no caveats. As of June 2018, TSA had recognized the NCSPs of Canada, Israel, and Norway without any caveats.

Recognition with caveats. TSA may decide to recognize a government's NCSP, but with certain caveats based on specific variations within a country's national requirements.
According to TSA officials, in this instance, TSA requires air carriers in that country to continue to implement specific TSA requirements on U.S.-bound air cargo to account for the variation. As of June 2018, TSA had issued at least one caveat with nine NCSP recognized countries and the European Union. For example, in these nine recognized countries and the European Union, TSA requires air carriers to rescreen cargo originating from specific third party countries according to TSA standards before transporting it to the United States.

No recognition, but provides recommendations. TSA may determine that a foreign government's NCSP is not commensurate with TSA requirements in many areas and make recommendations to that government on how to improve its air cargo security program to better align with TSA and global air cargo security requirements. For example, after reviewing one country's air cargo security program requirements, TSA determined that its NCSP was not commensurate and provided written recommendations on ways to improve its NCSP, as discussed below. According to TSA officials, under such circumstances they will continue to engage with the foreign government. If the foreign government implements the recommendations, TSA may reconsider the foreign government for NCSP recognition. Notably, TSA recognized another country's air cargo security program only after its civil aviation authority implemented TSA's recommendations to improve certain procedures, including screening of staff with access to air cargo. Where NCSP recognition is not applicable, air carriers transporting air cargo into the United States from last point of departure airports must continue to apply their TSA-approved security program requirements pertaining to cargo.

TSA originally developed the NCSP Recognition Program for passenger air cargo security programs in fiscal year 2011, and TSA expanded the scope of the program in fiscal year 2013 to include all-cargo operations. As a result of this expansion, foreign governments may choose to engage with TSA on NCSP recognition for passenger operations, all-cargo operations, or both. According to TSA's NCSP memo authorizing the change, by including all-cargo operations in its evaluation of other countries' NCSPs, TSA can gain a greater understanding of the international air cargo supply chain. As of June 2018, TSA had recognized the all-cargo operations of the European Union and six other countries. Figure 2 provides information about the foreign government NCSPs that TSA had recognized as of June 2018.

According to TSA data, air carrier participation in the NCSP Recognition Program has increased in recent years. Specifically, as of June 2018, 130 air carriers participate in the NCSP Recognition Program—an increase from about 50 in fiscal year 2015, when TSA last recognized a foreign government's NCSP. After TSA has recognized a foreign government's NCSP, air carriers can request amendments to their TSA-approved security programs to allow them to follow a recognized country's air cargo security program instead of having to follow both the recognized country's security program and separate requirements in their TSA-approved security programs. Representatives from all 11 air carriers we met with stated that they have submitted requests to TSA to amend their security programs in order to implement the foreign government's NCSP instead of TSA requirements when operating in those countries that have NCSP recognition.
According to representatives from all 11 air carriers and TSA officials we met with, air carriers benefit from NCSP recognition. Specifically, they and the stakeholders in their supply chains can learn and use the host country's set of air cargo security requirements (without needing to know and implement TSA requirements for cargo transported on U.S.-bound flights from that country). TSA officials stated that, as of June 2018, apart from the European Union and the 12 other countries that have NCSP programs, no additional foreign governments are close to achieving NCSP recognition. However, TSA NCSP Recognition Program officials continue to coordinate with foreign governments on air cargo security issues when requested and as TSA resources allow. According to information provided by TSA, as of June 2018, TSA had coordinated with 21 additional foreign governments interested in NCSP recognition that are not yet recognized. In non-recognized countries, air carriers transporting U.S.-bound air cargo must follow the measures required by the foreign governments in addition to their TSA-approved security programs.

TSA Uses a Variety of Mechanisms to Monitor and Revalidate Recognized Governments' NCSP Implementation

Once TSA determines a foreign government's NCSP is commensurate with TSA requirements, it monitors NCSP implementation through air carrier cargo inspections, foreign airport assessments, ongoing engagements with foreign government officials, and revalidation of NCSP recognition (see fig. 3). Each of these monitoring mechanisms is discussed in greater detail below.

According to TSA officials, results from air carrier inspections and foreign airport assessments provide TSA valuable information in determining whether to revalidate a foreign government's NCSP recognition because TSA inspectors are able to verify a recognized government's NCSP implementation in person. We analyzed TSA data from fiscal years 2015 through 2017 and confirmed that TSA conducted air carrier cargo inspections and assessments of foreign airports with U.S.-bound cargo shipments that covered all recognized NCSPs. Representatives from 10 of the 11 air carriers we met with and the two foreign governments we met with confirmed that TSA conducts air carrier inspections in recognized countries.

According to our analysis of TSA data for fiscal years 2015 through 2017, TSA inspectors identified more air carrier violations and lower rates of compliance with cargo-related standards and recommended practices at foreign airports located in non-NCSP countries than in NCSP countries. In addition to identifying lower rates of compliance in non-NCSP countries, TSA officials also determined that the noncompliance issues in non-NCSP countries were more serious than those in NCSP countries, according to our data analysis. According to TSA officials, TSA inspectors identified fewer violations during air carrier cargo inspections in NCSP countries because air carriers only need to implement one air cargo security program (the host government's) and, therefore, were less likely to make errors. Additionally, TSA inspectors identified fewer noncompliance issues in NCSP countries because TSA officials meet with foreign officials in recognized countries on a regular basis, and this helps to improve compliance.
Representatives from 10 air carriers we met with confirmed that they are less likely to violate air cargo security requirements in NCSP countries because (1) the foreign government conducts regular compliance inspections (a component of the oversight and compliance security pillar TSA requires foreign governments to implement to obtain NCSP recognition), or (2) screeners are less likely to make errors screening cargo because they only need to implement the foreign government's NCSP, which reduces confusion. For example, one air carrier representative told us that cargo screeners do not need to determine which security measures (TSA's or the host government's) to implement for a particular flight.

Annual Meetings and TSA Representative Engagement with Foreign Government Officials

TSA and foreign government officials also discuss changes in a foreign government's NCSP on a regular basis, according to our review of TSA's documents and interviews with TSA and foreign government officials. For example, TSA's memos authorizing the NCSP Recognition Program and 11 of 12 letters of recognition provided to foreign governments express an intent for TSA to hold in-person, annual meetings with officials in countries with a recognized NCSP program to discuss issues related to NCSP recognition. TSA officials generally held or planned to hold such meetings in fiscal years 2017 and 2018, according to our review of TSA's NCSP Recognition Program fiscal year 2018 work plan. In addition, TSA officials stationed at U.S. embassies are to meet with their foreign government counterparts on a regular basis, according to TSA officials and the two recognized governments with whom we met. For example, the TSA representative who coordinates with the European Commission in Brussels, Belgium, told us that he meets with European Commission officials multiple times each month. He stated that these conversations can cover regulatory and legislative changes pertaining to air cargo security, and that he informs TSA headquarters and the Frankfurt Regional Operations Center of changes that could affect NCSP recognition in Europe. TSA headquarters and European Commission officials confirmed that these meetings occur.

Revalidation of NCSP Recognition

TSA revalidates recognized NCSPs using the results of its air carrier inspections, airport assessments, ongoing engagement with foreign government officials, and additional site visits to the foreign country, if needed. According to our analysis of TSA NCSP recognition letters and NCSP information compiled by TSA officials, TSA has revalidated all recognized NCSP countries at least once since fiscal year 2012. Further, this analysis shows that TSA has generally revalidated the NCSPs of recognized countries every 3 years, as required by the TSA memos that established and revised the NCSP recognition process. However, in 2016, TSA authorized a change to the revalidation process that allows for continuous NCSP recognition because, according to TSA officials and NCSP memos, the monitoring mechanisms TSA has in place (e.g., air carrier inspections, foreign airport assessments, and ongoing dialogue with foreign government officials) provide sufficient information to validate that foreign governments' recognized NCSPs continue to provide a level of security commensurate with TSA's.
TSA’s 2016 NCSP memo states that TSA can revoke continuous recognition at any time, and TSA may not grant continuous recognition to a country if TSA determines that additional oversight is warranted. For example, TSA officials stated that they may only recognize a country’s NCSP on a time-limited basis if they experience communication or access issues or have concerns about implementation of the NCSP. As of June 2018, TSA had granted continuous recognition to the European Union and 10 other countries and had not revoked any government’s continuous recognition, according to summary NCSP information provided by TSA officials. TSA’s Existing Performance Measures Do Not Allow It to Specifically Determine the Effectiveness of Its Efforts to Secure U.S.-Bound Air Cargo TSA has taken steps to broadly measure the effectiveness of its air carrier inspections and foreign airport assessments, but these efforts do not allow TSA to specifically determine the effectiveness of the cargo portions of such inspections or assessments. In addition, TSA has not developed measures for determining the effectiveness of its NCSP Recognition Program. TSA Has Not Evaluated the Effectiveness of its Air Carrier Cargo Inspections or the Cargo Portions of Foreign Airport Assessments TSA tracks data on the results of air carrier inspections and foreign airport assessments, and it broadly measures the effectiveness of its foreign airport assessment program and is developing a similar measure for its air carrier inspection program. However, TSA’s performance measures do not allow it to specifically determine the effectiveness of its air carrier cargo inspections or the cargo portions of foreign airport assessments. For example, in fiscal year 2017, TSA developed a new performance measure to track the extent to which foreign airports take actions to address noncompliance issues identified by TSA inspectors during foreign airport assessments. The target for this performance measure is for 70 percent of foreign airports to implement corrective actions or other mitigation strategies. However, that performance measure does not allow TSA to determine the effectiveness of the cargo portions of airport assessments because it does not separately account for cargo and noncargo noncompliance issues. Specifically, the current measure does not capture noncompliance issues by category, to allow TSA to determine which noncompliance issues specifically pertain to cargo. Such a broad measure of the effectiveness of foreign airport assessments could obscure progress made (or lack thereof) in resolving cargo-specific vulnerabilities. According to our analysis of TSA fiscal year 2017 foreign airport assessment data, TSA could meet its 70 percent target if foreign airports take actions to address noncompliance issues unrelated to cargo—including passenger and carry-on baggage screening and access controls—without taking any actions to address identified noncompliance issues for cargo. TSA officials stated that they are coordinating with the Office of Management and Budget to develop a performance measure to gauge the effectiveness of air carrier inspections. However, TSA officials also stated that they have no plans to differentiate the extent to which air carriers correct violations TSA inspectors identify related to cargo from those identified related to passengers as they develop this measure. 
Notably, TSA has regularly included a goal to secure air cargo and the supply chain in annual operational implementation plans, but TSA has no associated performance measures that show the effectiveness of efforts taken to meet this goal. TSA's Office of Global Strategies Fiscal Year 2016 Strategy states that all strategic goals and objectives will have corresponding, relevant performance indicators that measure organization effectiveness in those areas. Further, DHS and TSA guidance state that it is important to measure the effectiveness of risk management priorities. For example, the DHS National Infrastructure Protection Plan and Transportation Systems Sector-Specific Plan state that setting goals and measuring the effectiveness of risk management efforts against these goals are key elements of a risk management framework. We have also previously reported on the importance of developing outcome-based performance measures—measures that address the results (effectiveness) of products and services. According to TSA officials, they have not developed outcome-based performance measures that are specific to cargo security because they believe that measuring the results of air carrier inspections and foreign airport assessments holistically is sufficient to provide them with information on air cargo vulnerabilities. However, as previously discussed, TSA inspectors are identifying some potentially serious cargo vulnerabilities during air carrier cargo inspections and the cargo portions of airport assessments, including cargo that was not properly screened. Given TSA's assessment that the security threat in air cargo is significant, developing and monitoring an outcome-based performance measure specific to the cargo portions of foreign airport assessments—along with differentiating the extent to which air carriers correct violations related to cargo from those related to passengers as it develops and monitors outcome-based performance measures for its air carrier inspection program—could help TSA better determine the effectiveness of these efforts and whether they are improving the security of U.S.-bound air cargo. Such cargo-specific outcome-based performance measures could include differentiating the percentage of cargo-related violations that TSA has verified air carriers have addressed (as opposed to passenger-related violations) and measuring the progress that foreign airport authorities, foreign governments, or TSA have made to address vulnerabilities specific to ICAO's cargo-related standards.

TSA Has Not Evaluated the Effectiveness of its NCSP Recognition Program

TSA does not measure the effectiveness of its NCSP Recognition Program. Specifically, TSA budget documents and annual performance reports do not include measures for gauging the success of its NCSP Recognition Program. TSA operational implementation plans for fiscal years 2014 through 2017 addressed program recognition—including working toward recognition efforts with countries based on a list of priorities and holding annual in-person meetings with each recognized government—but TSA has not evaluated the impact of these actions. In addition, while TSA's operational implementation plans include milestones to measure outputs of the NCSP Recognition Program, TSA has not measured outcomes of its NCSP recognition efforts. For example, TSA has not measured the extent to which non-recognized countries implement recommendations that TSA has made to them during the NCSP recognition process.
TSA officials stated that such a measure would help them determine the effect of the NCSP Recognition Program on air cargo security. According to TSA officials, in the absence of formal performance measures, the primary metric used to measure the performance of the NCSP Recognition Program is the number of countries TSA has recognized. However, this metric does not address the effectiveness of the NCSP Recognition Program because it does not measure how the program improves air cargo security. We have previously reported on the importance of measuring program performance. Our prior reports and guidance have stated that performance measures should evaluate both processes (outputs) and outcomes related to program activities. Specifically, we have noted that output measures address the type or level of program activities conducted, such as the number of countries recognized, while outcome-based measures address the results of products and services, such as how recognition programs facilitate the identification of air cargo industry vulnerabilities or contribute to improved air cargo security. Further, as discussed earlier, TSA strategy documents and leading practices encourage the development of relevant performance indicators that measure program effectiveness. TSA officials stated that TSA has not developed performance measures associated with the NCSP Recognition Program because TSA has reorganized and different directorates within TSA have had responsibility for NCSP program recognition over time. TSA officials also stated that developing NCSP Recognition Program performance measures has been secondary to other tasks, such as developing the ACAS program. Developing and monitoring output and outcome-based performance measures for its NCSP Recognition Program will help TSA better assess the effectiveness of the program and whether the resources it has invested are yielding their intended results.

Conclusions

Air carriers transport billions of pounds of cargo into the United States from foreign airports each year, and the threat posed by terrorists attempting to conceal explosive devices in air cargo shipments remains significant, according to TSA. TSA has taken steps to ensure that U.S.-bound air cargo is secure by, for example, conducting air carrier cargo inspections overseas, performing assessments of foreign airports that transport cargo to the United States using ICAO cargo-related standards and recommended practices, and evaluating and recognizing the NCSPs of foreign countries. Although TSA tracks cargo compliance data collected during its air carrier inspections and foreign airport assessments and is developing a vulnerability resolution tool, TSA has not developed outcome-based performance measures for determining the effectiveness of its air cargo security compliance efforts. Developing and monitoring an outcome-based performance measure for the cargo portions of airport assessments and differentiating the extent to which air carriers correct violations related to cargo from those related to passengers as it develops and monitors outcome-based performance measures for its air carrier inspection program could help TSA better assess the effectiveness of these efforts and whether they are improving air cargo security. For example, TSA could measure the percentage of cargo-related violations that TSA has verified air carriers have addressed.
Further, developing and monitoring output and outcome-based performance measures for its recognition programs will help TSA better determine the effectiveness of the NCSP Recognition Program and whether the resources TSA has invested are yielding their intended results. For example, TSA could measure the extent to which non-recognized countries implement recommendations that TSA has made to them during the NCSP recognition process.

Recommendations for Executive Action

We are making the following three recommendations to TSA:

The Administrator of TSA should instruct Global Strategies to develop and monitor outcome-based performance measures for determining the effectiveness of the cargo portion of its foreign airport assessments. (Recommendation 1)

The Administrator of TSA should instruct Global Strategies to differentiate the extent to which air carriers correct violations related to cargo from those related to passengers as it develops outcome-based performance measures for its air carrier inspection program, and monitor any measure it develops. (Recommendation 2)

The Administrator of TSA should instruct Global Strategies to develop and monitor output and outcome-based performance measures for determining the effectiveness of its NCSP Recognition Program. (Recommendation 3)

Agency Comments

In August 2018, we provided a draft of the sensitive version of this report to the Department of Homeland Security for its review and comment. In written comments, which are included in appendix IV, DHS stated that it concurred with the recommendations and plans to develop cargo-specific performance measures to help determine the effectiveness of its air carrier inspections, foreign airport assessments, and the NCSP Recognition Program. DHS also provided technical comments, which we have incorporated into the report, as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Nathan Anderson at (202) 512-3841 or andersonn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

This report: (1) describes steps the Transportation Security Administration (TSA) takes to help ensure that U.S.-bound air cargo is secure, (2) describes the status of TSA's efforts to recognize and monitor foreign governments' air cargo security programs, and (3) analyzes the extent to which TSA measures the effectiveness of its efforts to secure U.S.-bound air cargo. This report is a public version of a sensitive report that we issued in October 2018. TSA deemed some of the information in our October report to be Sensitive Security Information, which must be protected from public disclosure. Therefore, this report omits sensitive information about TSA's risk methodology, the standards that TSA uses to assess foreign airports, the specific results of TSA's air carrier inspections and foreign airport assessments, and information on the types of NCSP recognition TSA has granted to other countries. Although the information provided in this report is more limited, the report addresses the same objectives as the sensitive report and uses the same methodology.
To describe the steps TSA takes to help ensure that U.S.-bound air cargo is secure, we reviewed relevant laws and regulations, TSA security policies and procedures, screening program requirements, and security directives and emergency amendments relevant to air cargo. For example, we reviewed relevant air carrier security programs and associated cargo inspection job aids that TSA transportation security specialists (inspectors) are to use during each air carrier cargo inspection to ensure that requirements for air carrier security programs are fully evaluated during each inspection. We also reviewed fiscal years 2012 through 2018 air carrier inspection and airport assessment Master Work Plans—which TSA uses to track its overseas air carrier inspection and foreign airport assessment schedule—to better understand how TSA schedules inspections and assessments and the types of inspections it conducts. We chose these fiscal years because they cover the time period since our previous air cargo security review. In addition, we conducted site visits to two foreign airports that operate flights that transport air cargo directly to the United States—one in South America and one in Asia—to observe a nongeneralizable sample of TSA inspectors conducting a total of 17 air carrier cargo inspections. At one airport, we also observed the cargo portion of an airport assessment. We selected these locations based on their designation by TSA as airports of relatively high risk level, as well as high volume of U.S.-bound air cargo; TSA's air carrier inspection schedule; and geographic dispersion. We also chose these countries to allow us to observe an inspection in one country where TSA has recognized the NCSP and one country where TSA has not recognized the NCSP. In addition, we reviewed the final reports TSA inspectors completed for the air carrier cargo inspections and airport assessment we observed. Further, we obtained and analyzed the results of all air carrier cargo inspections (close to 5,000) and assessments at foreign airports that are last points of departure for cargo bound for the United States (about 570) conducted by TSA inspectors and then entered by them into TSA's databases. The Performance and Results Information System (PARIS) database contains security compliance information on TSA-regulated entities, including air carriers, and the Global Risk Analysis and Decision Support (GRADS) system vulnerability tracking sheet contains the results of foreign airport assessments. We analyzed PARIS and GRADS data from fiscal years 2012 through 2017, to cover the period since our previous air cargo security review and to include the 5 most recent years for which data were available at the time of our review. Specifically, we analyzed the frequency with which air carriers and foreign airports complied with TSA air cargo security requirements and select cargo-related International Civil Aviation Organization (ICAO) aviation security standards and recommended practices, including the seriousness of ICAO noncompliance issues TSA inspectors identified. TSA also uses GRADS to populate the Open Standards and Recommended Practices Finding Tool (OSFT), which tracks efforts taken by TSA and host governments to address noncompliance issues identified during foreign airport assessments. We analyzed fiscal years 2012 through 2017 OSFT data to determine the status of noncompliance issues TSA inspectors identified.
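A minimal sketch of the kind of full-compliance frequency calculation described above follows. The record structure and values are hypothetical stand-ins for PARIS-style inspection results, not the actual data.

```python
# Hypothetical sketch of the compliance-frequency analysis described
# above; record structure and values are stand-ins, not PARIS data.

inspections = [
    {"carrier": "Carrier A", "requirement_outcomes": ["compliant"] * 3},
    {"carrier": "Carrier B", "requirement_outcomes": ["compliant", "violation"]},
    {"carrier": "Carrier C", "requirement_outcomes": ["compliant"] * 2},
]

# An inspection counts as fully compliant only if no requirement was violated.
fully_compliant = [i for i in inspections
                   if "violation" not in i["requirement_outcomes"]]
rate = len(fully_compliant) / len(inspections)
print(f"{len(fully_compliant)} of {len(inspections)} inspections fully "
      f"compliant ({rate:.0%})")  # 2 of 3 inspections fully compliant (67%)
```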
We also reviewed 2017 PARIS data on the number of known consignor and regulated agent assessments TSA inspectors conducted. To assess the reliability of TSA's air carrier and airport assessment data captured in PARIS, the GRADS tracking sheet, and the OSFT, we reviewed program documentation on system controls, interviewed knowledgeable TSA officials, and analyzed TSA's data for any potential gaps and errors. During our assessment, we found some inconsistencies in the tool TSA uses to follow up on airport noncompliance issues. We rounded airport compliance information to the nearest 10 for reporting purposes. We also aggregated ICAO standards and recommended practices within the Measures Related to Cargo, Mail, and Other Goods category for reporting purposes because their numbering has changed over time. We concluded that TSA's data on air carrier inspections and foreign airport assessments were sufficiently reliable to provide a general indication of the level of compliance for TSA's air carrier inspections and foreign airport assessments over the period of our analysis. In addition, we conducted interviews with TSA officials, foreign government representatives, and air cargo industry stakeholders, as follows:

We interviewed senior TSA officials, inspectors, TSA representatives stationed overseas, and international industry representatives located at TSA headquarters and in the field. For example, we met with the Director of Global Compliance as well as managers and inspectors from all six TSA regional operations centers who are responsible for planning and conducting air carrier inspections and assessments of foreign airports. During our interviews with TSA staff, we discussed TSA's efforts to ensure that U.S.-bound air cargo is secure prior to being transported to the United States and that air carriers are in compliance with the applicable TSA cargo security requirements.

We also interviewed officials at the European Commission (EC) and from the civil aviation authority in the country in Asia that we visited to discuss air cargo security standards and their experiences in coordinating with TSA. We judgmentally selected these foreign government entities because they (1) aligned with TSA's inspection site visit in the country in Asia that we observed and (2) represent different models of recognition (i.e., TSA recognizes both the passenger and all-cargo portions of the European Union national cargo security program (NCSP) but only passenger operations in the NCSP for the country in Asia that we visited).

Further, we met with representatives from 2 aviation associations and 11 air carriers that include U.S. and foreign-flagged air carriers, as well as passenger and all-cargo carriers. One of the international aviation associations includes air carriers that comprise over 80 percent of the world's air traffic and the other aviation association includes the 5 air carriers that transported the largest individual amounts of U.S.-bound air cargo, by tonnage, in fiscal year 2017. We based our selection of the 11 air carriers on the relatively high volume of U.S.-bound cargo they transport; their operation of flights at the foreign airports we visited; and to obtain a range of coverage regarding their geographical regions of operation, passenger and all-cargo air carriers, and U.S. and foreign-flagged air carriers.
Results from these meetings with foreign governments and aviation industry officials are not generalizable, but provided us with information on stakeholders' experiences and perspectives regarding air cargo security issues.

To describe the status of TSA's efforts to recognize and monitor foreign governments' air cargo security programs, we reviewed TSA's policies and procedures for its NCSP Recognition Program. For example, we reviewed TSA memos from 2012, 2013, and 2016 that documented the recognition standards and any subsequent revisions to the NCSP Recognition Program, as well as TSA's process for monitoring NCSP recognition requirements. Additionally, we analyzed letters that TSA provided since 2012 to the 13 governments it determined had commensurate air cargo security programs and NCSP information TSA officials compiled specifically for our review to better understand TSA's terms of recognition with each government and the timeframes for revalidating NCSP recognition. We also reviewed letters TSA provided to governments it had determined did not have commensurate air cargo security programs, which provided us with insights into the recognition process and the criteria applied to TSA's reviews. Further, we reviewed the NCSP Recognition Program's fiscal years 2017 and 2018 work plans, as well as summaries of TSA's annual meetings with foreign governments, to better understand TSA's efforts to engage with recognized governments. We also analyzed the air carrier cargo inspection and airport assessment data discussed above to determine the number of cargo inspections and assessments TSA completed in recognized countries from fiscal years 2015 through 2017. We chose this time period because it represents the 3 most recent complete fiscal years, and TSA last recognized a country's NCSP in 2015. We also analyzed data from TSA's Security Policy and Industry Engagement Policy Inventory on the number of air carriers participating in the NCSP Recognition Program from fiscal year 2012 (when the NCSP Recognition Program began) through fiscal year 2017 (the most recent complete fiscal year available at the time of our review) to determine how the level of participation has changed over time. In addition, we analyzed data from the fiscal year 2017 Department of Transportation Bureau of Transportation Statistics T-100 data bank, which contains data on U.S.-bound departures from foreign airports, among other things, to determine the percentage of overall U.S.-bound air cargo shipped from NCSP countries. To assess the reliability of the T-100 data, we reviewed documentation on system controls, interviewed knowledgeable officials from the Bureau of Transportation Statistics, and analyzed the data for any potential gaps and errors. We determined that the T-100 data were sufficiently reliable for our intended purposes. Finally, we conducted interviews with TSA and foreign government officials from two countries, and with representatives of the 11 air carriers described previously, to better understand TSA's ongoing efforts to recognize and monitor foreign governments' air cargo security programs. We also confirmed the status of countries' NCSP recognition, as of June 2018, with TSA officials.
To analyze the extent to which TSA measures the effectiveness of its various efforts to secure U.S.-bound air cargo, we reviewed documents that contain information on TSA's air cargo security objectives, goals, and performance measures, including (1) information reported to the Office of Management and Budget in annual budget documents from fiscal years 2014 through 2019, and (2) TSA's Global Strategies directorate's Operational Implementation Plans from fiscal years 2014 through 2018—the most recent years available at the time of our review. These plans include annual objectives and milestones for U.S.-bound air cargo security programs. We also reviewed the measures in the annual budget documents and Operational Implementation Plans and compared them with requirements in TSA's Global Strategies' Fiscal Year 2016 Strategy and Fiscal Year 2018 Strategy Program and applicable laws governing performance reporting in the federal government, including the Government Performance and Results Act of 1993 (GPRA), as updated and expanded by the GPRA Modernization Act of 2010 (GPRAMA). For example, we assessed whether the performance measures provide information on the effectiveness of TSA's various air cargo security efforts. Although GPRA and GPRAMA requirements apply to those goals reported by departments (e.g., DHS), we have previously reported that they can serve as leading practices at other organizational levels, such as component agencies (e.g., TSA), for performance management. Further, we assessed TSA's performance measures against risk management principles in the DHS National Infrastructure Protection Plan and the Transportation Systems Sector-Specific Plan. In addition, we obtained additional information on how TSA measures the performance of its air cargo security efforts during our interviews with TSA headquarters officials.

The performance audit upon which this report is based was conducted from July 2017 to October 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with TSA from September 2018 to November 2018 to prepare this public version of the original sensitive report for public release. This public version was also prepared in accordance with these standards.

Appendix II: Transportation Security Administration (TSA) Processes for Conducting Air Carrier Cargo Inspections

Air carrier cargo inspections are conducted by a team of Transportation Security Administration (TSA) security specialists (inspectors) at foreign airports who review passenger and all-cargo air carriers' implementation of requirements in their TSA-approved security programs, any amendments or alternative procedures to these security programs, and applicable security directives or emergency amendments. The frequency of air carrier cargo inspections at each airport depends on a risk-informed approach and is influenced, in part, by the airport's vulnerability to security breaches, since the security posture of each airport varies, according to TSA.
In general, TSA procedures require that air carriers with TSA-approved security programs be inspected at each airport annually or semiannually depending on the vulnerability level of the airport, with some exceptions. The inspection teams—based out of TSA regional operations centers—generally include one team leader and one team member and typically take 1 or 2 days, but can involve more inspectors and take longer to complete depending on the extent of service by the air carrier. TSA inspectors may spend several days at a foreign airport inspecting air carriers if there are multiple air carriers serving the United States from that location. During air carrier cargo inspections, TSA inspectors are to review applicable security manuals, procedures, and records; interview air carrier personnel; and observe security measures, such as cargo acceptance and screening, among other activities. Air carriers are subject to inspection in six key areas of cargo supply chain security, as described in table 2. After completion of an air carrier inspection, TSA inspectors are to record the results into TSA's Performance and Results Information System (PARIS), a database containing security compliance information on TSA-regulated entities. If an inspector finds that an air carrier is in violation of any applicable security requirements, the inspector is to take additional steps to record the specific violation(s) and, in some cases, pursue them with further investigation. For example, TSA inspectors may choose to resolve violations that are minor or technical in nature, such as an employee not displaying their identification, through on-the-spot feedback and instruction, referred to as "counseling." For more serious violations, such as inadequate screener training, TSA inspectors may pursue administrative actions, including issuing a warning notice, or initiating an investigation and requiring air carriers to inform TSA of the specific steps they will take to address the issue. For more egregious violations, such as failure to screen cargo, TSA inspectors may recommend a civil penalty. In extreme cases, TSA may withdraw its approval of an air carrier's security program and suspend the air carrier's operations. According to TSA officials, they rely on a system of progressive enforcement and carefully consider whether a civil penalty is warranted based on the compliance history of an air carrier, among other factors.

Appendix III: Transportation Security Administration (TSA) Processes for Conducting Foreign Airport Assessments

Through its foreign airport assessment program, TSA determines whether foreign airports that provide passenger or all-cargo air carrier service to the United States are maintaining and carrying out effective security measures. To determine the frequency of foreign airport assessments, TSA uses a risk-informed approach to categorize airports into three risk tiers, with high risk airports assessed more frequently than medium and low risk airports. TSA's assessments of foreign airports are generally scheduled during the same site visit as air carrier inspections for a certain location, and the same team of inspectors generally conducts both the airport assessment and air carrier inspections. According to TSA, it generally takes 3 to 7 days to complete a foreign airport assessment.
However, the amount of time and number of team members required to conduct an assessment vary based on several factors, including the size of the airport and the threat level to civil aviation in the host country. TSA uses a multistep process to plan and conduct assessments of foreign airports. Specifically, TSA must obtain approval from the host government to conduct an airport assessment, and schedule the date for the on-site assessment. After conducting an entry briefing with host country and airport officials, the TSA team conducts an on-site visit to the airport. During the assessment, the team of inspectors uses several methods to determine a foreign airport's level of compliance with 39 International Civil Aviation Organization (ICAO) standards and five ICAO recommended practices, including conducting interviews with airport officials, examining documents pertaining to the airport's security measures, and conducting a physical inspection of the airport. ICAO standards and recommended practices address operational issues at an airport, such as ensuring that passengers and cargo are properly screened and that unauthorized individuals do not have access to restricted areas of an airport. ICAO standards and recommended practices also address non-operational issues, such as whether a foreign government has implemented a national civil aviation security program for regulating security procedures at its airports and whether airport officials who are responsible for implementing security controls are subject to background investigations, are appropriately trained, and are certified according to the foreign government's national civil aviation security program. At the close of an airport assessment, TSA inspectors are to brief foreign airport and government officials on the results. TSA inspectors also prepare a report in TSA's Global Risk Analysis and Decision Support System (GRADS) detailing their findings on the airport's overall security posture and security measures, which may contain recommendations for corrective actions and must be reviewed by TSA field and headquarters management. As part of the report, TSA assigns a vulnerability score to each ICAO standard and recommended practice assessed, as well as an overall vulnerability score for the airport, which corresponds to the level of compliance for each ICAO standard and recommended practice TSA assesses. Further, according to TSA officials, cargo experts in TSA headquarters review the cargo portion of each airport assessment before the assessment report is finalized. Afterward, TSA shares a summary of the results with the foreign airport and host government officials. In some cases, TSA requires air carriers to implement security procedures, such as requiring air carrier employees to guard the aircraft while on the tarmac, to address any deficiency that TSA identified during a foreign airport assessment through the issuance of security directives and emergency amendments. If the Secretary of Homeland Security determines that an airport does not maintain and carry out effective security measures, he or she shall, after advising the Secretary of State, take action, which generally includes notification to the appropriate authorities of the country of security deficiencies identified, notification to the general public that the airport does not maintain effective security measures, and modification of air carrier operations at that airport.
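The risk-informed scheduling described at the start of this appendix can be sketched as below. The tier-to-interval mapping is entirely hypothetical, since TSA's actual assessment frequencies are sensitive and omitted from this report; only the three-tier structure comes from the text.

```python
# Hypothetical sketch of tier-driven assessment scheduling. The three-tier
# structure comes from the report; the intervals are invented, as TSA's
# actual frequencies are sensitive and not disclosed here.
from datetime import date, timedelta

ASSESSMENT_INTERVAL = {
    "high": timedelta(days=365),        # assessed most frequently
    "medium": timedelta(days=2 * 365),
    "low": timedelta(days=3 * 365),     # assessed least frequently
}

def next_assessment_due(last_assessed: date, risk_tier: str) -> date:
    """Return the next assessment due date implied by an airport's tier."""
    return last_assessed + ASSESSMENT_INTERVAL[risk_tier]

print(next_assessment_due(date(2017, 6, 1), "high"))  # 2018-06-01
print(next_assessment_due(date(2017, 6, 1), "low"))   # 2020-05-31
```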
Appendix IV: Comments from the Department of Homeland Security

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Nathan Anderson, (206) 287-4804 or andersonn@gao.gov.

Staff Acknowledgments

In addition to the contact above, Christopher Conrad (Assistant Director), Paul Hobart (Analyst-in-Charge), Hiwotte Amare, Charles Bausell, Dominick Dale, Pamela Davidson, Wendy Dye, Mike Harmond, Eric Hauswirth, Ryan Lester, Benjamin Licht, and Tom Lombardi made key contributions.
Why GAO Did This Study

According to TSA, the federal agency responsible for securing the nation's civil aviation system, the introduction of explosive devices in air cargo shipments is a significant threat. To mitigate this threat, TSA is to review the security procedures carried out by all air carriers with U.S.-bound flights and at foreign airports servicing those air carriers. In addition, TSA assesses the commensurability of foreign countries' air cargo security programs. GAO was asked to evaluate TSA's progress in assessing and mitigating air cargo security risks. This report addresses (1) steps TSA takes to help ensure that U.S.-bound air cargo is secure, (2) the status of TSA's efforts to recognize and monitor foreign governments' air cargo security programs, and (3) the extent to which TSA measures the effectiveness of its efforts to secure U.S.-bound air cargo. GAO reviewed TSA policies and procedures, analyzed TSA program data, observed a nongeneralizable sample of 17 air carrier inspections at two foreign airports (selected based on high air cargo volume and other factors), and interviewed TSA, foreign government, and air carrier representatives.

What GAO Found

The Transportation Security Administration (TSA) inspects air carriers and assesses foreign airports to help ensure the security of U.S.-bound air cargo.

Air carrier inspections. GAO observed 17 air carrier inspections and found that TSA inspectors consistently followed TSA procedures. Further, GAO's analysis of TSA data found air carriers were in full compliance with cargo security requirements in 84 percent of the nearly 5,000 cargo inspections conducted during fiscal years 2012 through 2017. TSA officials were able to resolve a majority of the violations identified during the inspection process.

Foreign airport assessments. GAO analysis of TSA data found that foreign airports fully complied with international air cargo security standards in about 75 percent of the assessments that TSA conducted during fiscal years 2012 through 2017. As of the end of 2017, foreign officials had addressed about 40 percent of the non-compliance issues. TSA continues to work with foreign officials to address the remaining non-compliance issues.

As of June 2018, TSA had recognized the national cargo security programs (NCSP) of the European Union and 12 other countries as commensurate with TSA's, and TSA uses a variety of mechanisms to monitor NCSP implementation. TSA's process for NCSP recognition, which is voluntary, involves comparing air cargo security requirements to TSA's and conducting visits to the countries to validate their use. Once TSA determines a program is commensurate with TSA's, it monitors NCSP implementation through regular air carrier inspections, foreign airport assessments, and dialogue with government officials. TSA may decide not to recognize a country's NCSP but, instead, make recommendations for improving air cargo security. In countries where TSA has not recognized their NCSP, all U.S.-bound cargo is subject to TSA security requirements.

TSA's performance measures do not allow it to specifically determine the effectiveness of its efforts to secure U.S.-bound air cargo. For example, TSA measures whether foreign airports take actions to address all noncompliance issues identified during airport assessments, but such a broad measure could obscure progress made in resolving cargo-specific vulnerabilities.
Similarly, TSA officials stated that they are developing a measure to gauge the effectiveness of air carrier inspections, but they do not plan to differentiate efforts to secure air cargo from those for securing passengers. Developing and monitoring outcome-based performance measures that separately account for cargo noncompliance issues and violations could help TSA better determine the extent to which its foreign airport assessments and air carrier inspections improve the security of U.S.-bound air cargo. In addition, TSA measures the number of countries it has recognized in the NCSP Recognition Program, but this metric does not address the effectiveness of the program. Developing and monitoring outcome-based performance measures for the NCSP Recognition Program would help TSA better determine whether the resources invested are yielding the intended results. This is a public version of a sensitive report issued in October 2018. Information that TSA deemed to be sensitive is omitted from this report.

What GAO Recommends

GAO is recommending that TSA develop and monitor outcome-based performance measures to assess the effectiveness of (1) the cargo portion of foreign airport assessments, (2) air carrier cargo inspections, and (3) the NCSP Recognition Program. TSA concurred with the recommendations.
Background

The ACV is being developed as a partial or full replacement for the AAV, which is a tracked (non-wheeled) vehicle with capability to launch from ships to reach the shore carrying up to 21 Marines at a speed of up to approximately 6 knots. This speed effectively limits its range for traveling from ship to shore to no farther than 7.4 nautical miles. In order to upgrade the AAV to meet current threats and establish a path toward an enhanced platform, DOD and the Marine Corps implemented an incremental approach. The first step was to improve the AAVs' protection from threats such as improvised explosive devices by installing enhanced armor and other equipment—referred to as survivability upgrades—efforts which are currently underway. The second step was to establish a plan to replace the AAV with a new vehicle, the ACV, which would develop and enhance capabilities in three incremental steps:

ACV 1.1 would be a wheeled vehicle that provides improved protected land mobility but limited amphibious capability. In operations, it is expected to be part of an amphibious assault through the use of a surface connector craft to travel from ship to shore. This increment would leverage prototypes, demonstration testing, and other study results from the previously suspended Marine Personnel Carrier program.

ACV 1.2 would have improved amphibious capability, including the ability to self-deploy and swim to shore. The development phase of the second ACV increment (ACV 1.2) is scheduled to begin in February 2019.

ACV 2.0 would focus on exploring technologies to attain higher water speed capability.

The ACV 1.1 program was initiated in 2014 and development of ACV 1.1 vehicles started in November 2015. The remainder of this report is focused on development and acquisition of the ACV 1.1, which we will refer to as ACV. The Marine Corps acquisition of the ACV employs a two-phase strategy for selecting a contractor to produce the ACV fleet. In the first phase, the program issued a solicitation for offerors to submit proposals and provided for award of multiple contracts for each contractor to design and develop 16 prototypes for performance assessment. In the second phase, referred to as the down-select process, after testing the prototypes, the Marine Corps intends to select a single contractor to continue into the start of production. The Marine Corps received five initial proposals and ultimately awarded contracts to BAE and SAIC to develop the ACV prototypes. The Marine Corps considered the ACV to be a substantially non-developmental item because both contractors' designs were based on vehicles that were already in production and deployed by other militaries. Figure 1 depicts the BAE and SAIC prototype vehicles. After testing the prototypes, the Marine Corps plans to select a single contractor to continue into the production phase. The first prototypes were delivered in January 2017 and have since been undergoing developmental, operational, and live fire testing. Developmental testing assesses whether the system meets all technical requirements and is used to verify the status of technical progress, determine that design risks are minimized, substantiate achievement of contract technical performance, and certify readiness for initial operational testing. ACV developmental testing includes testing for sustainability, system survivability, and water and land mobility.
Operational testing (assessment) is the field test, under realistic conditions, for the purpose of determining effectiveness and suitability of the weapons for use in combat by typical military users. Live fire testing is used to demonstrate vehicle capability against a range of ballistic and non-ballistic threats expected to be encountered in the modern battlefield, such as improvised explosive devices, among others. In January 2018 the Marine Corps started an operational assessment, which was scheduled to be completed in March 2018. The assessments consist of field tests, under realistic conditions, to inform the decision to enter production. Ongoing test results, including the operational assessment, will be used to inform the ACV June 2018 production decision. Figure 2 is a timeline of the ACV program's progress and plans to full capability. The ACV program plans to produce at least 208 vehicles after exercising contract options for 2 years of low-rate production of 30 vehicles each year starting in 2018 and then exercise options for 2 years of full-rate production for the remaining 148 or more vehicles starting in 2020. In addition to testing the prototype vehicles, the program is holding a production readiness review that started in November 2017 and, according to program officials, they will keep the review open until April 2018. During this review, the program will determine whether the designs are ready for production and whether the contractors have accomplished adequate production planning to enter production. Officials from DCMA, which conducts contract performance oversight, have provided support in assessing production readiness. After receiving the proposals for the production down-select, the program will hold a system verification review in April 2018 to verify that the performance of the ACV prototypes meets capability requirements and performance specifications. This report represents the last in the series of reports we are to issue in response to the fiscal year 2014 National Defense Authorization Act, which contains a provision that we review and report annually on the ACV program until 2018. Previously, in October 2015, we found that the Marine Corps made efforts to adopt best practices and minimize acquisition risk, including adopting an incremental approach to update capabilities, using proven technologies, increasing competition, and awarding fixed-price incentive contracts for much of the development work. In April 2017, we found that DOD's life cycle cost estimate for ACV 1.1 of about $6.2 billion fully or substantially met the criteria for the four characteristics of a high-quality, reliable cost estimate. However, we also found that changes the Marine Corps made to the acquisition schedule—partly in response to a stop work order following a bid protest that was denied by GAO in March 2016—raised acquisition risk by increasing the overlap of development activities, such as testing of the vehicles, with production. This is a risk we had identified in a previous report. As a result, we recommended that the Marine Corps delay the production decision until 2019. DOD did not concur with that recommendation.

ACV Program Is on Track to Meet Development Cost Goals with No Additional Schedule Delays

Costs for the development phase of ACV are on track to meet cost goals established at the start of development, based on a recent Navy estimate, the ACV program office, and reporting from the contractors.
In September 2017, the ACV program's Defense Acquisition Executive Summary Report for ACV provided a Navy cost estimate for development of $750.7 million, less than the $810.5 million baseline established at the start of development in November 2015. Program officials also indicated that the ACV program was on track to meet cost goals. They noted that the contractors have not contacted the government to negotiate an increase in billing prices, as of December 2017. Since both of the contractors have delivered all 16 of their required prototypes and the manufacturing of the prototypes is the largest anticipated portion of ACV development contract costs, most of the costs associated with the manufacturing of the prototypes have likely been realized. The Marine Corps made efforts to reduce cost risk to the government by adopting a fixed-price incentive (firm target) contract type for the construction of the prototype vehicles. As we previously reported in October 2015, the Marine Corps planned to award hybrid contracts to each of the ACV development contractors, which would apply different pricing structures for different activities. The Marine Corps awarded the contracts in November 2015 as planned. Most critically, a fixed-price incentive contract type is being used for items in the contract associated with the manufacturing of the development prototypes, which was anticipated to be the largest portion of ACV development contract costs. Under this contract type, the government's risk is generally limited to the contract's price ceiling. Incentive contracts are appropriate when a firm-fixed-price contract is not appropriate and the required supplies can be acquired at lower costs by relating the amount of profit or fee to the contractor's performance. According to the Federal Acquisition Regulation, since it is usually to the government's advantage for the contractor to assume substantial cost responsibility and an appropriate share of the cost risk, fixed-price incentive contracts are preferred over cost-reimbursement incentive contracts when contract costs and performance requirements are reasonably certain. The fixed-price incentive (firm target) contract type provides for adjusting profit and establishing the final contract price by application of a formula based on the relationship of total final negotiated cost to total target cost. The final price is subject to a price ceiling, negotiated at the outset. If the final negotiated cost exceeds the price ceiling, the contractor absorbs the difference. (A hypothetical numeric sketch of this pricing formula appears below.) As we also previously reported, however, the Marine Corps received a waiver to forgo the establishment of a certified Earned Value Management System for the ACV program, which reduces the regularly available cost, schedule, and performance data available for the program to review. The ACV program office and DOD also indicated that they anticipate production costs will be within goals established at the start of development, though key production costs have not yet been determined. The program's development contracts with the two competing contractors contain fixed-price incentive options for 4 years of production. The pricing of the production vehicles will not occur, however, until DOD makes a production decision in June 2018 and negotiates the final terms and exercises the production option with one of the contractors. The Marine Corps has made no major changes to the ACV acquisition schedule since we previously reported on the program in April 2017.
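Returning to the fixed-price incentive (firm target) structure described above, the following minimal sketch illustrates the price-adjustment formula with entirely hypothetical numbers; the ACV contracts' actual target cost, ceiling, and share ratio are not disclosed in this report.

```python
# Hypothetical sketch of fixed-price incentive (firm target) pricing.
# All values are invented for illustration; they are not the ACV terms.

def fpif_final_price(final_cost, target_cost=100.0, target_profit=10.0,
                     ceiling_price=120.0, contractor_share=0.30):
    """Final contract price (in $ millions) for a given final cost."""
    # Profit adjusts by the contractor's share of any cost under/overrun.
    profit = target_profit + (target_cost - final_cost) * contractor_share
    # The negotiated ceiling caps the government's exposure; beyond it,
    # the contractor absorbs the difference.
    return min(final_cost + profit, ceiling_price)

print(fpif_final_price(90.0))   # underrun rewarded:  103.0
print(fpif_final_price(110.0))  # overrun shared:     117.0
print(fpif_final_price(130.0))  # capped at ceiling:  120.0
```

With these illustrative numbers, a $10 million underrun raises the contractor's profit to $13 million, a $10 million overrun cuts it to $7 million, and a $30 million overrun leaves the contractor absorbing $10 million of cost above the $120 million ceiling.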
In that April 2017 report, we found that the production decision was moved from February to June 2018 after a stop work order was issued to the contractors in response to a bid protest from a vendor that was not selected for one of the ACV development contracts. A senior program official emphasized the importance of keeping the ACV acquisition on schedule because the capability it provides is complementary to a broader set of capability updates across multiple platforms that the Marine Corps is in the process of procuring.

ACV May Enter Production with Manufacturing Maturity That Does Not Meet Best Practices

The ACV program office is in the process of conducting tests and assessments to determine if the program is on track to meet the criteria to enter production, but program officials told us the Navy—which has the authority to approve major acquisition milestone decisions for the program—may choose to start low-rate production without meeting established best practices for manufacturing maturity. At the start of development, DOD established criteria for entering production in areas such as capability performance and the status of the contractors' manufacturing readiness to manufacture the ACV vehicles. Leading up to the production decision, the program is engaged in a number of activities such as the operational assessment and production readiness review to inform the decision to start production. The production readiness review has a critical role in informing the decision to enter production because it represents an opportunity for the program to determine the maturity of the contractor's manufacturing process and assess potential risks related to cost, schedule, or performance. Our previous reviews about manufacturing best practices found that identifying manufacturing risks early in the acquisition cycle and assessing those risks prior to key decision points, such as the decision to enter production, reduces the likelihood of cost growth and potential delays. The ACV program has used the DOD Manufacturing Readiness Level (MRL) Deskbook to identify levels of manufacturing capability and establish targets for minimal levels of manufacturing readiness at specific acquisition milestones. The ratings are applied to various risk areas such as design, materials, process capability and control, and quality management. Table 1 shows the basic MRL definitions provided by the Joint Defense Technology Panel. The MRL Deskbook recommends that a program demonstrate an MRL of 8 by the time of the low-rate production decision. However, GAO's previously identified best practices for managing manufacturing risks recommend programs reach a higher level—MRL-9—for the risk area of process capability and control before entering low-rate production. At MRL-9, a program is expected to have its applicable manufacturing processes in statistical control. The MRL Deskbook recommends that a program achieve an MRL-9 at the start of full-rate production. The Marine Corps has eliminated manufacturing capability as a criterion for consideration in the down-select production decision. In the solicitation issued to the two competing contractors for the production decision in December 2017, the Marine Corps identified two criteria that would be considered to determine the winner of the down-select competition for the production decision. They are, in descending order of importance: (1) technical performance of the prototype vehicles and (2) the contractors' submitted cost proposals.
Previously, the ACV acquisition strategy and development contracts identified five criteria for the selection process, with manufacturing capability as the second most important factor (behind technical performance). The development contracts stipulated that the government reserved the right to adjust the factors and their order of importance prior to the release of the solicitation for the production down-select decision. Program officials said that narrowing the down-select factors to performance test results and cost was in line with the original intent of the program to use the best value tradeoff process described in the Federal Acquisition Regulation and that the revised criteria were appropriate for a non-developmental item such as the ACV. While the program removed manufacturing capabilities from its criteria for selecting the contractor for production, ACV program officials are still assessing manufacturing readiness to support their production decision. Program officials stated that they could enter production at a lower level of manufacturing readiness than DOD guidance or GAO-identified best practices suggest. The program started a production readiness review in November 2017 to determine the contractors' respective manufacturing maturity. According to program officials, they will keep the review open until April 2018, at which point the program will make a determination about the contractors' manufacturing readiness levels. The program office confirmed that the ACV criterion for entering production is to achieve an MRL-8 but noted that it is possible that the program could choose to enter into production without an overall MRL-8. Program officials stated that if there are any specific risk areas that are assessed below that threshold, the program office will define the risk and make a recommendation to the Navy for entry into production based on whether or not they consider the risk acceptable. To help inform its determination, program officials said that they will review the manufacturing readiness assessments produced by the contractors, as well as reviews by DCMA, which is responsible for assisting with contract oversight. Because the two contractors were still in competition at the time of the release of this report, we are unable to publicly report additional, more detailed information about production readiness or performance tests. However, we have previously found that programs with insufficient manufacturing knowledge at the production decision face increased risk of production quality issues, cost growth, and schedule delays. Entering the production phase of the ACV acquisition with manufacturing readiness levels lower than those recommended by DOD guidance and GAO-identified best practices would increase the likelihood of outcomes associated with insufficiently mature manufacturing capabilities, such as production quality issues and schedule delays. The Marine Corps has already been authorized funding to start production and plans to exercise options in 2018 to produce 30 vehicles for the first year of low-rate production. However, the Marine Corps has two upcoming decisions that would provide opportunities to refocus on manufacturing readiness for the ACV—specifically, the decision to enter into the second year of low-rate production in 2019 for 30 vehicles, and the decision to enter the first year of full-rate production in 2020 and acquire 56 of the remaining 148 vehicles.
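The MRL gate implied by the guidance discussed above can be expressed as a simple check. The risk areas and the MRL 8/9 thresholds come from the report; the assessed values below are hypothetical.

```python
# Hypothetical sketch of an MRL gate for production decisions. Risk areas
# and thresholds reflect the guidance discussed above; values are invented.

LRIP_THRESHOLD = 8  # low-rate production (per the MRL Deskbook)
FRP_THRESHOLD = 9   # full-rate production

assessed_mrls = {
    "design": 8,
    "materials": 8,
    "process capability and control": 7,
    "quality management": 8,
}

def mrl_shortfalls(assessed, threshold):
    """Return the risk areas assessed below the required MRL."""
    return {area: lvl for area, lvl in assessed.items() if lvl < threshold}

gaps = mrl_shortfalls(assessed_mrls, LRIP_THRESHOLD)
if gaps:
    print("Do not exercise the production option; shortfalls:", gaps)
else:
    print("All risk areas meet the threshold; proceed to production.")
```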
Acquiring additional vehicles before ensuring sufficient manufacturing maturity could raise the risk that the contractor may not be sufficiently prepared for continued production, which could result in delays in delivery of acceptable vehicles or additional costs to the government.

Conclusions

The Marine Corps has long identified the need for the enhanced capabilities envisioned through the ACV program and is nearing the potential production of such a vehicle. Following the cancellation of the EFV program after the expenditure of $3.7 billion, the ACV program represents an opportunity to follow a better acquisition approach. It is too early to determine whether the contractors will meet targets for production readiness by the time of the production decision, but the program office is considering entering production without meeting the recommended manufacturing maturity levels established by DOD or GAO-identified best practices. We have already identified the ACV program as adopting an aggressive acquisition schedule in which the amount of concurrent developmental testing and production is greater than in typical acquisition programs. In fiscal year 2018, Congress authorized funding for the program to start production, but the decision to enter a second year of low-rate production and the decision to start full-rate production represent opportunities for the ACV program to verify the manufacturer has achieved a sufficient level of readiness before commencing production of the bulk of vehicles. If the Marine Corps does not take steps to ensure that the contractor's manufacturing readiness is sufficiently mature, as demonstrated through MRLs, prior to committing to additional production beyond the first year of low-rate production, there is an increased risk of production quality issues, cost growth, and schedule delays.

Recommendations for Executive Action

We are making two recommendations to DOD.

The Secretary of the Navy should take steps to ensure that the Marine Corps not enter the second year of low-rate production until after the Marine Corps has determined that the contractor has achieved an MRL of at least 8 for all risk areas. (Recommendation 1)

The Secretary of the Navy should take steps to ensure that the Marine Corps not enter full-rate production until the Marine Corps has determined that the contractor has achieved an MRL of at least 9 for all risk areas. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this product to DOD for comment. In its comments, reproduced in appendix I, DOD partially concurred with GAO's recommendations. DOD agreed that manufacturing readiness should be assessed prior to entering both the second year of low-rate production and the start of full-rate production, and plans to do so. DOD acknowledged that the MRL Deskbook provides best practices for identifying risks, but noted that the ACV program is not required to follow it. DOD noted that it may be reasonable to proceed into manufacturing at lower MRLs, if steps to mitigate identified risks are taken. However, DOD disagreed that not demonstrating a specified MRL for any individual risk area, in itself, should delay the start of either production milestone. DOD expressed concern that delaying subsequent years of production, if MRLs are not at the levels recommended, could lead to counterproductive breaks in production. We agree that DOD does not require programs to adopt the MRL Deskbook and that the Deskbook represents best practices for minimizing production risk.
However, we also believe that demonstrating the MRL levels recommended in the MRL Deskbook for all risk areas mitigates increased risk associated with the aggressive schedule pursued by the ACV program—about which we have previously expressed concerns. We believe our recommendation to achieve an overall MRL-8 by the second year of low-rate production is a reasonable goal, considering it gives the ACV program an additional year after the point at which the MRL Deskbook recommends reaching MRL-8—the start of low-rate production. In addition, ensuring that all manufacturing readiness risk areas are at MRL-9 for the start of full-rate production, as recommended by best practices in the MRL Deskbook, would help further alleviate risks associated with the program's aggressive schedule. We appreciate DOD's concerns about delaying subsequent years of production if MRLs have not reached those identified in the best practices in the MRL Deskbook, but note that not doing so increases the likelihood of production quality issues that could lead to cost growth and schedule delays in future years. Therefore, we made no changes to the recommendations in response to the comments.

We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition and Sustainment; the Secretary of the Navy; and the Commandant of the Marine Corps. This report also is available at no charge on GAO's website at http://www.gao.gov. Should you or your staff have any questions on this report, please contact me at (202) 512-4841 or ludwigsonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Defense

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact:

Staff Acknowledgments:

In addition to the contact named above, Bruce H. Thomas (Assistant Director), Matt Shaffer (Analyst in Charge), Pete Anderson, Alexandra Jeszeck, Jennifer Leotta, Roxanna Sun, and Marie Ahearn made key contributions to this report.
Why GAO Did This Study In June 2018, the United States Marine Corps plans to select a contractor and begin low-rate production for the ACV, a vehicle used to transport Marines from ship to shore under hostile conditions. The ACV will replace all or part of the current Assault Amphibious Vehicle fleet. The National Defense Authorization Act for Fiscal Year 2014 included a provision for GAO to annually review and report on the ACV program until 2018. This report, GAO's last under that provision, assesses the extent to which the Marine Corps is making progress toward (1) meeting cost and schedule goals for the ACV program and (2) demonstrating manufacturing readiness. GAO reviewed program cost estimates, updated schedules, and program assessments of test results and production readiness, as well as compared ACV acquisition efforts to DOD guidance and GAO-identified best practices. GAO also interviewed program and testing officials, and visited both ACV primary assembly locations. What GAO Found The first version of the Amphibious Combat Vehicle (ACV 1.1) is on track to meet development cost goals with no additional anticipated delays for major acquisition milestones. With regard to costs, the development phase of ACV 1.1 is on pace not to exceed the cost goals established at the start of development, based on a recent Navy estimate, ACV program office assessments, and contractor reporting. For example, a September 2017 program progress review reported a Navy estimate of the cost of development at $750.7 million, less than the $810.5 million baseline established at the beginning of development. With regard to schedule, the ACV program has made no major changes to the acquisition schedule since GAO previously reported on the program in April 2017. ACV 1.1 program officials are in the process of preparing to down-select to a single contractor and enter low-rate production in June 2018, start a second round of low-rate production the following year, and begin full-rate production in 2020. ACV 1.1 may be followed by the acquisition of other versions (ACV 1.2 and ACV 2.0) with advanced capabilities such as higher water speeds. The ACV program is preparing to start production of ACV 1.1, which includes determining that the contractors' manufacturing capabilities are sufficiently mature. However, program officials are considering entering production with a lower level of manufacturing maturity than called for in Department of Defense (DOD) guidance or GAO-identified best practices. The ACV program measures manufacturing maturity with manufacturing readiness levels (MRL) for risk areas such as design, materials, process capability and control, and quality management. DOD guidance for weapons acquisition production recommends that programs achieve an MRL of 8 across all risk areas before entering low-rate production and that a program achieve an MRL of 9 at the start of full-rate production. GAO's previous reviews of manufacturing best practices found that achieving manufacturing maturity, identifying production risks early in the acquisition cycle, and assessing those risks prior to key decision points, such as the decision to enter production, reduce the likelihood of quality issues, cost growth, and delays. The Marine Corps contract option for producing the first round of low-rate production for ACV 1.1 will be exercised after June 2018; the contract also contains additional options for production vehicles.
Deciding to proceed with the second round of low-rate production and with the start of full-rate production before meeting the called-for manufacturing readiness criteria increases the risk that ACV 1.1 will experience delays and cost growth. What GAO Recommends GAO recommends the Marine Corps (1) not enter the second year of low-rate production for ACV 1.1 until after the contractor has achieved an overall MRL of 8 and (2) not enter full-rate production until achieving an overall MRL of 9. DOD partially concurred with both recommendations, but noted that it is reasonable to proceed at lower MRLs if steps are taken to mitigate risk. GAO made no changes to its recommendations in response to these comments.
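The MRL gating described above amounts to a threshold check across every risk area before each production decision. The following is a minimal sketch of that logic, not a depiction of any actual program tool; the risk-area names come from the report, but the scores, function names, and the mapping of thresholds to decisions are illustrative assumptions based on the MRL Deskbook recommendations discussed above.

```python
# Minimal sketch of an MRL production-gate check. Risk-area names follow the
# report; the scores below are hypothetical. Thresholds reflect the MRL
# Deskbook recommendations discussed above: MRL 8 before the second year of
# low-rate production, MRL 9 before full-rate production.

MRL_THRESHOLDS = {
    "second_year_lrip": 8,  # second year of low-rate production
    "full_rate": 9,         # start of full-rate production
}

def production_gate(decision, risk_area_mrls):
    """Return (ready, shortfalls): ready is True only if every risk area
    meets the MRL threshold for the given production decision."""
    threshold = MRL_THRESHOLDS[decision]
    shortfalls = [area for area, mrl in risk_area_mrls.items() if mrl < threshold]
    return (not shortfalls, shortfalls)

# Hypothetical assessment across the risk areas named in the report.
assessed = {
    "design": 8,
    "materials": 8,
    "process capability and control": 7,
    "quality management": 8,
}

ready, gaps = production_gate("second_year_lrip", assessed)
print(ready)  # False
print(gaps)   # ['process capability and control']
```

The design choice mirrors the report's position: a single risk area below the threshold is enough to fail the gate, because production quality problems can originate in any one area.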
Background Congressional Actions Related to DOD's Organizational and Management Challenges DOD has historically faced organizational and management challenges that can limit effective and efficient coordination across the department to fulfill its mission, and Congress has taken steps to address these challenges through, among other things, legislation. For example, in the early 1980s, Congress expressed concern that DOD's structure primarily served the needs of the services and encouraged interservice rivalries that led to operational failures. In response, Congress passed the Goldwater-Nichols Department of Defense Reorganization Act of 1986 to improve the management and administration of the department, among other purposes. One of the changes emanating from this act was specifying the military department secretaries' responsibility for training and equipping forces, while making clear that the military service chiefs were not in the chain of command for military operations. The act also required that military personnel selected for promotion to brigadier general or rear admiral (lower half) have joint duty experience unless waived by the Secretary of Defense or an authorized official. However, shortfalls in strategic integration at DOD—how DOD and the military services align their efforts and resources across different regions, functions, and domains—continue. Congress intended that section 911 of the NDAA for Fiscal Year 2017 improve strategic integration across the organizational and functional boundaries of DOD by, among other things, requiring the Secretary of Defense to develop an organizational strategy to advance a collaborative culture across DOD and create cross-functional teams to address critical objectives and outputs. DOD's External and Internal Cross-Functional Team Studies As required by section 911 of the NDAA for Fiscal Year 2017, DOD awarded a contract to study how best to implement effective cross-functional teams in DOD. The study, conducted by McKinsey & Company and completed in August 2017, presented findings on leading practices for implementing cross-functional teams that were drawn from a literature review, DOD and non-DOD case studies, and interviews. It identified seven critical factors for cross-functional team success: (1) mission; (2) objective; (3) delegated authorities; (4) team membership; (5) ways of working; (6) collaborative environment; and (7) an implementation plan. While not required by the contract, the study also contained a checklist for implementing cross-functional teams, which includes recommendations to assist DOD in assembling, initiating, and operating a team. The checklist distinguished action items by implementation phases: prelaunch, at launch, throughout the project, and at the project's close. For example, the checklist suggested that, at launch, DOD should onboard the team and tailor training to the team's experience and timeframe. DOD transmitted the report to Congress in September 2017. ODCMO officials also began collecting information in March 2017 to conduct their own internal study of cross-functional teams within DOD to help inform their implementation of section 911. This internal study, completed in August 2017, evaluated four case studies of prior DOD cross-functional teams, including their structure, returns, and implementation costs. From the case studies, ODCMO officials identified lessons learned to inform the establishment and monitoring of cross-functional teams.
The ODCMO's internal study found that cross-functional teams require significant senior leader attention. For example, the Secretary of Defense was directly involved in the sampled cross-functional teams, and he publicly stated his support for the teams, gave the teams precedence over other programs, and endorsed non-standard funding practices to accelerate their work. Further, the Secretary of Defense regularly engaged with the teams. The study also found that DOD should provide team members with background information and the context behind the team's mission and goals. Finally, the internal study found that cross-functional teams had the most robust decision-making authority when it came to integration and implementation of the Secretary of Defense's priority initiatives. Leading Practices for Effective Cross-Functional Teams Through a review of literature and case studies as well as interviews with subject-matter experts, we identified eight leading practices for effective cross-functional teams, as shown in figure 1. These leading practices are similar to those identified by the McKinsey & Company contracted study and the ODCMO's internal study as well as leading practices for interagency collaboration that we previously identified. Further, we found that leading practices for implementing effective cross-functional teams include the key characteristics shown in table 1. DOD's Draft Organizational Strategy Addresses Statutory Elements, but DOD Has Not Outlined How It Will Advance a Collaborative Culture or Collaborated with Stakeholders The ODCMO developed a draft organizational strategy that addresses the two statutory elements required under section 911 of the NDAA for Fiscal Year 2017—identifying critical objectives and outputs that would benefit from the use of cross-functional teams, and providing for the appropriate use of these teams—but DOD has not issued that strategy as required by September 1, 2017. In addition, while the draft strategy contains the two required elements, it does not outline how DOD will achieve several future outcomes required under section 911 of the NDAA for Fiscal Year 2017 that are designed to advance a collaborative culture within the department. Further, ODCMO officials did not coordinate with key stakeholders, such as the Secretary of Defense, military departments, and defense agencies, in developing the organizational strategy. Our leading practices for collaboration highlight the value of agencies including stakeholders when defining and articulating a common outcome. DOD Has Developed, but Not Issued, a Draft Organizational Strategy That Includes Required Statutory Elements, but Has Not Outlined Its Approach for Advancing a Collaborative Culture The ODCMO developed a draft organizational strategy, but DOD did not issue the strategy by the required date of September 1, 2017, and had not issued it as of February 2018. The August 2017 draft organizational strategy we reviewed is intended to be an organizational design that focuses on the responsibilities, functions, and authorities of—and relationships between—the leaders of DOD components and those of cross-functional teams. It describes DOD's current organizational structure and processes and how they will change as a result of recent legislation and reform initiatives, and it describes best practices and lessons learned for implementing cross-functional teams, as well as areas that may benefit from the use of such teams.
Although the act required the Secretary of Defense to issue the strategy by September 1, 2017, the Acting DCMO told us that other reform initiatives and organizational changes had a higher priority and that he therefore did not take steps to finalize the strategy. ODCMO officials told us that they plan to align the strategy with the revised National Defense Strategy, which was released in January 2018, and the Agency Strategic Plan, which was expected to be issued in February 2018. We found that DOD's draft organizational strategy contains the two elements required under section 911 of the NDAA for Fiscal Year 2017. According to the act, among other things, the organizational strategy must (1) identify the critical objectives and other organizational outputs for the department that span multiple functional boundaries and would benefit from the use of cross-functional teams to ensure collaboration and integration across organizations within the department; and (2) provide for the appropriate use of cross-functional teams to manage such objectives and outputs. To address the first statutory element, the draft organizational strategy identifies several mission-focused and business-operations areas that would benefit from the use of cross-functional teams. For example, the strategy identifies three primary candidates among business operations: Military Health Systems reforms, financial auditability, and security clearance backlog mitigation. To address the second statutory element, the draft organizational strategy identifies considerations for the appropriate use of cross-functional teams. For example, the strategy states that cross-functional teams should be used only for the Secretary of Defense's highest-priority issues and that cross-functional teams require significant engagement with the Secretary of Defense and other top leadership. Section 911 of the NDAA for Fiscal Year 2017 also identifies several outcomes that DOD should achieve to advance a collaborative culture within the department; however, we found that DOD's draft organizational strategy does not clearly articulate how the department will achieve these outcomes. The act states that DOD's organizational strategy should, among other things: provide for the furtherance and advancement of a collaborative, team-oriented, results-driven, and innovative culture within the department that fosters an open debate of ideas and alternative courses of action, and supports cross-functional teaming and integration; improve the manner in which the department integrates the expertise and capacities of the functional components of the department for effective and efficient achievement of critical objectives and other organizational outputs that span multiple functional boundaries and would benefit from the use of cross-functional teams; improve the management of relationships and processes involving the Office of the Secretary of Defense, the Joint Staff, the combatant commands, the military departments, and the defense agencies with regard to such objectives and outputs; improve the ability of the department to work effectively in interagency processes with regard to such objectives and outputs in order to better serve the President; and achieve an organizational structure that enhances performance with regard to such objectives and outputs. We found that the draft strategy does not outline how the department will achieve these outcomes.
For example, the draft organizational strategy notes that DOD leaders recognize the department must fully embrace and operationalize the cultural attributes set forth in section 911, including a more collaborative, team-oriented, results-driven, and innovative culture; however, it does not identify actions the department will take to help ensure that leaders embrace these attributes, such as through guidance or training. When we asked how the draft organizational strategy will help achieve these outcomes, ODCMO officials stated that the strategy contains references to cultural attributes for the department. For example, the draft organizational strategy describes cultural attributes of the department's management and business operations, such as visibility across components and collaboration. However, ODCMO officials stated that they agree that the strategy could do more to address collaboration. The ODCMO officials said they originally interpreted section 911 to mean that the organizational strategy should focus on DOD's organizational structure, processes, and leading practices for implementing cross-functional teams, rather than on how to transform the department's culture more broadly. Nonetheless, the outcomes called for under the act refer to the need to advance a collaborative culture across the department. These officials also stated that they plan to revise the draft organizational strategy to include additional information on collaboration and information-sharing processes and systems, among other things. While not required to do so, OCMO, which will now lead the department's efforts to implement section 911, could utilize our leading practices for mergers and organizational transformations to revise the organizational strategy to address how the department will advance a culture that is collaborative, team-oriented, results-driven, and innovative. We previously reported on leading practices and implementation steps for mergers and organizational transformations that can help agencies transform their cultures so that they are more results-oriented, customer-focused, and collaborative. The leading practices and implementation steps listed in table 2 were built on the lessons learned from large private and public sector organizational mergers, acquisitions, and transformations. These leading practices state that organizations should ensure that top leadership drives the transformation by defining and articulating a succinct and compelling reason for change. Doing so helps employees and stakeholders understand the expected outcomes of the transformation and engenders not only their cooperation, but also their ownership of the outcomes. In addition, our leading practices state that organizations should establish a coherent mission and integrated strategic goals by adopting our leading practices for results-oriented strategic planning. Lastly, our leading practices state that organizations should include implementation goals and a timeline for achieving the transformation. By demonstrating progress toward these goals, the organization builds momentum, keeps employees excited about the opportunities change brings, and helps to ensure the transformation's successful completion. Incorporating these leading practices into its organizational strategy to better articulate how the department will achieve the outcomes that advance a collaborative culture across DOD—as section 911 of the NDAA required—would better position DOD to transform and meet its mission.
ODCMO Did Not Collaborate with Key Stakeholders, Including the Secretary of Defense, on Its Organizational Strategy ODCMO did not collaborate with key stakeholders on the development of the organizational strategy. Specifically, as of November 2017, ODCMO officials had not collaborated with or obtained input from the Secretary of Defense on the development of DOD's organizational strategy. The Acting DCMO noted that the Secretary of Defense has multiple competing priorities related to reorganizing the department, such as creating a separate CMO position required by the NDAA for Fiscal Year 2017, as well as other reform initiatives. In addition, ODCMO officials told us that they did not collaborate with other stakeholders, such as the military departments and defense agencies, on the development of the organizational strategy. According to a draft memorandum from the Acting DCMO to the Deputy Secretary of Defense, the Acting DCMO plans to recommend that the Deputy Secretary of Defense coordinate the review and approval of the organizational strategy with stakeholders such as the Chairman of the Joint Chiefs of Staff, the Director of Cost Assessment and Program Evaluation, and DOD's General Counsel. However, the memorandum did not specify other stakeholders, such as the military departments, the combatant commands, and defense agencies. ODCMO officials stated that their office plans to coordinate the review and approval of the strategy with other stakeholders, such as the military departments and defense agencies. However, as of November 2017, the officials had not provided documentation, such as a revised memorandum, showing specific plans to do so. Section 911 of the NDAA for Fiscal Year 2017 states that the Secretary of Defense should formulate and issue an organizational strategy that identifies the critical objectives and other organizational outputs for the department that span multiple functional boundaries and would benefit from the use of cross-functional teams. In addition, the act states that the organizational strategy should, among other things, improve the management of relationships and processes involving the Office of the Secretary of Defense, the Joint Staff, the combatant commands, the military departments, and the defense agencies with regard to such objectives and outputs. Our leading practices for collaboration state that when defining and articulating a common outcome, where appropriate, agencies should include stakeholders. In doing so, agencies can better address stakeholders' interests and expectations and gain their support in achieving the objectives of the collaboration. Without obtaining key stakeholder input on the development of the organizational strategy, such as from the Secretary of Defense, military departments, the combatant commands, and defense agencies, DOD may not be well positioned to issue an organizational strategy that reflects the Secretary of Defense's objectives and improves collaboration across the department. DOD Has Established One Secretary of Defense-Empowered Cross-Functional Team, and Draft Team Guidance Addresses Most Statutory Elements and Leading Practices DOD Established One Secretary of Defense-Empowered Cross-Functional Team In August 2017, the Secretary of Defense issued a memorandum authorizing a cross-functional team to address challenges with personnel vetting and background investigation programs within DOD.
Although the memorandum refers to section 951 of the NDAA for Fiscal Year 2017, which requires DOD to develop a plan to transfer responsibility for conducting DOD personnel background investigations to the Defense Security Service, ODCMO officials told us that the cross-functional team reviewing personnel vetting was established pursuant to section 911 requirements, as the team will report directly to the Secretary's office, among other things. Therefore, this team is considered a Secretary of Defense-empowered cross-functional team. The memorandum notes that a backlog of background investigations affects DOD's mission readiness, critical programs, and operations. According to the memorandum, this cross-functional team will conduct a full review of current personnel vetting processes to identify a redesigned process for DOD's security, suitability and fitness, and credential vetting. The cross-functional team's objectives are to develop options and recommendations to mitigate shortcomings, ensure necessary resourcing, and transform the personnel vetting enterprise. An ODCMO official told us that DOD had selected an interim leader for the team. DOD's Draft Guidance for Cross-Functional Teams Addresses Most Required Statutory Elements, but Could More Fully Incorporate Leading Practices ODCMO officials developed draft guidance for Secretary of Defense-empowered cross-functional teams. The draft guidance fully addresses six and partially addresses one of the statutory elements required under section 911. We also found that the draft guidance fully addresses five leading practices, partially addresses two leading practices, and does not address one leading practice for effective cross-functional teams. Table 3 shows our assessment of the extent to which DOD's draft guidance meets required statutory elements. The draft cross-functional team guidance briefly describes the characteristics of a cross-functional team and highlights the team's direct reporting line to the Secretary of Defense, the team's delegated authorities, and team leader and member selection. The guidance also states expectations for cross-functional team members' dedication to the team and for leaders of functional components to support their participating staff. Further, DOD's draft guidance discusses the role of the teams in addressing complex, enterprise-wide issues, and discusses training for and operations of the cross-functional teams. The guidance additionally describes DOD's commitment to collaboration and integration across the department. Finally, we found that the draft guidance partially addresses the required statutory element of identifying key practices on leadership, organizational practice, collaboration, or functioning of cross-functional teams. The draft guidance discusses key practices for senior leaders on the functioning of cross-functional teams, but we found that it does not identify any practices on leadership, organizational practice, or collaboration. We also found that DOD's draft guidance for cross-functional teams could more fully incorporate leading practices for cross-functional teams, which are similar to those identified by the McKinsey & Company contracted study and the ODCMO's internal study as well as leading practices for interagency collaboration that we previously identified. Figure 2 shows our assessment of the extent to which DOD's draft cross-functional team guidance incorporates our leading practices for effective cross-functional teams.
We found that the draft guidance fully incorporates five of the leading practices for effective cross-functional teams: well-defined team structure, autonomy, senior management support, committed cross-functional team members, and well-defined team goals. In addition, the draft guidance partially addresses the leading practice for open and regular communication, as it states that teams will update the Secretary of Defense and senior staff at regular staff meetings to reflect on progress and seek feedback. The draft guidance, however, does not address information sharing and communication within the cross-functional team. Also, the draft guidance partially addresses the leading practice for empowered cross-functional team leaders by indicating that team leaders should report directly to the Secretary of Defense, select team members, and seek feedback from other federal agencies. Further, the guidance states that cross-functional team leaders will contribute to the performance evaluations of their team members. The guidance states that the Secretary of Defense will select the team leaders, but does not elaborate on what qualities the team leader should possess. Finally, the draft guidance does not address the leading practice for an inclusive team environment. For example, the draft guidance does not contain any reference to developing a unified team culture and trust among team members. ODCMO officials told us that they anticipate that the Secretary of Defense will review and approve this guidance, including a detailed terms of reference that addresses information on mechanics of team operations and guidance for each team. However, without initial guidance that fully addresses the required statutory elements in section 911 of the NDAA for Fiscal Year 2017 and incorporates leading practices, DOD's cross-functional teams may not be able to consistently and effectively address the Secretary of Defense's strategic objectives or further promote a collaborative culture within the department. DOD Has Developed, but Not Provided, Training for Its Presidential Appointees and Cross-Functional Team Members, and It Does Not Address All Statutory Requirements DOD Developed a Draft Training Curriculum for Presidential Appointees, but It Does Not Address All Required Statutory Elements and Has Not Been Provided to Appointees As of October 2017, the ODCMO had developed a draft training curriculum on cross-functional teams for presidential appointees, but this curriculum does not address all statutory requirements. Furthermore, as of February 2018, 22 individuals had been nominated by the President, confirmed by the Senate, and appointed to positions within the Office of the Secretary of Defense, but none had received the training required by section 911. Section 911 of the NDAA for Fiscal Year 2017 requires that, within 3 months of the appointment of an individual to a position in the Office of the Secretary of Defense appointable by and with the advice and consent of the Senate, the individual complete a course of instruction in leadership, modern organizational practice, collaboration, and the operation of cross-functional teams. The training requirement may be waived by the President upon a request by the Secretary of Defense if the Secretary of Defense determines in writing that the individual possesses, through training and experience, the skill and knowledge otherwise to be provided through a course of instruction.
ODCMO officials stated that they intend to recommend that the Secretary of Defense seek such a waiver; however, this requirement had not been waived for any appointees as of November 2017. In addition, according to an ODCMO official, DOD has not developed criteria for determining who would be eligible for such a waiver and on what basis. We found that the draft curriculum addresses only one of four required elements in section 911 of the NDAA for Fiscal Year 2017. Specifically, the draft curriculum addresses the required statutory element for training on the operation of cross-functional teams by including information on elements of successful teams and when to use them. It does not, however, incorporate the required statutory elements for leadership, modern organizational practice, or collaboration. According to the Acting DCMO, these appointees do not need this type of training because they are already experts in their field, have considerable leadership experience, and have likely already received similar training. However, our leading practices for a well-designed training program note that it is important for agencies to consider the need for continuous and lifelong learning, recognizing that learning is an investment in success rather than a cost to be minimized. In addition, our leading practices state that a core characteristic of a strategic training and development process is leadership commitment, meaning that agency leaders consistently demonstrate that they support and value continuous learning and set the expectation that effective training and development will improve individual and organizational performance. Further, as organizations are typically resistant to change and need top leadership to drive a successful organizational transformation, ensuring that senior officials receive this training will be important for DOD's overall organizational transformation to succeed in driving a more collaborative culture. Without providing training for top leadership within the Office of the Secretary of Defense that includes the required elements in section 911 of the NDAA for Fiscal Year 2017, or developing criteria for waiving the training, DOD may have difficulty implementing its new organizational strategy, as top leadership commitment is a key element of an organizational transformation. DOD Developed Training for Team Members That Addresses Statutory Requirements and Plans to Provide the Training Once Team Members Are Announced We found that DOD has developed a draft training curriculum for cross-functional team members and their supervisors that addresses required statutory elements, including the element focused on collaboration. This training has not been provided because no team members have been named for the one Secretary of Defense-empowered cross-functional team to address challenges with personnel vetting and background investigation programs within DOD. Section 911 of the NDAA for Fiscal Year 2017 requires that team members and their supervisors of Secretary-empowered cross-functional teams receive training in the elements of successful cross-functional teams, including teamwork, collaboration, and conflict resolution, and in appropriately representing the views and expertise of their functional components. Table 4 summarizes the requirements of section 911 of the NDAA for Fiscal Year 2017 and shows our assessment of the draft training curriculum against these required statutory elements.
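This kind of element-by-element assessment can be tallied mechanically. The sketch below is a minimal illustration, not GAO's actual assessment instrument; the element names come from section 911, and the ratings mirror the finding about the appointee curriculum described above (one of four elements addressed), but the rating labels and data structure are illustrative assumptions.

```python
# Minimal sketch of tallying a curriculum assessment against the four
# statutory elements in section 911. Ratings mirror the appointee-curriculum
# finding described above; labels and structure are illustrative assumptions.
from collections import Counter

assessment = {
    "leadership": "not addressed",
    "modern organizational practice": "not addressed",
    "collaboration": "not addressed",
    "operation of cross-functional teams": "fully addressed",
}

# Summarize how many elements fall into each rating.
tally = Counter(assessment.values())
print(tally)  # Counter({'not addressed': 3, 'fully addressed': 1})

# List the elements still needing coverage.
unmet = sorted(e for e, rating in assessment.items() if rating != "fully addressed")
print(unmet)  # ['collaboration', 'leadership', 'modern organizational practice']
```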
According to ODCMO officials, this training should take place soon after team members have been announced. In addition, ODCMO officials stated that they considered having an expert from another federal agency lead the training, but were prepared to conduct the training themselves if that expert was unavailable. Conclusions Congress has been encouraging DOD to undertake transformative organizational change, improve collaboration, and more effectively accomplish its missions across its military departments and functional organizations. While ODCMO officials drafted an organizational strategy that includes the two required statutory elements, the strategy does not address how the department will achieve several outcomes that advance a collaborative culture in the department, as required under section 911 of the NDAA for Fiscal Year 2017. A revised strategy that addresses how the department will achieve these outcomes and is consistent with our leading practices for mergers and organizational transformations would better position DOD to further a culture within the department that is collaborative, team-oriented, results-driven, and innovative. DOD could also address three other areas to improve the department's collaborative efforts. First, OCMO officials need to collaborate with key stakeholders across the department—such as the Secretary of Defense, military departments, the combatant commands, and defense agencies—to strengthen the organizational strategy and ensure a more successful implementation. Without this stakeholder input, the organizational strategy may meet resistance and not result in the desired organizational change. Second, DOD's guidance for cross-functional teams is critical to their consistent and effective implementation across the department. This guidance would also help ensure that such teams are provided with the leadership support and resources, among other things, to address the Secretary of Defense's strategic objectives and further promote collaboration across the department. Third, without providing training for presidential appointees to positions within the Office of the Secretary of Defense that includes leadership, modern organizational practice, collaboration, and the operation of cross-functional teams, or developing criteria for who could receive a waiver for this training and on what basis, DOD may have difficulty aligning the perspective of these leaders to most effectively bring about change when implementing its new organizational strategy. Recommendations for Executive Action We are making a total of four recommendations to the Secretary of Defense and the Chief Management Officer (CMO). The Secretary of Defense should ensure that: The CMO, in its revisions to the draft organizational strategy, address how the department will promote and achieve a collaborative culture, as required under section 911 of the NDAA for Fiscal Year 2017. The CMO could accomplish this by incorporating our leading practices on mergers and organizational transformations. (Recommendation 1) The CMO obtain stakeholder input on the development of the organizational strategy from key stakeholders, including the Secretary of Defense, the military departments, the combatant commands, and defense agencies. (Recommendation 2) The CMO fully address all requirements in section 911 of the NDAA for Fiscal Year 2017 and incorporate leading practices for effective cross-functional teams in guidance on Secretary of Defense-empowered cross-functional teams.
(Recommendation 3) The CMO either: (a) provide training for presidentially-appointed, Senate-confirmed individuals in the Office of the Secretary of Defense that includes the required elements—leadership, modern organizational practice, and collaboration—in section 911 of the NDAA for Fiscal Year 2017, or (b) develop criteria for obtaining a waiver and have the Secretary of Defense request such a waiver from the President for these required elements if the individual possesses—through training and experience—the skill and knowledge otherwise to be provided through a course of instruction. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with our recommendations. DOD also provided technical comments, which we incorporated where appropriate. DOD's comments are reprinted in their entirety in appendix III. We initially made our recommendations to the DCMO; however, because section 910 of the NDAA for Fiscal Year 2018 disestablished the position of DCMO on February 1, 2018, and established the position of CMO, we have updated our recommendations to be directed to the CMO. In response to our first recommendation, DOD emphasized the importance of collaboration across the department in pursuing DOD's goals. In response to our second recommendation, DOD stated that finalizing the organizational strategy has been dependent on finalizing the National Defense Strategy and the Agency Strategic Plan. DOD also noted that the reform teams established by the Deputy Secretary of Defense are aligned with strategic guidance. While DOD's efforts to establish these reform teams are notable, as we discussed in our report, these reform teams do not meet the requirements for cross-functional teams established pursuant to section 911 of the NDAA for Fiscal Year 2017. Finally, DOD concurred with our third and fourth recommendations and stated that criteria for waiving training for presidentially-appointed, Senate-confirmed individuals will be completed and appropriate waivers submitted to the President for key personnel by March 30, 2018. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and DOD's Chief Management Officer. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or FieldE1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Summary of Requirements in Section 911 of the National Defense Authorization Act for Fiscal Year 2017 Section 911 of the National Defense Authorization Act for Fiscal Year 2017 requires the Secretary of Defense to take several actions. Table 5 below summarizes some of these requirements, their due dates, and the dates completed. Appendix II: Identification of Leading Practices for Effective Cross-Functional Teams We identified leading practices for effective cross-functional teams and compared the Department of Defense's (DOD) steps to establish cross-functional teams against these leading practices. To identify the leading practices, we reviewed literature as well as five case studies of cross-functional teams.
In addition, we selected six academic and practitioner experts to interview based on their publications or research, prior testimony before the Senate Armed Services Committee on the implementation of cross-functional teams at DOD, and recommendations from DOD officials. We identified eight broad categories of leading practices associated with effective cross-functional teams: (1) open and regular communication, (2) well-defined team goals, (3) inclusive team environment, (4) senior management support, (5) well-defined team structure, (6) autonomy, (7) committed cross-functional team members, and (8) an empowered cross-functional team leader. To identify what is known from published research about factors contributing to effective cross-functional teams, we conducted a literature search for relevant articles published from January 1990 through September 2017. We conducted a search for relevant peer-reviewed articles in 19 databases, including JSTOR, Academic OneFile, and ProQuest. Key terms included various combinations of "cross-functional team," "best practice," "characteristics," "effective," and "success." From all database sources, we identified 46 relevant articles. We first reviewed each article's abstract for relevance in identifying factors that contribute to effective cross-functional teams. For the 17 articles that we found relevant and based on empirical research, we reviewed the full articles for methodological rigor. GAO social scientists read and assessed each study, using a standardized data collection instrument. The assessment focused on information such as the population examined, the research design and data sources used, and methods of data analysis. The assessment also focused on the quality of the data used in the studies as reported by the researchers, any limitations of data sources for the purposes for which they were used, and inconsistencies in reporting study results. A second GAO social scientist reviewed each completed data collection instrument to verify the accuracy of the information included. We determined that the studies were sufficiently sound to support their results and conclusions. We excluded articles that lacked enough information about their methodologies for us to evaluate them. We then reviewed the citations and literature reviews of the relevant articles for additional sources. After including these articles and excluding others, 14 articles remained, covering cross-functional teams in both the private and public sectors. We took several additional steps to identify leading practices. First, we reviewed five case studies developed by subject-matter experts on cross-functional teams and interagency task forces employing similar collaboration tactics for national security issues. We reviewed these studies for academic rigor and determined that we could use them to inform our leading practice development. Second, we reviewed three relevant congressional testimonies from a Senate Armed Services Committee hearing in June 2016 about the use of cross-functional teams for improving strategic integration within DOD and incorporated them as well into the identification of leading practices. Third, we interviewed six subject-matter experts on cross-functional teams, utilizing a semi-structured set of questions, and used their responses to inform our cross-functional team leading practices.
These experts include current and former government officials involved with cross-functional teams and academic researchers, who are listed below. Honorable Michael B. Donley—Former Secretary of the Air Force from 2008 to 2013, Dr. Amy Edmondson—Novartis Professor of Leadership and Management, Harvard Business School, Chris Fussell—Managing Partner at the McChrystal Group, former Navy SEAL and aide-de-camp to General Stanley McChrystal, Dr. Christopher J. Lamb—Distinguished Research Fellow, Center for Strategic Research in the Institute of National Strategic Studies, National Defense University, Honorable James R. Locher III—Former President and CEO, Project on National Security Reform, and Dr. Jeffrey Polzer—UPS Foundation Professor of Human Resource Administration, Harvard Business School. We documented our interviews with the selected subject-matter experts in a record of interview. To determine appropriate subject-matter experts to interview, we received recommendations from the Senate Armed Services Committee and DOD officials, and identified subject-matter experts who testified before Congress on the topic of cross-functional teams. We also solicited names of other cross-functional team experts during our initial subject-matter expert interviews. Additionally, we examined the business programs and research institutes at universities ranked in the top five by U.S. News & World Report and identified researchers with expertise in cross-functional teams. Finally, we identified subject-matter experts by reviewing the Academy of Management's Annual Meeting program from 2014 to 2016. The experts identified from this search were based in the United States and had papers in the program relating to cross-functional teams. We conducted a content analysis of cross-functional team practices identified in our literature review, the case studies, the congressional testimonies, and the subject-matter expert interviews. To do so, team members first reviewed the results sections from the scholarly articles, the texts of the case studies, the transcripts of the testimonies, and the records of interview from the subject-matter interviews in order to identify characteristics of effective cross-functional teams. Then the team members independently reviewed the characteristics to identify themes. They subsequently compared the themes and developed a series of conceptual categories to be used as a coding structure for the content analysis. To conduct the content analysis of all identified characteristics, two analysts independently assigned each identified characteristic from the sources to one or more categories and sub-categories. Then, the team members met to compare their categorization decisions and to discuss the differences. Any disagreements regarding the categorizations of the characteristics were discussed and reconciled. The team members then tabulated the number of characteristics in each category and sub-category and reached agreement on the final set of categories and sub-categories. We assessed the outcome of our content analysis by comparing leading practices we identified to the contractor and internal DOD studies, as well as to our key considerations for implementing interagency collaborative mechanisms.
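The two-coder step described above (independent categorization, comparison of decisions, reconciliation of disagreements, and tabulation by category) can be sketched briefly. The characteristics, category labels, and agreement measure below are illustrative assumptions; the methodology does not specify how, or whether, agreement was quantified.

```python
# Minimal sketch of a two-coder content-analysis comparison: two analysts
# independently assign each characteristic to a category, disagreements are
# flagged for reconciliation, and reconciled assignments are tabulated.
# All characteristics and category labels here are hypothetical.
from collections import Counter

coder_a = {
    "daily stand-ups": "open and regular communication",
    "charter with milestones": "well-defined team goals",
    "direct report to secretary": "senior management support",
}
coder_b = {
    "daily stand-ups": "open and regular communication",
    "charter with milestones": "well-defined team structure",
    "direct report to secretary": "senior management support",
}

# Flag characteristics the coders categorized differently.
disagreements = [c for c in coder_a if coder_a[c] != coder_b[c]]
agreement_rate = 1 - len(disagreements) / len(coder_a)
print(f"percent agreement: {agreement_rate:.0%}")  # 67%
print("to reconcile:", disagreements)               # ['charter with milestones']

# After discussion, tabulate characteristics per reconciled category.
reconciled = dict(coder_a)  # assume coder A's call prevailed in discussion
print(Counter(reconciled.values()))
```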
Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Tina Won Sherman (Assistant Director), Tracy Barnes, Leslie Bharadwaja, Arkelga Braxton, Adelle Dantzler, David Dornisch, Jessica Du, Michael Holland, Amie Lesser, Ned Malone, Judy McCloskey, Sheila Miller, Richard Powelson, Terry Richardson, Ron Schwenn, Jared Sippel, Pam Snedden, Sarah Veale, and Richard Zarrella made key contributions to this report.
Why GAO Did This Study DOD continues to confront organizational challenges that hinder collaboration. To address these challenges, section 911 of the NDAA for Fiscal Year 2017 directed the Secretary of Defense to issue an organizational strategy that identifies critical objectives that span multiple functional boundaries and would benefit from the use of cross-functional teams. Additionally, DOD is to establish cross-functional teams to support this strategy. The NDAA also included a provision for GAO to assess DOD's actions in response to section 911. This report evaluates the extent to which DOD, in accordance with statutory requirements and leading practices, has (1) developed and issued an organizational strategy, (2) established Secretary of Defense-empowered cross-functional teams, and (3) provided associated training for Office of the Secretary of Defense leaders. GAO analyzed DOD's draft organizational strategy, draft guidance on establishing cross-functional teams, and draft training curriculum. GAO also interviewed DOD officials and subject-matter experts and identified leading practices for effective cross-functional teams. What GAO Found The Department of Defense (DOD) has implemented some of the statutory requirements outlined in section 911 of the National Defense Authorization Act (NDAA) for Fiscal Year 2017 to address organizational challenges, but could do more to promote department-wide collaboration, as required under the NDAA. Specifically, DOD: Drafted an organizational strategy that includes the two required statutory elements, but does not outline how DOD will advance a more collaborative culture, as required by statute. Incorporating GAO's leading practices on mergers and organizational transformations, such as setting goals, would help DOD better advance a collaborative culture. Plans to coordinate review of the organizational strategy with some DOD offices, but has not followed GAO's leading practices for collaboration—to coordinate with key stakeholders, such as the Secretary of Defense and the military departments—in drafting the strategy. Without obtaining key stakeholder input, DOD may not be well positioned to improve collaboration across the department. Established one cross-functional team to address the backlog of security clearances and developed draft guidance for cross-functional teams that addresses six of seven required statutory elements and incorporates five of eight leading practices that GAO has identified for effective cross-functional teams (see figure). Fully incorporating all statutory elements and leading practices will help the teams consistently and effectively address DOD's strategic objectives. Developed a draft training curriculum for Presidential appointees in the Office of the Secretary of Defense. However, the curriculum addresses only one of four required statutory elements, and has not been provided to appointees. In addition, although the statute allows a waiver for this training, DOD has not developed criteria for such a waiver. What GAO Recommends GAO is making four recommendations to DOD, including revising its organizational strategy, collaborating with key stakeholders on the development of its organizational strategy, revising cross-functional team guidance, and providing training. DOD concurred with GAO's recommendations.
Background Identifying Concerns about Providers' Clinical Care As part of the credentialing and privileging process, VAMC officials are responsible for monitoring each provider's performance on an ongoing basis and identifying any concerns about clinical care that may warrant further review. VAMCs can identify concerns about providers' clinical care in a variety of ways, including the following: Ongoing monitoring. VHA requires VAMCs to conduct and document ongoing monitoring of each provider's performance at least twice a year through an ongoing professional practice evaluation. During this evaluation, a provider's performance is evaluated against benchmarks established by VAMC leadership that define acceptable performance, such as documenting patient visits appropriately and achieving specific patient outcomes. Peer review triggers. VHA has a separate process, called peer review, that VAMCs may use to review adverse events. While information collected as part of peer review is protected for quality improvement purposes and may not be used to take action against a provider, VAMCs can identify concerns about a provider's clinical care based on a trend of certain peer review outcomes over a specified period of time, referred to as triggers. VHA requires VAMCs to establish peer review triggers. An example of a peer review trigger is when a provider has two or more episodes of patient care within a 12-month period for which a peer determined that most experienced, competent providers would have managed the episodes differently. Complaints or incident reports. Concerns about a provider's clinical care can also be identified through complaints and incident reports. These can come from any individual with a concern, including patients, providers, or VAMC leadership. Tort claims. A filed or settled tort or malpractice claim can raise a concern about a provider that was not identified through ongoing monitoring or peer review. Reviewing Concerns about Providers' Clinical Care and Taking Adverse Privileging Actions Once a concern about a provider's clinical care is identified, VHA policy and guidance establish processes for VAMC officials to use to review the concern and determine whether an action should be taken against the provider's clinical privileges. VHA policy states that if allowing a provider under review to continue delivering patient care could result in imminent danger to veterans, VAMC officials should remove the provider from delivering patient care through a summary suspension of privileges. VAMC officials have flexibility to determine the most appropriate process to use to review a provider's clinical care depending on the specific concerns and the situation. These processes include the following: Focused professional practice evaluation (FPPE) for cause. This is a prospective review of the provider's care over a specified period of time, during which the provider has the opportunity to demonstrate improvement in the specific area of concern. Failure to improve could result in further review or action. Retrospective review. This is a review of the provider's delivery of patient care focused on a specific period of time in the past, a specific area of practice, or both, based on an identified concern. Comprehensive review. This is a more extensive retrospective review, generally performed by a panel of experts to ensure fairness and objectivity. In addition to reviewing the provider's past patient care, these reviews may also include interviews with the provider, patients, and staff.
These reviews generally result in conclusions about whether care delivered by the provider met the standard of care and may include recommendations about the provider's privileges. Once a review is completed, VAMC leadership officials and the VAMC credentialing committee make decisions about next steps, which could include the following: do nothing, if the review did not substantiate the concerns; conduct further review (such as an FPPE for cause to allow the provider an opportunity for improvement or a comprehensive review if more information is needed); or take an adverse privileging action, including limiting one or more privileges (such as prescribing medication or performing a certain procedure) or revoking all of the provider's privileges. If the VAMC's credentialing committee recommends an adverse privileging action, it is the VAMC director's responsibility to weigh all available information, including recommendations, and take an action. After a permanent provider is notified of the director's decision, the provider can appeal the decision to the Disciplinary Appeals Board as part of their due process rights. The adverse privileging action is considered final once the Disciplinary Appeals Board reaches a decision and the Deputy Under Secretary for Health executes the Board's decision. If a permanent provider does not make use of the offered due process procedures within 7 days, the provider waives his or her right to due process and the adverse privileging action is considered final. Reporting Providers to the NPDB and SLBs VHA policy requires VAMCs to alert certain entities if there are serious concerns with regard to a provider's clinical performance. VHA policy assigns reporting responsibility and authority to the VAMC director, who generally delegates the task of reporting to other VAMC officials. VHA makes this information available to other health care entities through two distinct reporting processes: NPDB. Under VHA policy, VAMC directors must report to the NPDB any adverse privileging action the facility takes that 1) affects the clinical privileges of a provider for a period longer than 30 days and 2) is related to professional incompetence or professional misconduct. VHA policy requires VAMCs to submit these NPDB adverse action reports within 15 calendar days of the date the adverse privileging action is made final—that is, when all applicable internal due process procedures have been completed and the VAMC director has signed off on the action. VAMC directors are also required to report to the NPDB providers who resign or retire while under investigation or in return for the VAMC not conducting such an investigation or proceeding. To avoid any errors in the facts of the report, the VAMC director must notify any provider who is about to be reported to the NPDB and give the provider an opportunity to discuss the content of the report before it is submitted. SLBs. VHA policy requires VAMC directors to report providers—both current and former employees—to any SLB where the providers hold an active medical license when there are serious concerns about the providers' clinical care. Specifically, VHA policy requires VAMCs to report providers who so substantially failed to meet generally accepted standards of clinical practice as to raise reasonable concern for the safety of patients. According to VHA policy and guidance, the SLB reporting process should be initiated as soon as it appears that a provider's behavior or clinical practice fails to meet accepted standards.
VAMC officials are directed not to wait until adverse privileging actions are taken to report to SLBs, because an SLB conducts its own investigation of the provider to determine whether licensure action is warranted. This reporting process comprises five stages as established in VHA policy, and VHA policy states that the process should be completed in around 100 days (see figure 1).

Performance Pay

Performance pay—a component of VA provider compensation—is an annual lump sum payment based on the extent to which an individual provider achieves specific goals. The goals may vary for providers across VA, at the same VAMC, or within a particular specialty. VA policy establishes minimum performance pay eligibility criteria, including being employed by VA from July 1 through September 30 of the fiscal year being reviewed.

Selected VAMCs' Reviews of Providers' Clinical Care Are Not Always Documented or Timely, and VHA Does Not Adequately Oversee These Reviews

Documentation frequently lacking. We found that the five selected VAMCs collectively required reviews of 148 providers' clinical care after concerns were raised from October 2013 through March 2017, but VAMC officials were unable to provide documentation that almost half of these reviews were conducted. We found that all five VAMCs lacked at least some documentation of the reviews they told us they conducted, and in some cases the required reviews were not conducted at all. We also found that VHA does not adequately oversee these reviews, as discussed later in this report.

FPPEs for cause. FPPEs for cause accounted for most of the missing documentation of clinical care reviews, despite VHA policy requiring VAMCs to document FPPEs for cause in the providers' files. Specifically, of the 112 providers for whom the selected VAMCs required FPPEs for cause from October 2013 through March 2017, the VAMCs were unable to provide documentation of the FPPEs for nearly a quarter (26) of the providers. Additionally, VAMC officials confirmed that FPPEs for cause that were required for another 21 providers were never conducted.

Other reviews. The selected VAMCs were also unable to provide documentation of some retrospective reviews. Specifically, of the 27 providers for whom the selected VAMCs conducted a retrospective review, 8 were missing documentation. While VHA guidance recommends that VAMCs document these reviews, VHA policy does not require that VAMCs document retrospective or comprehensive reviews. VHA officials told us that they expected VAMCs to document these types of reviews so that the information could be used to support adverse privileging actions, if necessary. Without clearly stated documentation requirements in VHA policy, VAMC officials inconsistently document their results, preventing VAMC directors and VISNs from properly evaluating the effectiveness of these retrospective and comprehensive reviews, which are used to, among other things, ensure patient safety. Additionally, we found that key officials from two VAMCs were not aware of the VHA guidance and that 5 of the 8 missing retrospective reviews were from these two VAMCs. We also found that one VAMC was missing documentation of clinical care reviews for 12 providers who met the VAMC's peer review trigger. In the absence of this documentation, we were unable to identify the type of reviews that were missing for these 12 providers.
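The gaps described above are, at bottom, a record-keeping problem: each required review should be traceable from the date the concern was raised to a documented, completed review. The following is a minimal illustrative sketch of such a tracker; it is not a description of any VHA or VAMC system, and the record fields, sample data, and 90-day flagging threshold are all assumptions (GAO used "more than 3 months" as its benchmark, and VHA policy sets no timeliness requirement):

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ReviewRecord:
    """One required review of a provider's clinical care (all fields illustrative)."""
    provider_id: str
    review_type: str                  # e.g., "FPPE for cause", "retrospective"
    concern_raised: date              # when the clinical care concern was identified
    review_initiated: Optional[date]  # None if the review was never started
    documentation_on_file: bool       # whether the completed review is documented

def flag_problems(records, max_delay_days=90):
    """Return records that are undocumented, never started, or initiated late.

    The 90-day threshold is an assumption for illustration only.
    """
    flagged = []
    for r in records:
        undocumented = not r.documentation_on_file
        never_started = r.review_initiated is None
        delayed = (r.review_initiated is not None and
                   (r.review_initiated - r.concern_raised).days > max_delay_days)
        if undocumented or never_started or delayed:
            flagged.append(r)
    return flagged

# Example: a review initiated 13 months after the concern, and a review that
# was never started and never documented, are both flagged.
records = [
    ReviewRecord("provider-1", "FPPE for cause", date(2014, 1, 15),
                 date(2015, 2, 20), documentation_on_file=True),
    ReviewRecord("provider-2", "retrospective", date(2014, 6, 1),
                 None, documentation_on_file=False),
]
assert len(flag_problems(records)) == 2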
The selected VAMCs' failure to document reviews of providers' clinical care after concerns were raised is inconsistent with federal internal control standards for monitoring and documentation, which state that management should conduct and document separate evaluations, when necessary. In the absence of VAMC documentation of such separate evaluations of providers, VAMC leadership officials lack key information needed to make decisions about whether providers' privileges are appropriate, and they also lack reasonable assurance that appropriate reviews are conducted.

Reviews not always timely. We found that the five selected VAMCs' reviews of providers' clinical care were not always conducted in a timely manner after concerns were raised. Specifically, of the 148 providers, the VAMCs' initiation of reviews of 16 providers' clinical care was delayed by more than 3 months, and in some cases for multiple years, after the concern was raised. At one VAMC, service chiefs were not instructed to conduct reviews of 14 providers until 4 to 13 months after these providers met the VAMC's peer review trigger. Before the service chiefs were notified of the concerns, 3 of these providers had at least one additional concerning episode of care—which peer reviewers judged would have been handled differently by most experienced providers—identified through the peer review process. As pointed out in VHA guidance, earlier intervention could prevent additional patients from receiving substandard care. Officials from another VAMC did not conduct retrospective reviews of 2 providers until we requested documentation of the reviews, approximately 3 and a half years after the credentialing committee had initially requested a review. While VHA officials told us that clinical care reviews should be conducted as expeditiously as reasonably possible, VHA policy does not specify a timeliness requirement. Allowing more time to elapse before a clinical care review is initiated weakens the intended purpose of these reviews and further increases risk to patient safety. Federal internal control standards for monitoring state that management should evaluate issues and remediate identified deficiencies in a timely manner. A clinical care concern may represent a deficiency in the delivery of medical care; by not establishing a policy that sets time frames for conducting clinical care reviews, VHA further increases this risk.

VHA oversight is inadequate. We also found that VHA does not adequately oversee VAMC reviews of providers' clinical care after concerns have been raised, including ensuring that these reviews are completed and documented in a timely manner. Under VHA policy, VISNs are responsible for overseeing the credentialing and privileging processes at their respective VAMCs. While reviews of providers' clinical care after concerns are raised are a component of credentialing and privileging, we found that the VISNs with responsibility for overseeing the selected VAMCs through routine audits do not include these reviews in their audits. While the standardized tool VHA requires the VISNs to use for these audits instructs the VISNs to identify and review providers who were on an FPPE for cause, none of the VISN officials we spoke with described any routine oversight of FPPEs or any other reviews of identified clinical care concerns. This may be in part because some VISN officials are not using VHA's standardized audit tool as required.
Officials from one VISN said they had developed their own audit tool, and officials from another VISN said that they were not conducting the audits due to multiple instances of turnover in a key position at the VISN. Further, VHA's standardized audit tool does not direct the VISNs to oversee any other types of reviews of clinical care concerns, such as retrospective or comprehensive reviews. The tool also does not require VISN officials to look at documentation of the FPPEs for cause; instead, it calls for reviewing credentialing committee meeting minutes. Without reviewing documentation, VISN officials would be unable to identify the incomplete documentation that we identified in our review. Both VHA and VISN officials described instances of assisting VAMC officials with reviews of providers' clinical care after concerns had been raised, but VHA and VISN officials told us that their involvement in these reviews is typically consultative and not routine. (For example, the VISN may assist by identifying providers outside of the VAMC to conduct the review.) As a result, VHA and the VISNs are not conducting routine oversight to ensure that VAMC reviews of providers' clinical care after concerns are raised are conducted appropriately, including adequately ensuring that the reviews are completed and documented in a timely manner, in accordance with VHA policy.

The lack of routine VHA oversight, through the VISNs, of VAMC reviews of providers' clinical care after concerns are raised is inconsistent with federal internal control standards for monitoring, which state that management should establish and operate monitoring activities. In the absence of routine monitoring of VAMCs' evaluations of providers after concerns have been raised, VHA lacks reasonable assurance that VAMCs adequately review all identified concerns about providers' clinical care and take appropriate privileging actions to ensure that VA is providing safe, high quality care for veterans.

Selected VAMCs Did Not Always Report Providers to the NPDB and SLBs in Accordance with VHA Policy, and VHA Does Not Adequately Oversee This Reporting

NPDB and SLB reporting not completed. We found that the five selected VAMCs did not report the majority of providers who should have been reported to the NPDB or SLBs in accordance with VHA policy. Our analysis shows that from October 2013 through March 2017, of the 148 providers whose clinical care required a review, the VAMCs took adverse privileging actions against 5 providers, and another 4 providers resigned or retired while under review but before an adverse privileging action could be taken. However, at the time of our review, we found that the five selected VAMCs had reported only 1 of these 9 providers to the NPDB and none of these providers to the SLBs. Furthermore, the 1 provider who was reported to the NPDB for an adverse privileging action was reported 136 days after all internal VA appeals were complete, far beyond the 15-day reporting requirement. In addition to these nine providers, one of the selected VAMCs terminated the services of four contract providers based on deficiencies in the providers' clinical performance, effectively revoking their clinical privileges. For example, the VAMC documented that one contractor's services were terminated for cause related to patient abuse after only 2 weeks of work at the VAMC.
A VAMC leadership official told us there was no further documentation of whether reporting was considered or whether any comprehensive review was conducted, despite the fact that the VAMC credentialing committee recommended both. While VHA policy identifies the requirements, steps, and limited fair hearing process for reporting contract providers, these required steps were not followed, and none of these providers were reported to the NPDB or SLBs. As a result of our audit work, in August 2017, one of the VAMCs reported to the NPDB three of the providers who had resigned or retired while under investigation but before an adverse privileging action could be taken. These reports were completed between 11 months and over 3 and a half years after the providers resigned or retired. VAMC officials could not confirm that they sent the required copies of the NPDB reports to the appropriate SLBs.

The five selected VAMCs did report two providers to their respective SLBs for reasons other than adverse privileging actions. In accordance with VHA policy, these SLB reports were made after VAMC officials determined that the providers' behavior or clinical practice so substantially failed to meet generally accepted standards of clinical practice as to raise reasonable concern for the safety of patients—the standard for SLB reporting. One of these providers could not have an adverse privileging action taken against them because VAMC officials unintentionally allowed the provider's privileges to expire during a comprehensive review of the provider's care. The other provider reported to the SLBs was considered for an adverse privileging action, but VAMC officials suspended the provider instead. The provider demonstrated improvement after the suspension.

SLB reporting not always timely. While two of the selected VAMCs had each reported a provider, we found that in these cases the SLB reporting process took significantly longer than the 100-day time frame suggested in VHA policy. Specifically, it took over 500 days for each of the two completed reports to pass initial and comprehensive review at the VAMC, receive concurrence from the VISN, and be submitted to the SLB. For example, one of the two providers self-reported to the SLB the concerning episode of care at the VAMC. However, before the VAMC submitted its SLB report 328 days later, the SLB had completed its investigation of the provider's self-report and put in place an agreement that placed restrictions and requirements on the provider's medical license. Subsequently, the provider successfully met the requirements of the agreement and had all restrictions on the license removed. Officials at two VAMCs told us the SLB reporting process is more tedious or cumbersome than the NPDB reporting process, making it difficult to complete in a timely manner. One VAMC official commented that while completing the process in less than a year seems reasonable, the typical time frame for submitting an SLB report is at least 2 years.

At the five selected VAMCs, we found that providers were not reported to the NPDB and relevant SLBs as required because officials were generally not familiar with, or misinterpreted, related VHA policies. VHA officials commented that adverse privileging actions and clinical care concerns rising to the level of reporting are infrequent, with officials at two VISNs estimating that only a few occur across the facilities within their network each year.
Staff at three VAMCs commented that there has been turnover in positions that have been delegated tasks related to reporting, and one VAMC official told us that turnover in these positions is a barrier to timely reporting. For example, at one facility, we found that officials failed to report six providers to the NPDB because the officials were unaware that they had been delegated responsibility for NPDB reporting.

Officials at two of the selected VAMCs told us that VHA cannot report contract providers to the NPDB. This assertion is inconsistent with VHA policy.

Officials at two of the selected VAMCs were waiting to start the SLB reporting process for providers until after all appeals had been exhausted. This approach is inconsistent with VHA policy, which states that the process should start within 7 days of when the reporting standard is met. For example, for one provider who was reported, VAMC officials unnecessarily waited 7 months for the completion of the appeals process before they resumed the reporting process, which ultimately took 547 days.

Officials at one VAMC did not report a provider to the NPDB or SLBs following an adverse privileging action because the SLB had found out about the issue independently. This is inconsistent with VHA policy for NPDB and SLB reporting, and as a result, the SLBs in other states where the provider held a license were not alerted to concerns about the provider's clinical practice.

VHA oversight is inadequate. We also found that VHA and the VISNs do not adequately oversee NPDB and SLB reporting, and they cannot ensure that VAMCs are reporting providers when required to do so by VHA policy. While the VISNs are responsible under VHA policy for overseeing credentialing and privileging at their respective VAMCs, VHA policy does not require the VISNs to oversee whether VAMCs are reporting providers to the NPDB or SLBs when warranted. As a result, VISN officials were unaware of situations in which VAMC directors failed to report providers to the NPDB, as evidenced by our review. In the case of reporting processes for SLBs, VISN officials told us that they review the evidence files to ensure, among other things, that the files are in compliance with privacy laws. However, officials told us that the VISNs do not oversee the reporting process to ensure that VAMC directors are reporting all providers to the SLBs who should be reported. Additionally, VHA officials told us that they are not aware of the number of cases that have been initiated for SLB reporting.

Further, by failing to report providers as required, VHA makes it easier for providers who have delivered substandard care to obtain privileges at another VAMC or at a hospital outside of VA's health care system, without an indication on their record that an adverse privileging action was taken against them or that they resigned or retired while under investigation. For example, we found that two of the four contract providers whose privileges were revoked and who were not reported to the NPDB or SLBs by one VAMC continue to be able to provide care to veterans outside of that VAMC. Specifically, one provider whose services were terminated related to patient abuse subsequently held privileges at another VAMC, while the other provider belongs to a network of providers that provides care for veterans in the community.
Seven of the 12 providers who were not reported to the NPDB or SLBs after their privileges were revoked—through adverse privileging actions or the termination of services under a contract—or after they resigned or retired while under investigation have current Medicare enrollment records, indicating that they are likely practicing outside of VA and may still be receiving federal dollars by billing for services provided to Medicare beneficiaries. We also identified one case where a VAMC director did not report a provider to the NPDB or SLBs after an agreement was reached that the provider would resign, even though the VAMC credentialing committee had recommended that the provider's privileges be revoked. We found that the provider's privileges were also revoked by a non-VA hospital in the same city for the same reason 2 years later. The director's decision not to report the provider as required left patients in that community vulnerable to adverse outcomes because problems with the provider's performance were not disclosed. There was no documentation of the reasons why the VAMC director did not report the provider to the NPDB or SLBs.

This lack of routine oversight from VHA, through the VISNs, of VAMCs' reporting of providers to the NPDB and SLBs is inconsistent with federal internal control standards for monitoring. The standards state that management should establish and operate monitoring activities to monitor the internal control system and appropriately remediate deficiencies on a timely basis. Without routine monitoring of the reporting process, VHA lacks reasonable assurance that all providers who should be reported to the NPDB and SLBs are reported.

At Selected VAMCs, Providers with Adverse Privileging Actions Were Ineligible for Performance Pay the Year the Actions Were Taken

None of the five providers who had an adverse privileging action taken against them in the period we reviewed received performance pay for the fiscal year the action was taken because they were ineligible, per VA policy. This is because VA policy requires providers to be employed through the end of the fiscal year to be eligible for performance pay, and none of the five providers we reviewed were still employed by the VAMCs at the end of the fiscal year in which the actions were taken. All five of the adverse privileging actions resulted from concerns about the providers' clinical care in previous fiscal years. Among the five providers, two received performance pay in the fiscal year before their privileges were revoked, and three did not. For example, one provider's privileges were revoked in 2015 due to concerns raised in 2014 regarding the provider's failure to complete necessary documentation of patient care in a timely manner. This provider did not receive credit for the performance pay goal directly related to timely completion of documentation, and ultimately the provider received half of the maximum amount of performance pay for fiscal year 2014. In the case of another provider who did not receive any performance pay for the fiscal year before the adverse privileging action was taken, VAMC officials noted that the provider had been removed from practice for a portion of the fiscal year while they were reviewing the clinical care concern and thus was unable to meet performance pay goals.

Conclusions

VHA is responsible for ensuring that providers at its VAMCs deliver safe care to veterans and that concerns that may arise about providers' clinical care are reviewed and addressed at VHA's 170 VAMCs.
However, our work shows that at our five selected VAMCs, reviews of concerns about providers' clinical care were not always documented or conducted in a timely manner, and the VAMCs had not reported the majority of providers they should have reported to the NPDB or SLBs. This is concerning for several reasons. First, without documentation of the reviews of these concerns about providers' clinical care, VAMC leadership officials may not have the information they need to make decisions about whether a provider's privileges at the VAMC are appropriate. Second, if VAMCs do not document that they have reviewed a provider's clinical care after concerns have been raised, VHA lacks reasonable assurance that the VAMCs are adequately addressing such concerns or that VAMCs are limiting or revoking providers' privileges when necessary. Third, if these reviews are not conducted in a timely manner and providers continue to deliver potentially substandard care, VHA may be increasing the risk that veterans will receive unsafe care at VAMCs. Finally, VAMCs' failure to report providers to the NPDB and SLBs, as required under VHA policy, makes it possible for providers to obtain privileges at other VAMCs or non-VA health care entities without disclosing the problems with their past performance. In effect, this can help shield the providers from professional accountability outside of VA's health care system.

Further, VHA's inadequate oversight of these processes calls into question the extent to which VAMCs are held accountable for ensuring that veterans receive safe, high quality care. As our review shows, the VISNs responsible for overseeing the five selected VAMCs do not routinely oversee VAMC reviews of providers' clinical care after concerns are raised to ensure that these reviews are completed in accordance with VHA policies; nor do the VISNs oversee the VAMCs to ensure that all providers who should be reported to the NPDB and SLBs are reported. Until VHA strengthens its oversight of these processes, veterans may be at increased risk of receiving unsafe care through the VA health care system.

Recommendations for Executive Action

We are making the following four recommendations to VA:

The Under Secretary for Health should specify in VHA policy that reviews of providers' clinical care after concerns have been raised should be documented, including retrospective and comprehensive reviews. (Recommendation 1)

The Under Secretary for Health should specify in VHA policy a timeliness requirement for initiating reviews of providers' clinical care after a concern has been raised. (Recommendation 2)

The Under Secretary for Health should require VISN officials to oversee VAMC reviews of providers' clinical care after concerns have been raised, including retrospective and comprehensive reviews, and ensure that VISN officials are conducting such oversight with the required standardized audit tool. This oversight should include reviewing documentation in order to ensure that these reviews are documented appropriately and conducted in a timely manner. (Recommendation 3)

The Under Secretary for Health should require VISN officials to establish a process for overseeing VAMCs to ensure that they are reporting providers to the NPDB and SLBs, and are reporting in a timely manner. (Recommendation 4)

Agency Comments

We provided a draft of this report to VA for comment. In its written comments, which are reproduced in appendix I, VA agreed with our conclusions and concurred with our recommendations.
In its comments, VA stated that VHA plans to revise existing policy to require documentation of reviews of providers' clinical care after concerns have been raised and to establish expected time frames for completing such reviews. VA estimates that it will complete these actions by September 2018. VA also stated that VHA will update the standardized audit tool used by the VISNs so that it directs them to oversee reviews of providers' clinical care after concerns have been raised and to ensure timely reporting to the NPDB and SLBs. According to VA, the revised tool will also facilitate aggregate reporting by VISNs to identify trends and issues. VA estimates that it will complete these actions by October 2018.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Veterans Affairs, and the Under Secretary for Health. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact Sharon M. Silas at (202) 512-7114 or silass@gao.gov or Randall B. Williamson at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Veterans Affairs

Appendix II: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contact named above, Marcia A. Mann (Assistant Director), Kaitlin M. McConnell (Analyst-in-Charge), and Summar C. Corley made major contributions to this report. Also contributing were Krister Friday, Jacquelyn Hamilton, Vikki Porter, and Brienne Tierney.
Why GAO Did This Study

Nearly 40,000 providers hold privileges in VHA's 170 VAMCs. VAMCs must identify and review any concerns that arise about the clinical care their providers deliver. Depending on the findings from the review, VAMC officials may take an adverse privileging action against a provider that either limits the care a provider is allowed to deliver at the VAMC or prevents the provider from delivering care altogether.

GAO was asked to review VHA processes for reviewing concerns about providers' clinical care. This report examines, among other things, selected VAMCs' (1) reviews of providers' clinical care after concerns are raised and VHA's oversight of these reviews, and (2) VAMCs' reporting of providers to the NPDB and SLBs and VHA's oversight of reporting. GAO visited a non-generalizable selection of five VAMCs selected for the complexity of services offered and variation in location. GAO reviewed VHA policies and files from the five selected VAMCs, and interviewed VHA, VISN, and VAMC officials. GAO also evaluated VHA's practices using federal internal control standards.

What GAO Found

Department of Veterans Affairs (VA) medical center (VAMC) officials are responsible for reviewing the clinical care delivered by their privileged providers—physicians and dentists who are approved to independently perform specific services—after concerns are raised. The five VAMCs GAO selected for review collectively required review of 148 providers from October 2013 through March 2017 after concerns were raised about their clinical care. GAO found that these reviews were not always documented or conducted in a timely manner. GAO identified these providers by reviewing meeting minutes from the committee responsible for requiring these types of reviews at the respective VAMCs, and through interviews with VAMC officials. The selected VAMCs were unable to provide documentation of these reviews for almost half of the 148 providers. Additionally, the VAMCs did not start the reviews of 16 providers for 3 months to multiple years after the concerns were identified. GAO found that VHA policies do not require documentation of all types of clinical care reviews and do not establish timeliness requirements. GAO also found that the Veterans Health Administration (VHA) does not adequately oversee these reviews at VAMCs through its Veterans Integrated Service Networks (VISN), which are responsible for overseeing the VAMCs. Without documentation and timely reviews of providers' clinical care, VAMC officials may lack information needed to reasonably ensure that VA providers are competent to provide safe, high quality care to veterans and to make appropriate decisions about these providers' privileges.

GAO also found that from October 2013 through March 2017, the five selected VAMCs did not report most of the providers who should have been reported to the National Practitioner Data Bank (NPDB) or state licensing boards (SLB) in accordance with VHA policy. The NPDB is an electronic repository for critical information about the professional conduct and competence of providers. GAO found that selected VAMCs did not report to the NPDB eight of nine providers who had adverse privileging actions taken against them or who resigned during an investigation related to professional competence or conduct, as required by VHA policy, and none of these nine providers had been reported to SLBs.
GAO found that officials at the selected VAMCs misinterpreted or were not aware of VHA policies and guidance related to NPDB and SLB reporting processes, resulting in providers not being reported. GAO also found that VHA and the VISNs do not conduct adequate oversight of NPDB and SLB reporting practices and cannot reasonably ensure appropriate reporting of providers. As a result, VHA's ability to provide safe, high quality care to veterans is hindered because other VAMCs, as well as non-VA health care entities, will be unaware of serious concerns raised about a provider's care. For example, GAO found that after one VAMC failed to report to the NPDB or SLBs a provider who resigned to avoid an adverse privileging action, a non-VA hospital in the same city took an adverse privileging action against that same provider for the same reason 2 years later.

What GAO Recommends

GAO is making four recommendations, including for VA to direct VHA to require VAMCs to document reviews of providers' clinical care after concerns are raised, develop timeliness requirements for these reviews, and ensure proper VISN oversight of such reviews as well as timely VAMC reporting of providers to the NPDB and SLBs. VA concurred with GAO's recommendations and described steps it will take to implement them.
Background

Since 1990, generally every 2 years at the start of a new Congress, we call attention to agencies and program areas that are high risk due to their vulnerability to mismanagement or that are most in need of transformation. Our high-risk program is intended to help inform the congressional oversight agenda and to improve government performance. Since 1990, a total of 61 different areas have appeared on the High-Risk List. Of these, 24 areas have been removed, and 2 areas have been consolidated. On average, the high-risk areas that were removed from the list had been on it for 9 years.

Our experience with the High-Risk List over the past 25 years has shown that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency leaders grounded in the five criteria for removing high-risk designations, which we reported on in November 2000. When legislative and agency actions, including those in response to our recommendations, result in significant progress toward resolving a high-risk problem, we will remove the high-risk designation. However, implementing our recommendations alone will not result in the removal of the designation, because the condition that led to the recommendations is symptomatic of systemic management weaknesses. In cases in which we remove the high-risk designation, we continue to closely monitor the areas. If significant problems again arise, we will consider reapplying the high-risk designation.

The five criteria for removing high-risk designations are:

Leadership commitment. Demonstrated strong commitment and top leadership support to address the risks.

Capacity. The agency has the capacity (i.e., people and other resources) to resolve the risk(s).

Action plan. A corrective action plan exists that defines the root causes, identifies effective solutions, and provides for substantially completing corrective measures in the near term, including steps necessary to implement solutions we recommended.

Monitoring. A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.

Demonstrated progress. Ability to demonstrate progress in implementing corrective measures and in resolving the high-risk area.

These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, and satisfying all of the criteria is central to removal from the list. Figure 1 shows the five criteria for removal for a designated high-risk area and examples of agency actions leading to progress toward removal. Importantly, the actions listed are not "stand alone" efforts taken in isolation of other actions to address high-risk issues. That is, actions taken under one criterion may be important to meeting other criteria as well. For example, top leadership can demonstrate its commitment by establishing a corrective action plan, including long-term priorities and goals to address the high-risk issue, and by using data to gauge progress—actions that are also vital to addressing the action plan and monitoring criteria. When an agency meets all five of these criteria, we can remove the agency from the High-Risk List.

We rate agency progress on the criteria using the following definitions:

Met. Actions have been taken that meet the criterion. There are no significant actions that need to be taken to further address this criterion.

Partially Met. Some, but not all, actions necessary to meet the criterion have been taken.

Not Met. Few, if any, actions toward meeting the criterion have been taken.

Agencies Made Some Progress Addressing the Management Weaknesses That Led to the 2017 High-Risk Designation

Officials from Indian Affairs, BIE, BIA, and IHS expressed their commitment to addressing the issues that led to the high-risk designation for federal management of programs that serve tribes and their members. Since we last testified before this committee on September 13, 2017, we met with agency leaders and worked with each agency to identify actions the agencies took or plan to take to address the concerns that contributed to the designation. We determined that Indian Affairs, BIE, BIA, and IHS demonstrated varying levels of progress to partially meet most or all of the criteria for removing a high-risk designation. However, additional progress is needed for the agencies to fully address the criteria and related management weaknesses, particularly in the areas of leadership commitment and capacity.

Leadership Commitment

To meet the leadership commitment criterion for removal of a high-risk designation, an agency needs to have demonstrated strong commitment and top leadership support to address management weaknesses. The following examples show actions Indian Affairs, BIE, BIA, and IHS took to partially meet the leadership commitment criterion.

Education. Indian Affairs' leaders have demonstrated commitment to addressing key weaknesses in the management of BIE schools in several ways. For example, the BIE Director formed an internal working group, convened meetings with other senior leaders within Indian Affairs, and publicly stated that his agency is committed to ensuring implementation of our recommendations on Indian education. In addition, the BIE Director and other Indian Affairs leaders and senior managers have met with us frequently to discuss outstanding recommendations, actions they have taken to address these recommendations, and additional actions they could take. In particular, the BIE Director met with us on nine occasions over the past year to discuss our recommendations and instructed his staff to provide us draft policies and procedures related to our recommendations. However, it is important that Indian Affairs leaders be able to sustain this level of commitment to solving problems in Indian education. Since 2012, there have been six Assistant Secretaries of Indian Affairs and five BIE Directors. There has also been leadership turnover in other key offices responsible for implementing our recommendations on Indian education. We have previously reported that leadership turnover hampered Indian Affairs' efforts to make improvements to Indian education. We believe that ensuring stable leadership and a sustained focus on needed changes is vital to the successful management of BIE schools.

Energy. BIA officials demonstrated leadership commitment by, for example, issuing a memorandum requiring all regions and their agency offices to use a centralized data management system to track requests for land title status reports. Using this type of centralized approach for tracking such requests may improve BIA's ability to provide needed oversight of federal actions associated with energy development and ensure documents needed for the development of energy resources are provided in a timely manner.
In addition, BIA officials frequently met with us over the last 9 months to discuss the bureau's progress in addressing recommendations related to Indian energy. However, Indian Affairs does not have a permanent Assistant Secretary. BIA does not have a permanent Director, and BIA's Office of Trust Services—which has significant responsibility over Indian energy activities—does not have a permanent Director or Deputy Director. We have seen turnover in these leadership positions as officials have been brought in to temporarily fill these roles. As officials are brought in temporarily, previously identified plans and time frames for completing some activities have changed, and BIA has found itself starting over to identify or implement corrective actions.

Health Care. IHS officials demonstrated leadership commitment by regularly meeting with us to discuss the agency's progress in addressing our recommendations. IHS has continued to implement its Quality Framework by acquiring a software system to centralize the credentialing of clinical providers, developing a patient experience of care survey, and developing standards for limiting patient wait time. However, IHS still does not have permanent leadership—including a Director of IHS—which is necessary for the agency to demonstrate its commitment to improvement. Since 2012, there have been five IHS Acting Directors, and there has been leadership turnover in other key positions, such as area directors. For example, in January 2017 we reported that officials from four of the nine area offices in our review reported that they had at least three area directors over the prior 5 years. We also reported that inconsistent area office and health care facility leadership is detrimental to the oversight of facility operations and the supervision of personnel.

To fully meet the leadership commitment criterion, all agencies will need, among other things, stable, permanent leadership that has assigned the tasks needed to address weaknesses and that holds those assigned accountable for progress. For a timeline of senior leadership turnover in Indian Affairs, BIE, BIA, and IHS from 2012 through 2018, see figure 2.

Capacity

To meet the capacity criterion, an agency needs to demonstrate that it has the capacity (i.e., people and other resources) to resolve its management weaknesses. Indian Affairs, BIE, BIA, and IHS each made some progress in identifying capacity and resources to implement some of our recommendations, but BIA officials reported to us that the agency does not have the people and resources needed to fully implement other recommendations. The following examples show actions Indian Affairs, BIE, BIA, and IHS took to partially meet the capacity criterion.

Education. BIE and other Indian Affairs offices that support BIE schools have made some progress in demonstrating capacity to address risks to Indian education. For example, BIE hired a full-time program analyst to coordinate its working group and help oversee the implementation of our recommendations on Indian education. This official has played a key role in coordinating the agency's implementation efforts and has provided us with regular updates on the status of these efforts. BIE has also conducted hiring in various offices in recent years as part of a 2014 Secretarial Order to reorganize the bureau. For example, it has hired school safety officers and personnel in offices supporting the oversight of school spending.
However, about 50 percent of all BIE positions have not been filled, including new positions that have been added as a result of the agency's restructuring, according to a BIE official. Moreover, agency officials told us that vacancies remain in several key positions, including the Chief Academic Officer and the Associate Deputy Director for Bureau Operated Schools. Furthermore, BIE and other Indian Affairs offices that support BIE schools have not developed a workforce plan to address staffing and training gaps for key staff, which we previously recommended. Such a plan is important to allow BIE and other Indian Affairs offices to better understand workforce needs and leverage resources to meet them. BIE officials told us they have held workforce planning sessions and anticipate completing work on our recommendation to develop a workforce plan at the end of 2018.

Energy. In November 2016, we recommended that BIA establish a documented process for assessing the workforce at its agency offices. BIA has taken a number of actions, such as conducting an internal survey to identify general workforce needs related to oil and gas development. This survey information supported staffing decisions for the recently created Indian Energy Service Center. However, BIA officials told us the bureau does not have the staff or resources to implement a comprehensive workforce planning system that would be needed to ensure it has staff in place to meet its organizational needs.

Health Care. IHS has made some progress in demonstrating it has the capacity and resources necessary to address the program risks we identified in our reports. For example, IHS officials stated that the agency is expanding the role of internal audit staff within its enterprise risk management program to augment internal audits and complement audits by the HHS Inspector General and GAO. However, according to IHS, there are still vacancies in several key positions, including the Director of the Office of Resource Access and Partnerships and the Director of the Office of Finance and Accounting.

To fully meet the capacity criterion, all of the agencies need to assess tradeoffs between these and other administration priorities in terms of people and resources, and the agencies should provide decision makers with key information on resources needed to address management weaknesses.

Action Plan

To meet the action plan criterion, an agency needs to have a corrective action plan that defines the root causes, identifies effective solutions, and provides for substantially completing corrective measures in the near term, including steps necessary to implement the solutions we recommended. Indian Affairs, BIE, BIA, and IHS have shown progress in identifying actions to address many of our recommendations—leading us to believe they can partially meet the action plan criterion before our next update of the High-Risk List. For example:

Education. BIE has taken several steps to develop action plans to address management weaknesses. For example, BIE implemented a new policy for overseeing BIE school spending, including developing written procedures and risk criteria for monitoring school expenditures. BIE also developed a strategic plan, which we recommended in September 2013. The plan provides the agency with goals and strategies for improving its management and oversight of Indian education, and establishes detailed actions and milestones for implementation.
BIE notified us that it has completed the plan and expects to publish it on June 11, 2018, and will begin implementation starting in July 2018. We will review the strategic plan once it has been published. In addition, Indian Affairs' Office of Facilities, Property & Safety Management has developed and implemented revised comprehensive guidelines that addressed several of our findings on weaknesses in BIE school safety identified in our March 2016 report. However, Indian Affairs has not provided us with evidence that it has developed and put in place action plans on other important issues, such as a comprehensive, long-term capital asset plan to inform its allocation of school construction funds, which we recommended in May 2017.

Energy. BIA officials met with us several times over the past few months to discuss planned actions for addressing management weaknesses related to Indian energy resources, and they identified actions they have taken to help implement some of our recommendations. For instance, BIA officials told us they have proposed several modifications to the bureau's land records data management system that will enable increased tracking and monitoring of key documents that BIA must review prior to the development of Indian energy resources. BIA officials we met with have demonstrated an understanding that addressing long-standing management weaknesses is not accomplished through a single action but through comprehensive planning and continued movement toward a goal. However, the agency does not have a comprehensive plan to address the root causes of all identified management shortcomings.

Health Care. Senior leaders in IHS have prioritized addressing our recommendations by implementing four recommendations we highlighted in our February 2017 update to the High-Risk List. IHS incorporated our recommendations into its risk management work plan starting in 2017, and according to IHS officials, they will annually review the effectiveness of the agency's internal controls and, where controls are deemed insufficient, take actions to strengthen them. IHS officials we met with have demonstrated an understanding that addressing long-standing management weaknesses requires that they develop a corrective action plan that defines root causes, identifies solutions, and provides for substantially completing corrective measures. However, agency officials have not yet developed a corrective action plan.

To fully meet the action plan criterion, a comprehensive plan that identifies actions to address the root causes of management shortcomings would have to come from top leadership, with a commitment to provide sufficient capacity and resources to take the necessary actions to address management shortcomings and risks.

Monitoring

To meet the monitoring criterion, an agency needs to demonstrate that a program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures. For example, agencies can demonstrate that they have a systematic way to track performance measures and progress against goals identified in their action plans. We have been working with the agencies to help clarify the need to establish a framework for monitoring progress that includes goals and performance measures to track their efforts and ultimately verify the effectiveness of their efforts.
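To make the kind of monitoring framework described above concrete, the sketch below pairs each corrective action with a goal, a performance measure, and a status, then tallies progress. It is purely illustrative; the record fields, status values, and sample entries are assumptions, not elements of any agency system or of GAO's rating methodology:

from dataclasses import dataclass
from collections import Counter

@dataclass
class CorrectiveAction:
    """One corrective action tied to a goal and a measurable target (illustrative)."""
    action: str
    goal: str
    measure: str   # how progress is gauged
    status: str    # e.g., "met", "partially met", or "not met"

def summarize(actions):
    """Tally actions by status so leadership can see progress at a glance."""
    return Counter(a.status for a in actions)

plan = [
    CorrectiveAction("Fill key leadership vacancies", "Stable, permanent leadership",
                     "share of key positions permanently filled", "partially met"),
    CorrectiveAction("Complete workforce plan", "Staff in place to meet needs",
                     "plan published and staffing gaps closed", "not met"),
]
print(summarize(plan))  # Counter({'partially met': 1, 'not met': 1})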
BIA and IHS made progress in holding frequent review meetings to assess the status of implementing our recommendations but have not yet taken the steps needed to monitor their progress in addressing the root causes of their management weaknesses. In addition, Indian Affairs has made some progress in meeting the monitoring criterion on Indian education. For example, the agency has implemented a plan to monitor the effectiveness of corrective measures to address school safety program weaknesses. However, the agency has not yet demonstrated that it is monitoring other areas, such as showing that it is using safety program outcomes to evaluate and manage the performance of regional safety inspectors. To fully meet the monitoring criterion, the agencies need to set up goals and performance measures as they develop action plans and take further actions to monitor the effectiveness of actions to address root causes of identified management shortcomings.

Demonstrated Progress

To meet the demonstrated progress criterion, an agency needs to demonstrate progress in implementing corrective measures and in resolving the high-risk area. We made 52 recommendations to improve management weaknesses at Indian Affairs, BIE, BIA, and IHS, of which 34 are still open. Since our testimony in September 2017, we found that Indian Affairs has made significant progress in implementing corrective actions in education, as demonstrated by our closure of nearly a third of our recommendations directed to Indian Affairs related to education programs. We found that BIA and IHS also made some progress in implementing corrective actions related to the management of energy resources and health care programs. Specifically, since our testimony in September 2017, BIA took actions resulting in the implementation of 2 of 14 recommendations, and IHS took actions that resulted in the implementation of four recommendations. The following examples show actions Indian Affairs, BIA, and IHS took to partially meet the demonstrated progress criterion.

Education. As of early June 2018, Indian Affairs had fully addressed 8 of the 23 outstanding education recommendations we identified in our September 2017 testimony, and we have closed them. BIE implemented half of the closed recommendations, including 2 on oversight of BIE school spending identified as high priority in a March 2018 letter from the Comptroller General to the Secretary of the Interior. The rest of the recommendations we closed were implemented by personnel in Indian Affairs' Office of Facilities, Property & Safety Management and related to oversight of school safety and construction. Overall, Indian Affairs' efforts since we issued our High-Risk List update in February 2017 represent a significant increase in activity implementing our recommendations. Substantial work, however, remains to address our outstanding recommendations in several key areas, such as accountability for BIE school safety and school construction projects. For example, BIA has reported taking some actions to address recommendations in our May 2017 report on improving accountability of its safety employees who inspect BIE schools. However, it has not provided us with documentation of these actions.

Energy. In June 2015, we recommended that BIA take steps to improve its geographic information system (GIS) capabilities to ensure it can verify land ownership in a timely manner.
Since our last update in September 2017, BIA has made significant progress in enhancing its GIS capabilities by integrating map-viewing technology and capabilities into its land management data system. In addition, we recommended that BIA take steps to identify cadastral survey needs. BIA's enhanced map-viewing technology also allows the bureau to identify land boundary discrepancies, which can then be researched and corrected. Further, BIA identified unmet survey needs that were contained within the defunct cadastral request system and developed a new mechanism for its regions and agency offices to make survey requests. We believe these actions show significant progress in addressing management weaknesses associated with data limitations and outdated technology.

Health Care. In April 2013, we recommended that IHS monitor patient access to physician and other nonhospital care to assess how capped payment rates may benefit or impede the availability of care. In response to our recommendation, IHS developed an online tracking tool that enables the agency to document providers that refuse to contract for lower rates. In October 2017, IHS officials met in person with us and provided a demonstration of the tracking tool.

To fully meet the demonstrated progress criterion, agencies need to continue taking actions to ensure sustained progress and show that management shortcomings are being effectively managed and root causes are being addressed.

In conclusion, we see some progress in all of the criteria, including leadership commitment, at all agencies, especially related to education programs. However, permanent leadership that provides continuing oversight and accountability is needed. We also see varying levels of progress at all of the agencies in understanding what they need to do to be removed from the High-Risk List by identifying steps that can be incorporated into corrective action plans to address most recommendations. We look forward to working with the agencies to track their progress in implementing a framework for monitoring and validating the effectiveness of planned corrective actions. In addition, all the agencies have made progress in implementing some key recommendations. Perhaps the biggest challenge for the agencies will be achieving the capacity and identifying the resources required to address the deficiencies in their programs and activities. This challenge cannot be overcome by the agencies without a commitment from the administration to prioritize fixing management weaknesses in programs and activities that serve tribes and their members.

Chairman Hoeven, Vice Chairman Udall, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about education issues in this testimony or the related reports, please contact Melissa Emrey-Arras at (617) 788-0534 or emreyarrasm@gao.gov. For questions about energy resource development, please contact Frank Rusco at (202) 512-3841 or ruscof@gao.gov. For questions about health care, please contact Jessica Farb at (202) 512-7114 or farbj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement include Christine Kehr (Assistant Director), Jay Spaan (Analyst-in-Charge), Edward Bodine, Kelly DeMots, William Gerard, Greg Marchand, Elizabeth Sirois, and Kiki Theodoropoulos.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

GAO's High-Risk List identifies federal program areas that are high risk due to their vulnerability to mismanagement, among other things. GAO added the federal management of programs that serve Indian tribes and their members to its February 2017 biennial update of high-risk areas in response to management weaknesses at Interior and HHS.

This testimony provides examples of actions taken and progress made by these agencies to address the five criteria GAO uses for determining whether to remove a high-risk designation (leadership commitment, capacity, action plan, monitoring, and demonstrated progress). To conduct this work, GAO drew on findings from GAO reports issued from September 2011 through September 2017 and updated that work by reviewing agency documentation and interviewing agency officials.

What GAO Found

GAO designated the federal management of programs that serve tribes and their members as high risk, and officials from the Department of the Interior's Office of the Assistant Secretary-Indian Affairs (Indian Affairs), the Bureau of Indian Education (BIE), the Bureau of Indian Affairs (BIA), and the Department of Health and Human Services' (HHS) Indian Health Service (IHS) expressed their commitment to addressing the issues that led to the designation. Since GAO last testified before this committee on September 13, 2017, Indian Affairs, BIE, BIA, and IHS have demonstrated varying levels of progress to partially meet most or all of the criteria for removing a high-risk designation. However, additional progress is needed to fully address management weaknesses, particularly in the areas of leadership commitment and capacity.

Leadership commitment. To meet the leadership commitment criterion for removal of a high-risk designation, the agency needs to have demonstrated strong commitment and top leadership support to address management weaknesses. Indian Affairs, BIE, BIA, and IHS each took some actions to partially meet the leadership criterion. For example, the BIE Director formed an internal working group, convened meetings with other senior leaders within Indian Affairs, and publicly stated that his agency is committed to ensuring the implementation of prior GAO recommendations on Indian education. In addition, BIA officials demonstrated leadership commitment by, for example, issuing a memorandum requiring the use of a centralized data management system to track requests for land ownership records. To fully meet the leadership commitment criterion, all the agencies need, among other things, stable, permanent leadership that has assigned the tasks needed to address weaknesses and that holds those assigned accountable for progress.

Capacity. To meet the capacity criterion, an agency needs to demonstrate that it has the capacity (i.e., people and other resources) to resolve its management weaknesses. Indian Affairs, BIE, BIA, and IHS each made progress identifying capacity and resources to partially meet the capacity criterion. For example, BIE hired school safety officers and personnel in offices supporting the oversight of school spending. BIA conducted a survey to identify workforce needs related to energy development to support staffing decisions for the recently created Indian Energy Service Center. IHS officials told us that the agency is expanding the role of internal audit staff within its enterprise risk management program to augment internal audits and complement audits by the HHS Inspector General and GAO.
However, all the agencies have vacancies in key offices. For example, BIA officials said the agency does not have the staff or resources to implement a comprehensive workforce planning system to ensure it has staff in place at its agency offices to meet its organizational needs concerning numerous activities, including energy resources. To fully meet the capacity criterion, all the agencies need to assess tradeoffs between these and other administration priorities in terms of people and resources, and should provide key information to decision makers on resources needed to address the criteria and related management weaknesses. What GAO Recommends GAO has made 52 recommendations to improve management weaknesses at some Interior and HHS agencies, of which 34 are still open. Some of these weaknesses led to the agencies' placement on the High Risk List. GAO sees varying levels of progress at the agencies in understanding what they need to do to be removed from the list and will continue to closely monitor their progress.
Background Misconduct can occur in any workplace. When employee misconduct happens, an agency may incur a number of direct and indirect costs depending on how the agency chooses to address misconduct. For the agency, direct costs can mean potentially significant time and resource investments, including investigations, adversarial relationships between management and the employee, costs to the agency as a result of the misconduct committed (e.g., time and attendance or credit card fraud), and reduced employee engagement. The subject-matter experts we interviewed told us that, based on their experiences, the time it takes to address a case of misconduct may range from a couple of weeks to years. The time range depends on whether the employee appeals their case and other factors. Agencies may also incur litigation expenses if an employee decides to appeal an adverse action. While there are costs to addressing misconduct, agencies also incur indirect costs when misconduct goes unaddressed in the workplace. These indirect costs include corrosive effects on other employees' morale, higher employee turnover, reduced productivity, and lower employee commitment to their work or agency. Indirect costs also include redirecting management's attention away from achieving the agency's mission. Employee misconduct in the federal government is regulated by a well-developed body of statutes and regulations as well as decisions from the MSPB, the U.S. Court of Appeals for the Federal Circuit, and the Supreme Court. While there is no general definition of the term "employee misconduct" in a statute or government-wide regulation, Standards of Ethical Conduct are prescribed by the Office of Government Ethics at 5 CFR Part 2635, and agencies may also elaborate on types of misconduct in handbooks, tables of penalties (listings of some of the most common offenses with recommended ranges of penalties), and other internal guidance. There is also a large body of MSPB case law addressing discipline for employee misconduct in the federal government that contains criteria for various forms of misconduct, such as "insubordination," "excessive absence," and "misuse of government property." According to OPM officials, there are instances in law and regulation where types of misconduct are referenced concerning appointment into the competitive service. Chapter 73 of Title 5 (Suitability, Security, and Conduct) addresses certain types of misconduct of executive branch employees. Generally speaking, an employee's violation of an agency's regulation or policy may cause the agency to take disciplinary or corrective action. Ultimately, if an agency needs to take an adverse action for inappropriate workplace behavior, it must do so "for such cause as will promote the efficiency of the service," as provided for in Title 5, Chapter 75. OPM has also prescribed some regulations on employee responsibilities and conduct. One of the nine Merit System Principles set forth by the Civil Service Reform Act of 1978 that govern the management of the federal workforce states that federal employees "should maintain high standards of integrity, conduct, and concern for the public interest." According to MSPB, when there is misconduct by a federal employee, management's goal should be to either persuade the employee to behave properly or to remove the employee if the conduct is serious enough. Moreover, OPM maintains that supervisors have a responsibility to set clear rules and expectations for employees in the workplace.
It is imperative that federal agencies manage their workforces effectively, which includes the effective use of discipline when addressing employee misconduct. In addition, employees may be disciplined for conduct that they knew or should have known was unacceptable. Similarly, federal executive branch employees have a responsibility to adhere to principles of ethical conduct and should avoid any actions that appear to violate the law or ethical standards. Overall, the objective of discipline is to deal with employees who are unwilling or unable to behave properly, and, where management deems it possible and appropriate, correct deficiencies in employee conduct. When management decides to take an action short of removal, discipline can deter misconduct and correct situations interfering with productivity. Conduct-based actions are important tools designed to aid supervisors in maintaining an efficient and orderly work environment.

Agencies Must Follow Statutory and Regulatory Procedures When Taking Adverse Actions for Employee Misconduct Under Chapter 75 of Title 5 Most agencies are required to adhere to formal, statutorily established guidelines under Chapter 75 when taking adverse actions against an employee for misconduct. Chapter 75 of Title 5 includes two subchapters that outline the requirements for (1) non-appealable adverse actions such as suspensions of 14 days or less or (2) appealable adverse actions such as reductions in pay or grade, suspensions of more than 14 days, and removals (see figure 1). Subchapter I actions, covered by 5 U.S.C. §§ 7501-7504, are referred to as "non-appealable actions," while Subchapter II actions, covered by 5 U.S.C. §§ 7511-7514, are referred to as "appealable actions," based on whether or not they can be appealed to the MSPB. According to an MSPB report, through the Civil Service Reform Act of 1978 (CSRA), "Congress sought to ensure that agencies could remove employees who engage in misconduct while protecting the civil service from the harmful effects of management acting for improper reasons, such as discrimination or retaliation for whistleblowing." OPM regulations specify the process agencies must pursue to take adverse actions. These regulations also specify the procedural and appeal rights to which employees facing adverse actions are entitled. According to OPM officials, agency policies usually cover lesser disciplinary actions, such as oral and written reprimands, letters of warning, and letters of counseling. Employees may grieve these actions depending on the agency's administrative or negotiated grievance processes. According to OPM, agencies may issue these actions without following the procedural requirements for adverse actions under 5 U.S.C. Chapter 75. The procedural rights due to employees subject to adverse actions covered by Chapter 75 are derived both from Chapter 75 and from the U.S. Constitution. In 1985, the U.S. Supreme Court held that tenured or post-probationary public employees who may be terminated only for cause have a constitutional property interest in continued employment and cannot be deprived of their jobs without due process of law. The process that was due in that case was notice of the proposed removal before it occurred and the opportunity to present reasons why the proposed action should not be taken.
Chapter 75 and OPM regulations promulgated thereunder establish additional procedural requirements extending to actions other than removal that go beyond what the Due Process Clause of the U.S. Constitution would itself require under current precedent. For example, they require that employees be given advance notice of a suspension or a reduction in pay with the opportunity to respond in writing with supporting affidavits.

Subchapter I of Chapter 75 and Corresponding Regulations Describe Procedures for a Suspension of 14 Days or Less An agency may take an adverse action under Subchapter I of Chapter 75 only for such cause as will promote the efficiency of the service. When proposing to suspend an employee for 14 days or less, an agency must give the employee advance written notice stating the reasons for the proposed suspension. The agency must also inform the employee of his or her right to review the material relied on to support the reasons for the action. The agency must give the employee a reasonable time (no less than 24 hours) to answer orally and in writing, to furnish affidavits and other documentary evidence in support of the answer, and to be represented by an attorney or other representative. Lastly, the agency is to give the employee a written decision with the specific reasons for the suspension on or before the effective date of the action. An employee may challenge a suspension of 14 days or less through an agency administrative grievance procedure, if applicable. If the employee is represented by a union with a collective bargaining agreement (CBA) with the agency that includes an applicable negotiated grievance procedure, the employee may challenge the suspension only under the CBA, unless the employee is alleging that the suspension was discriminatory. If the employee wishes to challenge the suspension as discriminatory or retaliatory under the EEO laws, the employee may file an EEO complaint with the agency followed by a request for a hearing with the EEOC. The employee may also file a complaint with the OSC and then, if necessary, an Individual Right of Action appeal with the MSPB, to assert that the suspension was in retaliation for the employee's whistleblower activity.

Subchapter II of Chapter 75 Describes Steps Agencies Must Take to Address More Significant Cases of Employee Misconduct Than Those Addressed Under Subchapter I Subchapter II of Chapter 75 addresses the steps agencies must follow to take the four adverse actions listed below, which are referred to as appealable adverse actions: suspensions longer than 14 days; reductions in grade; reductions in pay; and removals. An agency may take an adverse action under Subchapter II only for such cause as will promote the efficiency of the service. Subchapter II and OPM regulations contain more extensive procedural requirements for removals, reductions in pay or grade, and suspensions of over 14 days. The employee is entitled to at least 30 days advance written notice of the proposed action, unless there is reasonable cause to believe the employee has committed a crime for which a sentence of imprisonment may be imposed. The notice must state the reasons for the action and inform the employee of their right to review the material on which the reasons stated in the notice are based.
Agencies typically provide the material supporting the proposal to the employee with the notice. The proposal is usually prepared by the employee's supervisor in consultation with human resources and, sometimes, the agency's legal staff. The agency must give the employee a reasonable amount—no less than 7 days—of official time to review the supporting material, to prepare an answer orally and in writing, and to furnish affidavits and other documentary evidence in support of the answer. According to OPM and MSPB officials, normally the agency designates an official other than the person who proposes the adverse action to review the employee's response and make the decision. The employee is entitled to be represented by an attorney or other representative, including a union steward if the employee is a bargaining unit member. The employee is entitled to a written decision on or before the effective date specifying the reasons for the decision and advising the employee of any appeal and grievance rights under 5 CFR § 752.405. An employee may challenge discipline under Subchapter II through an agency administrative grievance procedure, if applicable, or by filing a grievance under an applicable CBA. An employee may also appeal adverse actions covered by Subchapter II to the MSPB unless the employee first filed a grievance challenging the action under the CBA. Accordingly, a large number of MSPB decisions address the elements and relative seriousness of various kinds of misconduct. According to OPM, if the employee wishes to challenge an appealable adverse action under Subchapter II as discriminatory or retaliatory under the EEO laws, the employee may file a "mixed case" EEO complaint with the agency. The agency then issues a final agency decision that may be appealed to the MSPB. An employee affected by an appealable action (removal, suspension for more than 14 days, reduction in pay or grade) who believes that the action was motivated by prohibited discrimination based on the person's race, color, religion, sex, national origin, age, or disability may also file a "mixed case" appeal directly with MSPB and raise the discrimination claim in that forum. The employee may seek review of MSPB's decision on the discrimination claim before the EEOC. If MSPB and EEOC disagree on the discrimination claim and MSPB does not defer to EEOC's view, then a special panel of the EEOC and MSPB will be convened to resolve the disagreement. When deciding an appropriate penalty for misconduct, agency officials are to make decisions on a case-by-case basis, taking into consideration all relevant circumstances. Deciding officials within the agency should consult the Douglas Factors, 12 criteria developed by MSPB to guide such decisions (see appendix II). In the Douglas v. Veterans Administration decision, MSPB found that a penalty will be sustained as long as "managerial judgment has been properly exercised within tolerable limits of reasonableness." The list of Douglas Factors is not exhaustive. According to MSPB, weighing all relevant aggravating and mitigating factors and the totality of the circumstances is critical in any disciplinary case. The process agencies use to identify and address employee misconduct is illustrated in figure 2. Agencies may use progressive discipline to help determine which course of action to take when responding to misconduct.
OPM officials define progressive discipline as the "imposition of the least serious disciplinary or adverse action applicable to correct the issue or misconduct with penalties imposed at an escalating level for subsequent offenses." The Douglas Factors incorporate the concept of using a lesser penalty in appropriate circumstances. For instance, if an employee commits a first offense, the agency may choose to suspend the employee for 14 days or less. After that, the employee might learn from his or her mistake, correct the behavior, and not commit another offense, and therefore the agency will not discipline the employee again. However, the President has now prescribed that "supervisors and deciding officials should not be required to use progressive discipline," and that "the penalty for an instance of misconduct should be tailored to the facts and circumstances." This will affect how agencies determine appropriate penalties going forward. Alternatively, if the employee commits the same offense a second time, the agency may choose to suspend the employee for longer, or impose stronger adverse actions, including removal. According to OPM officials, progressive discipline is not defined or required by civil service law, rules, or regulations. Chapter 75 provides that an employee with appeal rights who wants to contest an agency decision to remove, suspend for over 14 days, or reduce in pay or grade may appeal the agency's decision to MSPB. If that employee is a member of a collective bargaining unit, the employee also has the option of pursuing a grievance under negotiated grievance procedures if the appeal has not been excluded from coverage by the collective bargaining agreement. The employee may pursue either option, but not both. The employee may seek review of an arbitrator's decision before the U.S. Court of Appeals for the Federal Circuit. If the employee is challenging an adverse action within the jurisdiction of the MSPB and also alleged unlawful discrimination before the arbitrator or was prevented from doing so by the negotiated grievance procedure, the employee may appeal the arbitrator's decision to the MSPB. In addition, the union may appeal an arbitration award concerning a suspension of 14 days or less to the Federal Labor Relations Authority (FLRA) on behalf of the employee. See figure 3 for the collective bargaining unit appeals process for major disciplinary actions. Employees may use several avenues if they elect to appeal adverse actions through the statutory appeals process for such actions (removal, suspension of more than 14 days, and reduction in grade or pay). If the employee believes the disciplinary action was motivated by unlawful discrimination, he or she may file a discrimination complaint with the agency or file an appeal directly with MSPB. If the employee believes the disciplinary action was taken in retaliation for whistleblowing, he or she may choose to file a whistleblower retaliation complaint before deciding to appeal to the MSPB. If the employee or agency does not agree with the decision rendered by an MSPB administrative judge (AJ), he or she may seek review before the full MSPB. See figure 4 for the statutory appeals process.

Initial Appeals to MSPB Take Around 100 Days to Resolve We analyzed MSPB's data and found that initial appeals at MSPB generally take from 63 to 152 days to render a decision. MSPB has a policy goal of resolving cases by an administrative judge on or before 120 days after the filing of the appeal.
An employee or agency can appeal an initial MSPB decision in a process called petition for review (PFR). PFR cases are reviewed by the full MSPB and take an additional 99 to 251 days, based on our analysis of the MSPB's data. The time that it takes to resolve cases at MSPB is consistent across demotions, suspensions of greater than 14 days, and removals. According to MSPB officials, the system is designed to require an individual to choose a path of review to the exclusion of other paths. Depending on the claims raised, there may be multiple levels of review of a single action before multiple fora. However, there is only one hearing at the administrative level; therefore, the timeline to resolve an adverse action appeal can be longer than the initial appeal and PFR. According to MSPB officials, the selected CHCOs, and the subject-matter experts we interviewed, agencies most often make the following errors, which may cause MSPB to reverse the adverse action decision:

Failure to follow procedures by the agency: MSPB may overturn an adverse action decision if the agency did not adhere to the processes set out in statute and regulation. This most often means that the agency did not give the employee a chance to respond to the adverse action charge or did not notify the employee of the right to an attorney.

Failure to follow procedures by the deciding official: An action may be vulnerable to modification or reversal upon appeal if the deciding official did not fulfill their role appropriately in weighing the evidence through a Douglas Factors analysis.

Ex parte communications: A challenge may be overturned if a deciding official gave consideration to any issue not in the proposal letter.

Incorrect labeling (or charge): Nothing in law or regulation requires an agency to attach a label to a charge of misconduct. However, if labels are used, they must be proven. An example used by MSPB provides that if an agency uses the label of "theft" as its charge, then the agency must prove that the employee "intended to permanently deprive the owner of possession" of the item in question. Experts told us that MSPB requires agencies to prove all legal aspects of a misconduct label.

Federal courts have held that it is impermissible to allow the official who makes the final decision in a removal proceeding to rely on aggravating factors regarding either the alleged offense or the proposed penalty that were not contained in the notice, and to which the employee did not have an opportunity to respond. MSPB is bound by this precedent.

Alternative Disciplines Can Help Agencies in Determining the Appropriate Response to Misconduct but Several Factors Affect How Agencies Respond Alternative discipline is an approach to addressing misconduct that is available to agencies in lieu of traditional penalties (e.g., letters of reprimand and suspensions of 14 days or less). According to MSPB, agencies may choose to offer alternative discipline at any stage of the disciplinary process. OPM officials said alternative discipline tends to be more focused on taking a corrective or remedial response rather than punitive action against an employee. In a report on alternative discipline, MSPB states that alternative discipline can take many forms and is an effort undertaken by an employer to address employee misconduct using a method other than traditional discipline. As one alternative discipline approach, MSPB recommends that agencies consider entering into an agreement with an employee.
In general, such an approach involves a legally binding written agreement between the employee and the agency addressing an act of misconduct. If the employee violates the agreement, the agency will proceed with additional or more serious forms of discipline, up to and including removal. MSPB also recommends that managers and human resources personnel consult with legal counsel when drafting and implementing an alternative discipline agreement that requires the employee's consent, adding that it is extremely important for agreements to meet certain legal requirements to form a valid agreement. We compiled a non-exhaustive list of alternative discipline approaches based on a literature review and interviews. Subject-matter experts, including the panel of CHCOs, reviewed this list and cited benefits and drawbacks to some of the approaches (see table 1). MSPB noted in its 2008 report that the specific alternative discipline approach an agency decides to use should be based on the nature and severity of the misconduct. According to OPM officials, alternative discipline approaches are not appropriate for egregious acts of misconduct or when the employee is remorseless, but rather for lower-level offenses where an employee may show remorse for the misconduct and demonstrate that he or she can be rehabilitated. Egregious acts of misconduct may involve discrimination, reprisal or retaliation, or sexual harassment. Alternative discipline approaches are also not appropriate when the employee's continued presence in the workplace would pose a threat to the employee or others. On a case-by-case basis, an agency may decide to provide counseling or additional training as appropriate, depending on the facts and circumstances that address specific acts of minor misconduct. Additionally, agencies have flexibility in using alternative discipline as a final effort before taking formal action such as suspension or removal. However, in its 2008 report, MSPB found that managers had applied alternative discipline approaches ineffectively, resulting in further inefficiencies in the civil service. CHCOs and subject-matter experts said managers and supervisors should coordinate internally with human resources staff, employee relations, and legal counsel when assessing whether an alternative discipline approach would result in correcting improper behavior and ultimately improve their workforce. Some subject-matter experts we interviewed expressed concern that workforces would view alternative discipline measures as providing opportunities for employees to avoid accountability, or as encouraging similar negative behaviors from coworkers, rather than penalizing the employee more stringently through a formal adverse action process. These subject-matter experts identified community service, buy-outs, involvement in process improvements, and clean-slate agreements as approaches that had this kind of effect. Additionally, some subject-matter experts told us that alternative discipline approaches such as community service and paper suspension agreements could have the unintended effect of benefiting the employee being disciplined.
For example, community service may allow the employee to serve the alternative discipline during their scheduled duty time instead of performing their regularly assigned duties. This may require the employee's co-workers to take on additional work while the employee serves the alternative discipline. Additionally, while a paper suspension limits interruption to work production, it also allows the employee to work in a pay status while carrying out the suspension. According to feedback we received from the CHCOs, some of these alternative discipline approaches were used more often than others, and some approaches were more effective at addressing employee misconduct. We did not evaluate how often or the extent to which any of these approaches are used at agencies, nor did we consider the propriety or legality of these approaches. According to the CHCOs and subject-matter experts, agency managers and supervisors may be able to effectively resolve employee misconduct cases through the use of alternative approaches, which can shorten the timeline and simplify the adverse action process in a manner that has the most potential to prevent additional harm to the workplace and avoid the potentially high costs of litigating a misconduct case.

Several Factors Can Affect Whether and How an Agency Addresses Employee Misconduct Current and former agency officials and subject-matter experts told us that several factors can affect whether and how an agency responds to misconduct. Both agency officials and subject-matter experts told us that supervisors may not report misconduct due to fear that an employee could counter with their own complaint. Several CHCOs and subject-matter experts told us that an agency's approach to dealing with misconduct can influence how first-line supervisors act. In a recently released MSPB publication that highlighted selected results of its 2016 Merit Principles Survey of managers and supervisors about challenges to addressing employee misconduct, 80 percent of managers and supervisors agreed to some extent or a great extent that their agency's culture poses a challenge when attempting to remove an employee for serious misconduct. Additionally, MSPB's report provided the perspectives of managers and supervisors regarding the factors that affect how agencies address misconduct: 77 percent of managers/supervisors agreed to some extent or a great extent that they do not feel supported by their agencies' senior leadership in their actions to remove an employee for serious misconduct; 88 percent somewhat or strongly agreed that some supervisors do not manage their employees' conduct because the supervisors want to avoid conflict; and 64 percent agreed to some extent or a great extent that they do not fully understand the process to remove an employee for misconduct. MSPB's 2016 survey findings were consistent with what agency officials and subject-matter experts told us during interviews.

Removal Actions for Misconduct Taken Under Chapter 75 Are Relatively Rare Our analysis of OPM data from fiscal years 2006 to 2016 shows that, on average, agencies disciplined approximately 17,000 employees, or less than 1 percent of the federal workforce, per year under Subchapter II of Chapter 75.
The number of employees who separate from the federal workforce for misconduct under alternative means, such as settlements, is not known and would not be recorded as misconduct in OPM's EHRI database, according to agency officials and experts. Many of the CHCOs and subject-matter experts we interviewed told us that while data around such cases are not collected government-wide, they believe internal resolutions using alternative approaches to address misconduct occur frequently.

Trends in Misconduct Removals Are Associated with Fluctuations in Probationary Employee Numbers According to EHRI data, as the number of probationary employees fluctuated over time, the number of terminations generally followed the same trend. One of the likely reasons for this fluctuation is that probationary employees are more likely to be terminated than career employees who are no longer in a probationary status, because probationary employees are not yet subject to the Chapter 75 process protections afforded to career employees. Similar to addressing performance issues, it is generally easier to terminate employees for misconduct during the probationary period. As we previously reported, the probationary period is an important management tool to evaluate the conduct and performance of an employee and should be treated as the last step in the hiring process. According to OPM, appropriate actions taken within the probationary period are the best way to avoid long-term problems.

The Most Widely Used Form of Formal Discipline for Misconduct Is Suspension; Approximately One-Fourth of Suspended Employees Have Multiple Suspensions Our analysis of data on personnel actions against employees for misconduct shows that the most common form of discipline is suspension. In 2016, agencies made 10,249 suspensions, 7,411 removals, and 114 demotions for misconduct (the numbers refer to the number of adverse actions that agencies took in 2016, not the number of employees who received adverse actions; one employee can be suspended multiple times, and each suspension is recorded as a separate personnel action in the employee's SF-50). The data we analyzed indicated that approximately one-fourth of suspended employees have multiple suspensions. According to OPM officials, third parties such as the MSPB will review whether disciplinary actions are taken "only for such cause as will promote the efficiency of the service," which includes the assessment of the relevant Douglas Factors. Figure 5 shows how many suspensions, demotions, and removals took place from fiscal years 2006 to 2016 according to EHRI data.

Better Data on Employee Misconduct Could Strengthen OPM's Oversight and Provide Clarity to Agencies Regarding How to Address Misconduct OPM collects data on personnel actions reported by most agencies and stores this information in the EHRI database, but these data could be improved to provide OPM with better information to help agencies address misconduct. Because not all misconduct data are entered into the database, the data presented in this report do not represent the entirety of employee misconduct instances that occur in the federal government. Personnel actions in the EHRI database originate from data that agencies send to OPM through the Standard Form 50 (SF-50), a form that documents personnel actions. OPM officials told us that lesser disciplinary actions such as a letter of reprimand are not documented by an SF-50.
Without maintaining comprehensive data regarding the extent and nature of misconduct in the federal government, OPM risks missing opportunities to provide agencies with guidance and other tools, such as targeted training, to help agencies better address cases of misconduct. Indeed, better data could help OPM and agencies identify systemic misconduct issues, such as misuse of government property or physical aggression toward a co-worker, as well as emerging problems that benefit from early detection and/or more comprehensive approaches. It should be noted that for the codes that indicate performance or misconduct as the underlying cause of an adverse action, it is not possible to make a clear distinction between whether the action was specifically related to misconduct, performance, or a mix of the two. Therefore, some cases include a mix of employee poor performance and misconduct. OPM officials said they do not have a sense of how frequently agencies use these (and other non-specific) nature of action (NOA) codes for misconduct-related actions. According to OPM officials, by establishing rules in terms of improving the efficiency of the service and the types of actions that will require specific procedures, Congress provided managers with maximum flexibility to pursue adverse actions whenever it would promote the efficiency of the service, whether the underlying impetus was a conduct issue or a failure to perform. OPM officials told us the Guide to Processing Personnel Actions directs agencies to indicate the nature of personnel actions in the EHRI database through the NOA codes. These codes indicate the employee type, the nature of the personnel action to be recorded in EHRI, as well as the underlying cause (e.g., conduct or performance) for the personnel action. OPM performs validity checks on the NOA codes and legal authorities to assure that agencies are compliant with OPM reporting requirements. OPM also periodically reviews agencies' use of NOA codes and legal authorities in general. The EHRI database does not collect or store the specific type of misconduct—only that the personnel action belongs in the misconduct category. Several CHCOs and subject-matter experts whom we interviewed agreed this flexibility is helpful to agencies. For example, officials said that while common types of misconduct exist, such as time-and-attendance infractions, many unique types of misconduct cannot be placed into easily identifiable categories. The officials added that it would be easy for agencies to mislabel misconduct. For instance, OPM officials said that disobeying an agency's policy or rules could manifest itself in many different ways. Moreover, we found inconsistencies in the data OPM provided. For example, during this review, we initially used stored EHRI data from previous audits for fiscal years 2006 to 2014. We used NOA codes provided by OPM officials to analyze employee misconduct data in the executive branch. When we compared the results of our data analysis for this period to the data OPM provided for the same period, we found that OPM's data identified approximately 500 more adverse actions per year. Though we consulted with OPM, we were unable to resolve these differences. OPM officials noted that agencies submit data on a rolling basis and may later correct it, and some SF-50 forms are filed after the fiscal year ends, so our stored data may not include these actions.
According to OPM officials, agencies generally have day-to-day oversight for determining the use of NOA codes and legal authorities. Agencies are required to report a valid NOA code and legal authority, as found in OPM's Guide to Data Standards. Guidance to agencies for classifying misconduct into the correct nature of action codes is provided in The Guide to Processing Personnel Actions. Although OPM verifies that agencies provide valid NOA codes in their data, OPM asserts that agencies have responsibility for determining which NOA codes to use for each personnel action based on OPM documentation. As we noted in a 2017 report on federal human resources data, OPM developed EHRI to (1) provide for comprehensive knowledge management and workforce analysis, forecasting, and reporting to further strategic management of human capital across the executive branch; (2) facilitate the electronic exchange of standardized human resources data within and across agencies and systems and the associated benefits and cost savings; and (3) provide unification and consistency in human capital data across the executive branch. An important part of OPM's role is to support federal agencies' human capital management activities, which includes ensuring that agencies have the data needed to make staffing and resource decisions to support their missions. EHRI data are essential to government-wide human resource management and evaluation of federal employment policies, practices, training, and costs. The ability to capitalize on this information is dependent, in part, on the reliability and usefulness of the collected data. According to Federal Internal Control Standards, management is to obtain relevant data from reliable internal and external sources in a timely manner based on the identified information requirements. More specific guidance from OPM to agencies on which NOA codes to use for misconduct cases would increase confidence in the data, without requiring practitioners to capture and tabulate the type of misconduct. More importantly, enhanced data on the extent and nature of misconduct would improve OPM's oversight ability and agencies' ability to target management training and identify specific trends in misconduct.

Most MSPB Appeals Are Resolved by the Parties, Which Benefits Both the Agency and Appellant, According to Officials Our analysis of MSPB data found that the most frequent appeal outcome is a settlement. MSPB officials said settlements often benefit both the agency and the appellant because they manage risk. The officials said that when entering an adverse action appeal with MSPB, both the agency and the appellant face a risk: the agency is at risk of spending time and money on litigation only to, in some cases, have its decision overturned; the appellant is at risk of being removed from his or her position with a permanent mark on his or her record, which may make finding another job difficult. To avoid these outcomes for both parties, an agency may offer the employee a variety of settlement options to incentivize the employee to leave willingly. Settlement options may include, but are not limited to, back pay for the time that the employee was out of work but still litigating the appeal, and payment of the employee's attorney fees. OPM notes, on the other hand, that the MSPB's data might not always reflect voluntary settlements. According to OPM officials, the MSPB has a large caseload and typically strives to induce the agency to settle.
OPM officials noted that the pressure to settle cases regardless of merit, after the agency has made the determination that discipline is necessary and has gone through the procedure to carry it out, may be one of the most significant deterrents to dealing with misconduct or performance under Chapter 75. Figure 6 shows the number of MSPB appeals filed from fiscal years 2006 to 2016 that were affirmed, reversed, settled, or dismissed. We also analyzed data from MSPB's database of appeals cases. MSPB hears appeals from those adverse actions that Congress made appealable under Subchapter II of Chapter 75, including suspensions of greater than 14 days, demotions, and removals. These actions can be taken for performance problems as well as misconduct under Chapter 75—MSPB does not differentiate between performance and misconduct in its database. Rather, the agency categorizes its cases by legal authority. Therefore, similar to OPM's EHRI data, any analysis with MSPB's data may include performance appeals as well as misconduct appeals under Chapter 75.

Key Steps Agencies Can Take to Better Prevent and Address Employee Misconduct On the basis of our literature review, as well as interviews with CHCOs and subject-matter experts, we identified key promising practices and lessons learned that can help agencies better prevent and address employee misconduct. These key practices include tables of penalties, engaging employees, making full use of probationary periods, and maintaining effective lines of communication and collaboration among human resources office staff, line-level management, and agencies' legal counsel. Going forward, it will be important for OPM and agencies, in concert with the CHCO Council, to examine each of these practices and lessons learned, refine them as appropriate, and share how best to implement them.

Tables of Penalties Can Help Guide Responses to Misconduct We found that tables of penalties—a list of recommended disciplinary actions for various types of misconduct—though not required by statute, case law, or OPM regulations, nor used by all agencies, can help ensure the appropriateness and consistency of a penalty in relation to an infraction. Further, tables of penalties can help ensure the disciplinary process is aligned with merit principles because they make the process more transparent, reduce arbitrary or capricious penalties, and provide guidance to supervisors. According to the panel of CHCOs and the subject-matter experts we interviewed, a table of penalties may also provide information on the period over which offenses are cumulative, for purposes of assessing progressively stronger penalties. The officials described tables of penalties as a listing of the infractions committed most frequently by agency employees, along with a suggested range of penalties for first, second, and third offenses; however, the range of penalties should not be too broad, and the penalties should be progressive, meaning that they increase in harshness with each subsequent offense committed by the employee. The CHCOs and subject-matter experts said a table of penalties should also provide sufficient flexibility in the penalty range (e.g., 1-day to 5-day suspensions for a first offense) to consider mitigating and aggravating factors when considering discipline for misconduct.
OPM officials stated that where an agency elects to have a table of penalties, it should serve as a guide in addressing misconduct, noting that it does not serve as a substitute for management's judgment. According to OPM officials, management must take into account the applicable Douglas Factors, and must consider other appropriate circumstances not covered by the Douglas Factors. Neither OPM nor MSPB provides any written guidance to agencies in developing their tables of penalties. However, OPM officials told us their agency is available to provide assistance upon request to agencies that elect to use a table of penalties. Views on the usefulness of tables of penalties were mixed among agency officials and subject-matter experts. On the one hand, some agency officials and other subject-matter experts told us a table of penalties can assist agencies in determining an appropriate penalty and ensure consistency of penalty selection from case to case. For that reason, they said the tables can also help ensure the action taken is legally defensible based on past similar cases. MSPB officials told us that they believe tables of penalties, which rely on the Douglas Factors, can help human capital practitioners when making decisions about employee misconduct cases. On the other hand, several subject-matter experts and agency officials, including OPM, indicated that tables of penalties tend to be too broad in the range of penalties for individual offenses, which they said ultimately limited their usefulness in the decision-making process. OPM officials said their agency does not use a table of penalties, nor does it support encouraging agencies to establish tables of penalties. According to OPM officials, where tables of penalties exist, they are established at an agency's discretion and not under OPM's auspices. OPM officials believe agencies have the ability to address misconduct appropriately without a table of penalties and with sufficient flexibility to determine the appropriate penalty for each instance of misconduct. Further, OPM officials said that an agency that adopts a table of penalties will be required to consider it, if applicable, as part of MSPB's Douglas Factors analysis, which, in their view, imposes an additional condition on the agency's ability to defend its actions. Finally, OPM said there is no substitute for management judgment and that tables of penalties should not be applied so inflexibly as to impair consideration of other factors relevant to the individual case. In short, tables of penalties, if drafted at an appropriate level of detail and used in conjunction with the Douglas Factors and other case-specific forms of discretion, could provide agencies with reasonable assurance that similar cases of misconduct are addressed with similar penalties, as appropriate, and can reduce the risk of inconsistently and potentially unfairly applying remedial measures.

Set Clear Expectations and Engage Employees Agency officials and subject-matter experts told us that having effective agency policies and programs that set clear expectations around behavior and that engage employees may help reduce the number of misconduct incidents that occur. These policies and programs may also mitigate the damage when an incident does occur. Several subject-matter experts said that agencies should set formal expectations early and reinforce these expectations throughout an employee's career.
To this point, as we discussed in our 2015 report on addressing substandard employee performance, agencies should help managers and supervisors take appropriate action if misconduct occurs during an employee's probationary period. Some subject-matter experts indicated that agencies may also consider conducting more thorough job screening and hiring processes, which could help determine if an individual is a good fit for the agency. Specifically, the subject-matter experts mentioned that agencies should take a closer look at a prospective employee's work history and carefully check references. We also learned from our interviews that, as a deterrent, agencies must clearly communicate that an employee will be held accountable for any acts of misconduct. According to OPM, agencies can mitigate the risks of these difficulties by establishing a well-trained, experienced, and empowered employee and labor relations staff. OPM said these individuals play a crucial role in educating supervisors and managers in taking appropriate and sustainable disciplinary actions. CHCOs and subject-matter experts provided a number of key promising practices that an agency can use to mitigate and address employee misconduct, including:

Demonstrating positive conduct by the agency's senior leadership (tone at the top): Through policies and their own individual actions, senior leaders must exhibit positive workplace behavior as an example to agency employees.

Maintaining a good workplace atmosphere: Agencies should take steps to monitor workforce morale and initiate programs that encourage respect and community.

Engaging employees by connecting them directly to the agency's mission: Employees should have a sense of purpose and commitment toward their employer and its mission, which can lead to better organizational performance.

Making full use of the probationary period for employees: Supervisors should use probationary periods as an opportunity to evaluate an employee's performance and conduct to determine if an appointment to the civil service should become final.

Setting and communicating clear rules and expectations regarding employee conduct: Agencies should set expectations about appropriate conduct in the workplace and communicate consequences of inappropriate conduct at the earliest possible time after on-boarding an employee.

Assuring that employees conform to any applicable standards of conduct: Supervisors and managers, with the support of their agencies' leadership and human resources staff, should train employees on and monitor their compliance with the agency's stated conduct policies.

Maintaining effective lines of communication and collaboration among the human resources office staff, line-level management, and agencies' legal counsel: Agencies should establish clear lines of communication across relevant offices to ensure misconduct cases are addressed effectively and consistently.

Conducting ongoing training for supervisors and holding them accountable for addressing misconduct in a timely manner when it occurs: Supervisors should be trained in identifying employee misconduct cases and knowledgeable about the process for addressing such cases.

More Effective Training Could Help Supervisors Identify and Deal with Misconduct MSPB and OPM officials as well as subject-matter experts said human resources staff and line-level supervisors and managers would benefit from additional training in how to address employee misconduct.
The subject-matter experts told us that managers do not receive sufficient training in how to identify and subsequently deal with misconduct in the workplace. Specifically, subject-matter experts told us that many supervisors and managers do not understand the requirements needed to remove an employee for misconduct, including misconceptions about the standard of proof required. Many subject-matter experts repeated observations MSPB made in its 2008 report that, without sufficient training, managers and supervisors may find it difficult to engage in challenging one-on-one conversations with an employee about misconduct. Agency officials and subject-matter experts also told us that supervisory training varies by agency. Our subject-matter experts said some agencies are more structured and provide staff with training curricula with required timetables to complete, while others rely on staff to self-guide the training they need. We found many agencies contract out specific training or provide learning opportunities to staff on their intranet sites via e-learning tools. Most subject-matter experts said that misconduct training is likely more effective when delivered in person, due to the broad range of issues related to misconduct. OPM officials told us that supervisors and managers are responsible for observing and enforcing applicable laws in the federal workplace. OPM officials also indicated that training, resource allocation, skills, and knowledge all have a bearing on the administration of the disciplinary process. According to OPM, good communication and partnerships are also critical to processing a solid, sustainable response related to misconduct. OPM guidelines require that agencies provide training when employees make critical career transitions, for instance from nonsupervisory to manager or from manager to executive. Further, OPM's Supervisory and Managerial Curriculum Framework highlights human resources technical areas and leadership competencies necessary for success. The curriculum framework includes employee and labor relations with supporting learning objectives. OPM has specific regulatory requirements for training and development of supervisors, managers, and executives under 5 CFR § 412.202, including to provide training within 1 year of an employee's initial appointment to a supervisory position and to follow up periodically, but at least once every 3 years, by providing each supervisor and manager additional training on the use of appropriate actions, options, and strategies to improve employee performance and productivity; conduct employee performance appraisals in accordance with agency appraisal systems; and identify and assist employees with unacceptable performance. According to 5 U.S.C. § 4103, it is the responsibility of each agency to train its employees. According to OPM officials, OPM is not responsible under the CSRA for providing training for the federal workforce. However, while agencies are accountable for providing required training for their supervisors, OPM has a key role in ensuring the training meets the government-wide needs of supervisors. By taking steps to help agencies improve the training they provide supervisors and managers on addressing misconduct, OPM could help those managers ensure they have the knowledge and skills to effectively deal with misconduct in the workplace.
For example, OPM could consider the feasibility of developing more in-person training modules designed to provide interactive or role-play scenarios around addressing employee misconduct. Furthermore, subject-matter experts said that if an agency is not training new supervisors to equip them with the appropriate skills to address misconduct, there may be inconsistencies in how misconduct is handled across the agency. Without sufficient training, supervisors and managers may not be addressing misconduct appropriately, if at all.

Internal Collaboration Can Help Agency Components Better Communicate about Misconduct Cases Many of the subject-matter experts we interviewed said that it is important that the primary stakeholders—first-level supervisors and managers and human resources and general counsel offices—collaborate on the agency's approach to dealing with misconduct. We found agencies vary in how collaboration takes place. For example, some subject-matter experts and CHCOs told us that an agency may choose to handle a case by having its human resources staff and management work closely together. The subject-matter experts we interviewed said this collaboration can sometimes include general counsel staff, if necessary. For example, at EPA, the human resources office collaborates with the office of general counsel and the agency's Office of Inspector General Office of Investigations (OI). EPA officials told us that their agency's human resources office, OI, general counsel, and labor relations staff meet biweekly to discuss ongoing misconduct investigations and to provide EPA's senior management with reports of investigations on the facts surrounding allegations of employee misconduct. According to EPA, OI also provides real-time notification whenever OI receives information concerning serious misconduct, before the investigation is completed, so EPA management can take appropriate immediate mitigating steps, should it be necessary. However, OI does not have a role in determining the type of discipline, if any, to be imposed upon the employee, nor does OI have any role in helping to prevent misconduct in EPA's workplace. We did not obtain data to verify that this process has been successful, but we agree that enhanced communication among key stakeholders is important to addressing misconduct.

An Agency's Culture and Mission Can Impact How Agencies Approach and Respond to Misconduct Most of our subject-matter experts told us an agency's culture and the nature of its work play a significant role in how the agency addresses employee misconduct. For example, several subject-matter experts told us that law enforcement and defense-related agencies, or agencies with particular jobs where injuries may occur or lives may be at risk, often have significantly less tolerance for employee misconduct than other agencies. In addition, OPM officials said that, according to past MSPB studies, if an agency views federal employee due process procedural rights as burdensome and restrictive, this may discourage supervisors from addressing misconduct as it occurs. An MSPB report addressed concerns that the culture in many federal agencies prevents them from effectively dealing with problem employees. Many of the subject-matter experts we interviewed indicated that if an agency's culture is risk averse, it may be less aggressive in pursuing adverse actions, and instead either ignore misconduct or reassign an employee without holding him or her accountable for the misconduct.
Conclusions The process for dismissing an employee for misconduct can be complex and lengthy. However, many of these process challenges can be avoided or mitigated with effective performance management. Supervisors who take performance management seriously and have the necessary training and support to address misconduct can help employees either change their conduct or be removed from the federal workforce. OPM has a role in ensuring that agencies have the tools and guidance they need to effectively address misconduct and maximize the productivity of their workforces. Though OPM already provides a variety of tools, guidance, and training to help agencies address issues related to misconduct, we found opportunities for OPM to do more to identify the nature of employee misconduct, improve training tools for managers, and make tools and guidance available for agencies when and where they need them.

Recommendations for Executive Action: We are making the following three recommendations to the Director of OPM: The Director of OPM, after consultation with the CHCO Council, should explore the feasibility of improving the quality of data on employee misconduct by providing additional guidance to agencies on how to record instances of misconduct in OPM's databases. (Recommendation 1) The Director of OPM, after consultation with the CHCO Council, should broadly disseminate to agencies the promising practices and lessons learned, such as those described in this report, as well as work with agencies through such vehicles as the CHCO Council to identify any additional practices. (Recommendation 2) The Director of OPM, after consultation with the CHCO Council, should provide guidance to agencies to enhance the training received by managers/supervisors and human capital staff to ensure that they have the guidance and technical assistance they need to effectively address misconduct and maximize the productivity of their workforces. (Recommendation 3)

Agency Comments and Our Evaluation We provided a draft of this product to the Acting Chairman of MSPB and the Acting Director of OPM for comment. The Acting Chairman of MSPB provided technical comments on the draft, which we incorporated as appropriate. MSPB did not comment on the recommendations. OPM's Associate Director for Employee Services provided written comments on the draft, which are reproduced in appendix III. In its comments, OPM noted that while we had made many of the changes OPM suggested, the changes still did not reflect all of OPM's feedback, and the draft also contained what OPM believed to be inaccurate information and incomplete representations of its views. To the contrary, we maintain that our report contains accurate factual information and represents the views of OPM that we collected through reviewing documents, interviewing OPM officials, and incorporating OPM's written feedback. Of our three recommendations, OPM partially concurred with two and did not concur with one. For the recommendations OPM partially concurred with, OPM described the steps it planned to take to implement them. We stand by our recommendations, which we maintain would give OPM and Congress better visibility over the extent and nature of employee misconduct in the federal government, as well as help strengthen agencies' capacity to address misconduct.
With respect to OPM's overall comments, OPM noted that Chapter 75 is a set of procedural requirements that must be met when certain actions are contemplated that would impact an employee's pay, specifying that it was never intended to encompass or catalogue all forms of action an agency could take to address misconduct. On this issue, we agree with OPM on the purpose of Chapter 75 and noted as much in our description of the statutorily established guidelines and procedures throughout this report. OPM also noted that there is no general statutory definition of misconduct, and that managers need maximum flexibility to pursue adverse actions, whether the underlying impetus is a conduct issue, a failure to perform, or any other reason related to federal employment. We also agree with OPM on this point, as our report makes clear that, in certain cases, employee performance and misconduct can overlap, conflating the two issues. As indicated in this report, OPM believes a table of penalties creates additional conditions and restrictions on an agency's ability to address misconduct and does not improve the agency's ability to address misconduct effectively. Accordingly, OPM does not require or encourage agencies to adopt tables of penalties. Our report recognizes both the pros and cons of an agency having a table of penalties and the circumstances under which one could be effective. However, we believe the use of a table of penalties ensures the appropriateness and consistency of a penalty in relation to the charge. It also ensures that merit system principles guide the process by providing penalty transparency, reducing arbitrary or capricious penalties, and serving as a guide for managers and supervisors who deal with these issues. With respect to our recommendations, OPM did not concur with our first recommendation to explore the feasibility of improving the quality of data on employee misconduct by providing additional guidance to agencies on how to record instances of misconduct in OPM's databases. Specifically, OPM noted that the OPM Guide to Processing Personnel Actions is a thorough resource that has been and continues to be successfully relied upon by agencies to document adverse actions as expressly defined in Chapter 75. We acknowledge OPM's view that NOA codes were never intended or designed to allow reporting of adverse actions down to the level of the particular kind of misconduct involved, but we maintain that our recommendation would increase confidence in the data on misconduct and make the data more useful to OPM and agencies. Further, OPM's non-concurrence with this recommendation seems inconsistent with the Administration's own initiatives, including the May 2018 Executive Order Promoting Accountability and Streamlining Removal Procedures Consistent with Merit System Principles, which was released after OPM commented on our draft report. Specifically, the Executive Order requires all federal agencies, beginning in fiscal year 2018 and for each fiscal year thereafter, to provide a report to the OPM Director containing detailed data about how they addressed issues of misconduct.
For example, agencies will need to report on (i) the number of civilian employees in a probationary period or otherwise employed for a specific term who were removed by the agency; (ii) the number of adverse personnel actions taken against civilian employees by the agency, broken down by type of adverse personnel action, including reduction in grade or pay (or equivalent), suspension, and removal; and (iii) the number of decisions on proposed removals by the agency taken under chapter 75 of title 5, United States Code, not issued within 15 business days of the end of the employee reply period. We maintain that enhanced data on the extent and nature of misconduct will help strengthen OPM and congressional oversight and better position agencies to address misconduct through management training and other approaches. OPM partially concurred with our second recommendation to broadly disseminate to agencies the promising practices and lessons learned, such as those described in this report, as well as work with agencies through such vehicles as the CHCO Council, to identify any additional practices to help agencies better address employee misconduct. Indeed, the President's Management Agenda (PMA) for 2018 states, "Aligning and managing the Federal workforce of the 21st Century means spreading effective practices among human resources specialists." In response to this recommendation, OPM noted that some of the key practices and lessons discussed in this report are already part of OPM's comprehensive accountability toolkit for addressing employee misconduct across the federal government and are frequently communicated through ongoing educational outreach to federal agencies and available on OPM's website. Specifically, OPM said it will decide which appropriate measures it should take to obtain examples of practices agencies believe are promising and will broadly disseminate any of these practices and lessons learned as identified by OPM. We acknowledge OPM's existing efforts to develop and disseminate promising practices and lessons learned, but maintain that OPM should be open to considering additional practices from other sources. OPM also partially concurred with our third recommendation to provide guidance to agencies to enhance the training received by managers/supervisors and human capital staff to ensure that they have the guidance and technical assistance they need to effectively address misconduct and maximize the productivity of their workforces. In its response, OPM said it will continue to play its statutory role under 5 U.S.C. Chapter 41 and will support agencies on a cross-agency priority goal, which it believes could be read to encompass training, pursuant to the PMA, for example by providing guidance to agencies on training requirements for managers, supervisors, and human resources staff. However, OPM notes that it is not responsible under current statute for providing training to the federal workforce. As stated in the report, while agencies are accountable for providing required training for their supervisors, OPM has a key role in ensuring that the training meets the needs of supervisors.
Further, OPM's position on this recommendation seems inconsistent with the Administration's own initiatives, including the May 2018 Executive Order, which states that "the OPM Director and the Chief Human Capital Officers Council shall undertake a Government-wide initiative to educate Federal supervisors about holding employees accountable for unacceptable performance or misconduct under those rules," following any final rules issued pursuant to parameters set in the Order. Indeed, the PMA states, "In order to best leverage the workforce to achieve our mission efficiently and effectively, Government needs to remove employees with the worst performance and conduct violations." By taking steps to help agencies improve the training they provide supervisors and managers on addressing misconduct, OPM could help ensure those managers have the knowledge and skills to effectively deal with misconduct in the workplace. OPM also provided technical comments, which we have incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of the Office of Personnel Management, the Chairman of the Merit Systems Protection Board, as well as to the appropriate congressional committees and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

You asked us to examine the process for addressing misconduct and to identify any challenges in removing employees for misconduct. Our objectives were to (1) describe the process that agencies are generally required to follow in responding to employee misconduct in the federal service; (2) identify alternative approaches to the formal legal process that agencies can use to respond to misconduct, and assess what factors affect agencies' responses; (3) describe trends in removals and other adverse actions resulting from misconduct; and (4) identify key steps agencies can take to help them better prevent and address misconduct.

To describe the process that most agencies are generally required to follow in responding to employee misconduct in the federal service, we reviewed relevant sections of Title 5, Chapter 75 of the U.S. Code (herein Chapter 75), which contains the statutory process for formally disciplining employees for misconduct and performance. We also reviewed the Civil Service Reform Act and OPM regulations to describe and determine the authority agencies have to address employee misconduct in the federal service, including formal procedural and employee appeal rights. Additionally, we reviewed 5 U.S.C. §§ 7701 and 7702 (herein Chapter 77), which contain the statutory process for employee appeals to the Merit Systems Protection Board (MSPB) and subsequent appeals to the Equal Employment Opportunity Commission (EEOC) (5 U.S.C. § 7701(a)-(b); 5 U.S.C. § 7702(b)). We also counted the outcomes of the appealed cases from 2006 through 2016, by year and in aggregate (e.g., of the total number of cases, how many were reversed, upheld, mitigated, or settled).
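To illustrate this tabulation, a minimal sketch in Python (using pandas) is shown below. The column names and values are hypothetical stand-ins for fields in MSPB's case data, not the actual schema.

```python
import pandas as pd

# Hypothetical extract of MSPB appeal decisions; real field names differ.
cases = pd.DataFrame({
    "fiscal_year": [2006, 2006, 2007, 2016],
    "outcome": ["settled", "upheld", "reversed", "mitigated"],
})

# Outcomes by year: one row per fiscal year, one column per outcome.
outcomes_by_year = pd.crosstab(cases["fiscal_year"], cases["outcome"])

# Aggregate outcome counts across fiscal years 2006-2016.
aggregate_outcomes = cases["outcome"].value_counts()

print(outcomes_by_year)
print(aggregate_outcomes)
```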
For the purpose of our analysis, we used the following nature of action (NOA) code categories in OPM's EHRI database: (1) codes directly attributed to misconduct; and (2) codes that indicate a mix of misconduct or poor performance. Based on data limitations in both databases, we did not make any evaluative assessments from our data analysis.

To identify alternative approaches to the formal legal process, we reviewed documentation provided by MSPB on alternative discipline approaches used by agencies to address employee misconduct. We also reviewed OPM regulations and documents to determine the authority agencies have to address employee misconduct in the federal service, including formal procedural and employee appeal rights. We interviewed current and former practitioners, subject-matter experts, and academics to identify alternative approaches that they were aware of or that were commonly used at agencies to address employee misconduct. To develop our list of alternative discipline approaches to addressing employee misconduct, we conducted a literature review and reviewed reports and documents to identify alternative discipline approaches commonly used to address employee misconduct in the federal sector. After compiling our non-exhaustive list of alternative approaches, we contacted our previously interviewed subject-matter experts and asked them to provide their final thoughts or suggestions on our list of alternative discipline approaches. We added those additional approaches to the list. We interviewed human capital experts from academia, unions, and former and current human resources practitioners. We also interviewed a panel of CHCOs to gain insight into the agency perspective on addressing employee misconduct. To identify CHCO members, we asked the Director of the CHCO Council to select CHCOs that have knowledge and experience in addressing employee misconduct. Agency size and mission were also considered as part of the selection process to gain a range of perspectives. Our panel of CHCOs was from the Departments of Commerce, Defense, and Housing and Urban Development, the National Science Foundation, and the Nuclear Regulatory Commission. We also reviewed prior work by MSPB in developing our list of commonly used alternative discipline approaches to employee misconduct in the federal sector.

To describe and assess the factors that can affect an agency's response to employee misconduct, we interviewed:

OPM officials and representatives from Employee Services, Human Resources Solutions, Planning and Policy Analysis, and the Office of the Chief Information Officer;

MSPB officials from the Office of the acting Chairman & Vice Chairman, the Office of Information Resources Management, and the Office of Policy & Evaluation;

a panel of Chief Human Capital Officers (CHCO);

National Treasury Employees Union officials;

American Federation of Government Employees officials;

Federal Managers Association officials;

individual members of the Federal Employees Lawyers Group;

Partnership for Public Service officials;

Senior Executives Association officials; and

selected individuals with expertise in human capital management, specifically focused on employee misconduct, from academia and the private sector.

We selected our list of interviewees based on GAO's guidance for selecting experts, the interviewees' practical experience in applying and practicing administrative law, and, for academics, their specific areas of research.
To assess the factors that agencies use to deal with employee misconduct, we analyzed the interviewee responses, identified key themes that were common throughout our interviews, and counted the frequency of those key themes.

To describe the trends in removals and adverse actions resulting from misconduct at Chief Financial Officer (CFO) Act agencies, we analyzed OPM's Enterprise Human Resource Integration (EHRI) data from fiscal years 2006 to 2016. The 24 CFO Act agencies are listed at 31 U.S.C. § 901(b) and include: the U.S. Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, Transportation, the Treasury, Veterans Affairs, and State, as well as the U.S. Agency for International Development, Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, Office of Personnel Management, Small Business Administration, and the Social Security Administration. These agencies account for a very high proportion of the total federal labor force. For reporting purposes, we only provide data on the number of adverse actions rather than the number of employees subject to an adverse action, because one employee may be subject to more than one adverse action. This can be a result of progressive discipline or could indicate issues related to data reliability. As part of our analysis, we identified all employees subject to each type of adverse action (removal, suspension, and demotion) and quantified the number of similar adverse actions taken against the same person (5 U.S.C. §§ 7502 and 7512). We also analyzed adverse actions taken against probationary employees to provide some relative statistics. To show trends in misconduct removals associated with probationary employees, we identified the number of adverse actions taken against probationary employees (overall and by type of action).

To determine the trends in employee appeals to MSPB, we analyzed MSPB's appeals data on adverse actions taken under Chapter 75 from fiscal years 2006 to 2016. To understand what types of adverse actions are driving appeals to MSPB, we identified the underlying adverse action for each case by fiscal year of appeal filing. To determine how appeals were resolved by MSPB, we identified the number of appeals that were settled, mitigated, dismissed, reversed, affirmed, or otherwise resolved. We calculated overall trends by fiscal year as well as trends by fiscal year for each type of underlying adverse action. To determine how long the appeals process takes, we calculated the mean time for resolution along with other statistics (minimum, maximum, 25th percentile, 75th percentile) for different types of adverse action; a simplified, illustrative sketch of these calculations appears below. Additionally, because appellants can file a Petition for Review (PFR) with the larger MSPB body, we looked at the time for the initial appeal, the PFR, and the total time from filing through final decision.

To assess the reliability of both the EHRI and MSPB data, we reviewed past GAO data reliability assessments, interviewed relevant agency officials, and conducted electronic testing to evaluate the accuracy and completeness of the data used in our analyses. We determined the data used in this report to be sufficiently reliable for our purposes, subject to the constraints identified in our report.
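The sketch below illustrates, under assumed column names, two of the calculations described in this appendix: quantifying the number of similar adverse actions taken against the same person in the EHRI data, and summarizing appeal resolution times by type of underlying adverse action in the MSPB data. Neither database necessarily uses these field names; the sketch is illustrative only.

```python
import pandas as pd

# Hypothetical EHRI extract: one row per adverse action taken.
ehri = pd.DataFrame({
    "employee_id": ["A1", "A1", "B2", "C3"],
    "action_type": ["suspension", "suspension", "removal", "demotion"],
})

# Number of similar adverse actions taken against the same person.
repeat_actions = (
    ehri.groupby(["employee_id", "action_type"]).size().rename("count")
)

# Hypothetical MSPB extract: one row per resolved appeal.
mspb = pd.DataFrame({
    "underlying_action": ["removal", "removal", "suspension"],
    "days_to_resolution": [120, 300, 95],
})

# Mean, minimum, maximum, and 25th/75th percentiles of resolution
# time, computed separately for each type of underlying adverse action.
timing_stats = mspb.groupby("underlying_action")["days_to_resolution"] \
    .describe(percentiles=[0.25, 0.75])

print(repeat_actions)
print(timing_stats)
```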
To identify and provide key promising practices and lessons learned at agencies from encountering and responding to employee misconduct, we conducted a literature review to identify practices and lessons learned associated with employee misconduct in the federal sector. We interviewed officials from OPM, MSPB, the Equal Employment Opportunity Commission (EEOC), and the Office of Special Counsel (OSC) to obtain their perspectives on responding to employee misconduct through alternative approaches. We interviewed officials from the Environmental Protection Agency (EPA) to obtain their perspectives on recent efforts to better coordinate with their Inspector General to address cases of employee misconduct. We also obtained the perspectives of a panel of CHCOs from selected agencies, as well as former human capital practitioners and other subject-matter experts with extensive experience working on employee misconduct issues.

Appendix II: Douglas Factors – 12 Criteria Developed by the MSPB to Guide Agency Decisions on Employee Misconduct

The Merit Systems Protection Board, in its landmark decision, Douglas v. Veterans Administration, 5 M.S.P.R. 280 (1981), established non-exclusive criteria that supervisors must consider, as appropriate, in determining an appropriate penalty to impose for an act of employee misconduct ("the Douglas factors"). The following relevant factors must be considered in determining the severity of the discipline:

1. the nature and seriousness of the offense, and its relation to the employee's duties, position, and responsibilities, including whether the offense was intentional or technical or inadvertent, or was committed maliciously or for gain, or was frequently repeated;

2. the employee's job level and type of employment, including supervisory or fiduciary role, contacts with the public, and prominence of the position;

3. the employee's past disciplinary record;

4. the employee's past work record, including length of service, performance on the job, ability to get along with fellow workers, and dependability;

5. the effect of the offense upon the employee's ability to perform at a satisfactory level and its effect upon supervisors' confidence in the employee's ability to perform assigned duties;

6. consistency of the penalty with those imposed upon other employees for the same or similar offenses;

7. consistency of the penalty with any applicable agency table of penalties;

8. the notoriety of the offense or its impact upon the reputation of the agency;

9. the clarity with which the employee was on notice of any rules that were violated in committing the offense, or had been warned about the conduct in question;

10. the potential for the employee's rehabilitation;

11. mitigating circumstances surrounding the offense such as unusual job tensions, personality problems, mental impairment, harassment, or bad faith, malice, or provocation on the part of others involved in the matter; and

12. the adequacy and effectiveness of alternative sanctions to deter such conduct in the future by the employee or others.

The list was not intended to be exhaustive.

Appendix III: Comments from the Office of Personnel Management

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact: Robert Goldenkoff, (202) 512-2757 or goldenkoffr@gao.gov.

Staff Acknowledgments: In addition to the contact named above, Tom Gilbert, Assistant Director, and Anthony Patterson, Analyst-in-Charge, supervised the development of this report.
Isabel Band, Crystal Bernard, Jehan Chase, Sara Daleski, Shirley Jones, Serena Lo, Krista Loose, Amanda Miller, and Kayla Robinson made major contributions to all aspects of this report. Robert Gebhart and Robert Robinson provided additional assistance.
Why GAO Did This Study

Misconduct is generally considered an action by an employee that impedes the efficiency of the agency's service or mission. Misconduct incidents can affect other aspects of employee morale and performance and impede an agency's efforts to achieve its mission. GAO was asked to examine how executive branch agencies address employee misconduct. This report (1) describes the process agencies are required to follow in responding to employee misconduct; (2) identifies alternative approaches to the formal process that agencies can use and assesses what factors affect agencies' responses to misconduct; (3) describes trends in removals and other adverse actions resulting from misconduct; and (4) identifies key practices agencies can use to help them better prevent and address misconduct. To address these objectives, GAO reviewed relevant sections of title 5 of the U.S. Code; analyzed MSPB and OPM data; and interviewed, among others, agency officials and subject-matter experts.

What GAO Found

Chapter 75 of title 5 of the U.S. Code specifies the formal legal process that most agencies must follow when taking adverse actions, i.e., suspensions, demotions, reductions in pay or grade, and removals, for acts of employee misconduct. Chapter 75 details the built-in procedural rights certain federal employees are entitled to when faced with adverse actions. Depending on the nature of misconduct, an agency may utilize alternative discipline approaches traditionally used in government to correct behavior. Alternative discipline is an approach to address misconduct that is available to agencies in lieu of traditional penalties (e.g., letters of reprimand and suspensions of 14 days or less). An example is a last chance agreement, whereby an employee recognizes the agency's right to terminate him or her should another act of misconduct occur. Based on the data collected by the Office of Personnel Management (OPM), agencies formally discipline an estimated 17,000 employees annually under Chapter 75, or less than 1 percent of the federal workforce, for misconduct. Based on OPM data, in 2016, agencies made 10,249 suspensions, 7,411 removals, and 114 demotions for misconduct. However, because of weaknesses in OPM's data on employee misconduct, which is provided by the agencies, OPM is unable to accurately target supervisory training to address misconduct, and decision-makers do not know the full extent or nature of this misconduct. Key lessons learned can help agencies better prevent and respond to misconduct. For example, tables of penalties provide a list of the infractions committed most frequently by agency employees, along with a suggested range of penalties for each, to ensure consistent treatment for similar offenses. However, not all agencies have a table of penalties, including OPM, nor are agencies required to have one by statute, case law, or OPM regulations. Subject-matter experts we contacted identified additional promising practices that agencies can use to respond to employee misconduct. Some of these practices are discussed in this report. Agencies are accountable for providing required training to their managers. However, agency officials and subject-matter experts we interviewed said federal managers may not address misconduct because they are unfamiliar with the disciplinary process, have inadequate training, or receive insufficient support from their human resources offices.
What GAO Recommends GAO recommends that OPM, working with the Chief Human Capital Officers Council, (1) take steps to improve the quality of data collected on misconduct; (2) leverage lessons learned to help agencies address misconduct; and (3) improve guidance on training supervisors and human resources staff on addressing misconduct. OPM partially concurred with two recommendations, and disagreed with the first, stating that its guidance has been successfully relied upon by agencies. GAO maintains the action is needed to help strengthen oversight.
Background

The cost of the census has been escalating over the last several decennials. The 2010 decennial was the costliest U.S. Census in history at about $12.3 billion, and was about 31 percent more costly than the $9.4 billion 2000 Census (in 2020 dollars). The average cost for counting a housing unit increased from about $16 in 1970 to around $92 in 2010 (in 2020 dollars). According to the Bureau, the total cost of the 2020 Census is estimated to be approximately $12.5 billion (in 2020 dollars). As discussed later in this statement, however, the cost of the 2020 Census will likely be higher than this current estimate.
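These escalation figures follow directly from the amounts cited above; the arithmetic below is our restatement of them (all amounts in 2020 dollars), not a separate Bureau calculation.

\[
\frac{\$12.3\text{B} - \$9.4\text{B}}{\$9.4\text{B}} \approx 0.31 \text{ (about 31 percent)}, \qquad \frac{\$92}{\$16} \approx 5.8 \text{ (growth in per-housing-unit cost, 1970 to 2010)}.
\]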
Meanwhile, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010 (see figure 1). Declining mail response rates—a key indicator in determining the cost-effectiveness of the census—are significant and lead to higher costs. This is because the Bureau sends temporary workers to each non-responding household to obtain census data. As a result, non-response follow-up is the Bureau's largest and most costly field operation. In many ways, the Bureau has had to invest substantially more resources each decade to match the results of prior enumerations. Further, achieving a complete and accurate census is becoming an increasingly daunting task, in part, because the nation's population is growing larger, more diverse, and more reluctant to participate. When the census misses a person who should have been included, it results in an undercount; conversely, an overcount occurs when an individual is counted more than once. Such errors are particularly problematic because of their impact on various subgroups. Minorities, renters, and children, for example, are more likely to be undercounted by the census. The Bureau faces an additional challenge of locating unconventional and hidden housing units, such as converted basements and attics. For example, as shown in figure 2, what appears to be a small, single-family house could contain an apartment, as suggested by its two doorbells. If an address is not in the Bureau's address file, its residents are less likely to be included in the census.

The Bureau Has Redesigned the 2020 Census to Help Control Costs

The basic design of the enumeration—mail out and mail back of the census questionnaire with in-person follow-up for non-respondents—has been in use since 1970. However, a key lesson learned from the 2010 Census and earlier enumerations is that this "traditional" design is no longer capable of cost-effectively counting the population. In response to its own assessments, our recommendations, and studies by other organizations, the Bureau has fundamentally re-examined its approach for conducting the 2020 Census. Specifically, its plan for 2020 includes four broad innovation areas (re-engineering field operations, using administrative records, verifying addresses in-office, and developing an Internet self-response option). The Bureau has estimated that these innovations could result in savings of over $5 billion (in 2020 dollars) when compared to its estimates of the cost for conducting the census with traditional methods. However, in June 2016, we reported that the Bureau's life-cycle cost estimate of $12.5 billion, developed in October 2015, was not reliable and did not adequately account for risk, as discussed later in this statement.

Bureau Plans to Use IT to Drive Innovation

To help drive these innovations, the Bureau plans to rely on both new and legacy IT systems and infrastructure. For example, the Bureau is developing or modifying 11 IT systems as part of an enterprise-wide initiative called Census Enterprise Data Collection and Processing (CEDCaP), which is managed within the Bureau's IT Directorate. This initiative is a large and complex modernization program intended to deliver a system-of-systems to support all of the Bureau's survey data collection and processing functions, rather than continuing to rely on unique, survey-specific systems with redundant capabilities. In addition, according to Bureau officials, the 2020 Census Directorate or other Bureau divisions are developing or modifying 32 other IT systems. To help inform, validate, and refine the operational design of the 2020 Census, and to test several of the IT systems, the Bureau has held a series of operational tests since 2012. Among these, in March 2017, the Bureau conducted a nationwide test (referred to as the 2017 Census Test) of households responding to census questions using paper, the Internet, or the phone. This test evaluated key new IT components, such as the Internet self-response system and the use of a cloud-based infrastructure. The Bureau is currently conducting the 2018 End-to-End Test, which began in August 2017 and runs through April 2019. It is the Bureau's final opportunity to test all key systems and operations to ensure readiness for the 2020 Census. The Bureau's plans for this test include, among other things, address canvassing, self-response (via paper, Internet, and phone), and nonresponse follow-up. To support its 2018 End-to-End Test, the Bureau plans to deploy and use 43 systems incrementally to support nine operations from December 2016 through the end of the test in April 2019. These nine operations are: (1) in-office address canvassing, (2) recruiting staff for address canvassing, (3) training for address canvassing, (4) in-field address canvassing, (5) recruiting staff for field enumeration, (6) training for field enumeration, (7) self-response (i.e., Internet, phone, or paper), (8) field enumeration, and (9) tabulation and dissemination. Appendix I includes additional details about the 43 systems, the operation or operations they support, and key deployment dates.

The Bureau Needs to Manage Risks of Implementing Innovations

The Bureau Plans Four Innovation Areas for 2020, but Has Scaled Back Key Census Tests

The four innovation areas the Bureau plans for 2020 show promise for a more cost-effective head count (see table 1). However, the innovations also introduce new risks, in part, because they include new procedures and technology that have not been used extensively in earlier decennials, if at all. Our prior work has shown the importance of the Bureau conducting a robust testing program, including the 2018 End-to-End Test. However, because of funding uncertainty, the Bureau canceled the field components of the 2017 Census Test, including non-response follow-up, a key census operation. In November 2016, we reported that the cancelation of the 2017 field tests was a lost opportunity to test, refine, and integrate operations and systems, and that it put more pressure on the 2018 End-to-End Test to demonstrate that enumeration activities will function under census-like conditions as needed for 2020.
In May 2017, the Bureau scaled back the operational scope of the 2018 End-to-End Test and, of the three planned test sites, only the Rhode Island site would fully implement the 2018 End-to-End Test. The Washington and West Virginia state test sites would test address canvassing. In addition, due to budgetary concerns, the Bureau decided to remove three coverage measurement operations (and the technology that supports them) from the scope of the test. Without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost, in part because the 2018 End-to-End Test is the last opportunity to demonstrate census technology and procedures across a range of geographic locations, housing types, and demographic groups.

New Uses of Administrative Records Are Promising, but Introduce Challenges

Administrative records—information already provided to the government as it administers other programs, such as mail collection by the U.S. Postal Service—have been discussed and used for the decennial census since the 1970s, and for 2020 the Bureau plans a more significant role for them. In July 2017, we reported that the Bureau had taken steps to ensure that its use of administrative records would lower the cost and improve the accuracy of the 2020 Census. For example, the Bureau set a rule that it would only use administrative records to count a household when a minimum amount of information was present within data sources. According to the Bureau, this would help ensure that administrative records are used only in circumstances where research has shown them to be most accurate. Additionally, before using any administrative records to support census operations, the Bureau determined it will subject each source to a quality assurance process that includes, among other things, basic checks for data integrity as well as assessments by subject matter experts of the information's fitness for various uses by the Bureau. (See figure 3.) According to the Bureau, it links administrative records data sources to complement each other, improving their reliability and completeness. The Bureau also creates an anonymous personal identifier for each individual in the data to reduce the risk of disclosure once the data are linked across sources. In July 2017, we reported that the Bureau had already tested the uses of administrative records that hold the most potential for reducing census costs, such as counting people who did not respond to census mailings. The Bureau planned to test additional applications of administrative records for the first time during the 2018 End-to-End Test. For example, the Bureau planned to use administrative records to support quality control during its non-response field enumeration. The Bureau planned to compare response data collected by enumerators to administrative records and flag significant differences based on predefined rules. The differences might be in the total count of persons in a household or in specific combinations of personal characteristics, such as age or race. According to the Bureau, flagging such differences could be used to help identify which enumeration cases to reinterview as part of the quality control operation.
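A minimal sketch of how such rule-based flagging might work is shown below. The field names, the two-year age tolerance, and the keyed-hash anonymous identifier are our illustrative assumptions; this statement does not describe the Bureau's actual predefined rules or anonymization method at this level of detail.

```python
import hashlib
import hmac

SECRET_KEY = b"illustrative-key"  # stand-in for a protected keying secret

def anonymous_id(name: str, date_of_birth: str) -> str:
    """Derive a stable, anonymized identifier so that records can be
    linked across data sources without exposing the underlying PII."""
    message = f"{name}|{date_of_birth}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def flag_household(enumerated: dict, admin: dict) -> list:
    """Compare an enumerator's household response to administrative
    records; return the rules (if any) that suggest a reinterview."""
    reasons = []
    # Rule 1: the total count of persons in the household differs.
    if enumerated["person_count"] != admin["person_count"]:
        reasons.append("person count mismatch")
    # Rule 2: a reported age differs from the records by more than 2 years.
    for person, age in enumerated["ages"].items():
        admin_age = admin["ages"].get(person)
        if admin_age is not None and abs(age - admin_age) > 2:
            reasons.append("age mismatch for person " + person[:8])
    return reasons

# Example: one household, keyed by anonymized person identifiers.
pid = anonymous_id("Jane Doe", "1980-01-01")
enumerated = {"person_count": 3, "ages": {pid: 44}}
admin = {"person_count": 2, "ages": {pid: 37}}
print(flag_household(enumerated, admin))  # both rules fire
```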
However, we reported in October 2015 that the Bureau faced other challenges with using administrative records for the 2020 Census. For example, although the Bureau has no control over the accuracy of data provided to it by other agencies, it is responsible for ensuring that the data it uses for the 2020 Census are of sufficient quality for their planned uses. Another challenge we identified in 2015 is the extent to which the public will accept government agencies sharing personal data for the purposes of the census. The Bureau has recognized these challenges within its risk registers.

The Bureau Has Fundamentally Re-Engineered Address Canvassing for 2020

In-Office Address Canvassing. The Bureau has re-engineered its approach to building its master address list for 2020. Specifically, by relying on multiple sources of imagery and administrative data, the Bureau anticipates constructing its address list with far less door-to-door field canvassing compared to previous censuses. One major change the Bureau has made consists of using in-office address canvassing—a two-phase process that was to systematically review small geographic areas nationwide, known as census blocks, to identify those that will not need to be canvassed in the field, as shown in figure 4. The Bureau estimated that the two phases of in-office canvassing would have resulted in roughly 25 percent of housing units requiring in-field canvassing, instead of canvassing nearly all housing units in the field as done in prior decennials. With in-office address canvassing, census workers compare current aerial imagery for a given block with imagery for that block dating to the time of the last decennial census in 2010. During this first phase, called Interactive Review, specially trained census workers identify whether a block appears to have experienced change in the number of housing units, flagging each block either as stable—free of population growth, decline, or uncertainty in what is happening in the imagery over time—or "active," in which case it moves to the next phase. Addresses in stable blocks are not marked for in-field canvassing. For blocks where change is detected or suspected, the Bureau was to use a second phase of in-office canvassing, known as Active Block Resolution, to attempt to resolve the status of each address and housing unit in question within that block. During this phase, census workers use aerial imagery, street imagery, and data from the U.S. Postal Service, as well as from state, local, and tribal partners, when reviewing blocks. If a block can be fully resolved during this phase of in-office canvassing, the changes are recorded in the Bureau's master address file. If a block cannot be fully resolved during the second phase of in-office canvassing, then the entire block, or some portion of the block, is flagged for inclusion in the in-field canvassing operation. A first pass of the entire country for in-office address canvassing began in September 2015 and was completed in June 2017. In-field canvassing for the 2020 Census is scheduled to begin in August 2019.
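The two-phase triage described above can be summarized as a simple decision rule. The sketch below is our simplified reading of the workflow as described in this statement, not Bureau software.

```python
from enum import Enum

class Disposition(Enum):
    NO_FIELD_WORK = "stable block; no in-field canvassing needed"
    RESOLVED_IN_OFFICE = "changes recorded in the master address file"
    IN_FIELD = "block (or a portion) flagged for in-field canvassing"

def triage_block(change_detected: bool, resolved_by_abr: bool) -> Disposition:
    """Simplified in-office address canvassing triage.

    Phase 1 (Interactive Review) compares current imagery to 2010-era
    imagery; stable blocks skip field work. Phase 2 (Active Block
    Resolution) tries to resolve 'active' blocks with imagery and
    partner data; blocks it cannot resolve go to in-field canvassing."""
    if not change_detected:      # Phase 1: block appears stable
        return Disposition.NO_FIELD_WORK
    if resolved_by_abr:          # Phase 2 resolved the block in office
        return Disposition.RESOLVED_IN_OFFICE
    return Disposition.IN_FIELD  # unresolved: canvass in the field

print(triage_block(change_detected=True, resolved_by_abr=False))
```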
However, in July 2017 we reported that the Bureau altered its design for re-engineered address canvassing because of budget uncertainty by suspending the second phase of in-office address canvassing. Without the second phase of in-office address canvassing, blocks that are not resolved by phase one will have a greater chance of requiring in-field canvassing. Bureau officials told us at that time that they anticipated that canceling the second phase of in-office address canvassing altogether would increase their estimated in-field canvassing workload by 5 percentage points, from 25 percent to 30 percent of housing units—increasing costs. The Bureau did not develop cost and quality information on address canvassing projects, and detailed information on cost tradeoffs was not available when we requested it. The information the Bureau had did not break out the estimated cost of the different phases of in-office address canvassing through 2020. However, the total estimated cost for both phases one and two was approximately $22 million. Thus, this suspension might save a portion of the $22 million, but it will potentially increase the cost of the address canvassing operation downstream. Our July 2017 report recommended, and the Bureau agreed, that the Bureau should use its evaluations before 2020 to determine the implications of in-office address canvassing for the cost and quality of address canvassing, and use this information to justify decisions related to its re-engineered address canvassing approach.

In-Field Address Canvassing for the 2018 End-to-End Test. On August 28, 2017, temporary census employees known as address listers began implementing the in-field component of address canvassing for the 2018 End-to-End Test. Listers walked the streets of designated census blocks at all three test sites to verify addresses and geographic locations. The operation ended on September 27, 2017. As part of our ongoing work, we visited all three test sites and observed 18 listers conduct address canvassing. Generally, we found that listers were able to conduct address canvassing as planned. However, we also noted several challenges. We shared the following preliminary observations from our site visits with the Bureau:

Internet connectivity was problematic at the West Virginia test site. We spoke to four census field supervisors who described certain areas as dead spots where Internet and cell phone service were not available. We were also told by those same supervisors that only certain cell service providers worked in certain areas. In order to access the Internet or cell service in those areas, census workers sometimes needed to drive several miles.

The allocation of lister assignments was not always optimal. Listers were supposed to be provided assignments close to where they live in order to optimize their local knowledge and to limit the number of miles driven by listers to and from their assignment area. Bureau officials told us this was a challenge at all three test sites. Moreover, at one site the area census manager told us that some listers were being assigned work in another county even though blocks were still unassigned closer to where they resided. Relying on local knowledge and limiting the number of miles driven can increase both the efficiency and effectiveness of address canvassing.

The assignment of some of the large blocks early in the operation was not occurring as planned. At all three 2018 End-to-End Test sites, Bureau managers had to manually assign some large blocks (some blocks had hundreds of housing units). It is important to assign large blocks early on because leaving the large blocks to be canvassed until the end of the operation could jeopardize the timely completion of address canvassing.

The global positioning system-derived location for the lister did not always correspond to the location on the map.
A Bureau official confirmed that at all three test sites, the location icon jumped around or was on the wrong street. According to a Bureau official, listers were told to override the global positioning system-derived location when confirming the geographic location of the residence. We have discussed these challenges with Bureau officials, who stated that overall they are satisfied with the implementation of address canvassing but also agreed that resolving challenges discovered during address canvassing, some of which can affect the operation's efficiency and effectiveness, will be important before the 2020 Census. We will continue to monitor the address canvassing operation and plan to issue a report in the winter of 2018.

The Bureau Continues to Face Challenges in Implementing and Securing Key IT Systems

The Bureau Continues to Face Challenges Implementing and Managing IT Systems

We have previously reported that the Bureau faced challenges in managing and overseeing IT programs, systems, and contractors supporting the 2020 Census. Specifically, it has been challenged in managing schedules, costs, contracts, and governance and internal coordination for its IT systems. As a result of these challenges, the Bureau is at risk of being unable to fully implement key IT systems necessary to support the 2020 Census. We have previously recommended that the Bureau take action to improve its implementation and management of IT in areas such as governance and internal coordination. We also have ongoing work reviewing each of these areas. Our ongoing work has indicated that the Bureau faces significant challenges in managing the schedule for developing and testing systems for the 2018 End-to-End Test that began in August 2017. In this regard, the Bureau still has significant development and testing work that remains to be completed. As of August 2017, of the 43 systems in the test, the Bureau reported that 4 systems had completed development and integration testing, while the remaining 39 systems had not completed these activities. Of these 39 systems, the Bureau reported that it had deployed a portion of the functionality for 21 systems to support address canvassing for the 2018 End-to-End Test; however, it had not yet deployed any functionality for the remaining 18 systems for the test. Figure 5 summarizes the development and testing status for the 43 systems planned for the 2018 End-to-End Test, and appendix I includes additional information on the status of development and testing for these systems. Moreover, due to challenges experienced during systems development, the Bureau has delayed key IT milestone dates (e.g., dates to begin integration testing) by several months for the systems supporting six of the nine operations in the 2018 End-to-End Test. Figure 6 depicts the delays to the deployment dates for the operations in the 2018 End-to-End Test, as of August 2017. However, our ongoing work also indicates that the Bureau is at risk of not meeting the updated milestone dates. For example, in June 2017 the Bureau reported that at least two of the systems expected to be used in the self-response operation (the Internet self-response system and the call center system) are at risk of not meeting the delayed milestone dates. In addition, in September 2017 the Bureau reported that at least two of the systems expected to be used in the field enumeration operation (the enumeration system and the operational control system) are at risk of not meeting their delayed dates.
Combined, these delays reduce the time available to conduct the security reviews and approvals for the systems being used in the 2018 End-to-End Test. We previously testified in May 2017 that the Bureau faced similar challenges leading up to the 2017 Census Test, including experiencing delays in system development that led to compressed time frames for security reviews and approvals. Specifically, we noted that the Bureau did not have time to thoroughly assess the low-impact components of one system and complete penetration testing for another system prior to the test, but accepted the security risks and uncertainty due to compressed time frames. We concluded that, for the 2018 End-to-End Test, it will be important that these security assessments are completed in a timely manner and that risks are at an acceptable level before the systems are deployed. The Bureau noted that, if it continues to be behind schedule, field operations for the 2018 End-to-End Test will not be performed as planned. Bureau officials are evaluating options to decrease the impact of these delays on integration testing and security review activities by, for example, utilizing additional staff. We have ongoing work reviewing the Bureau's development and testing delays and the impacts of these delays on systems readiness for the 2018 End-to-End Test.

The Bureau faces challenges in reporting and controlling IT cost growth. In April 2017, the Bureau briefed us on its efforts to estimate the costs for the 2020 Census, during which it presented IT costs of about $2.4 billion from fiscal years 2018 through 2021. Based on this information and other corroborating IT contract information provided by the Bureau, we testified in May 2017 that the Bureau had identified at least $2 billion in IT costs. However, in June 2017, Bureau officials in the 2020 Census Directorate told us that the data they provided in April 2017 did not reflect all IT costs for the 2020 program. The officials provided us with an analysis of the Bureau's October 2015 cost estimate that identified $3.4 billion in total IT costs from fiscal years 2012 through 2023. These costs included, among other things, those associated with system engineering, test and evaluation, and infrastructure, as well as a portion of the costs for the CEDCaP program. Yet, our ongoing work determined that the Bureau's $3.4 billion cost estimate does not reflect its current plans for acquiring IT to be used during the 2020 Census and that the related costs are likely to increase:

In August 2016, the Bureau awarded a technical integration contract for about $886 million, a cost that was not reflected in the $3.4 billion expected IT costs. More recently, in May 2017, we testified that the scope of work for this contract had increased since the contract was awarded; thus, the corresponding contract costs were likely to rise above $886 million as well.

In March 2017, the Bureau reported that the contract associated with the call center and IT system to support the collection of census data over the phone was projected to overrun its initial estimated cost by at least $40 million.

In May 2017, the Bureau reported that the CEDCaP program's cost estimate was increasing by about $400 million—from its original estimate of $548 million in 2013 to a revised estimate of $965 million in May 2017.
In June 2017, the Bureau awarded a contract for mobile devices and associated services for about $283 million, an amount that is about $137 million higher than the cost for these devices and services identified in its October 2015 estimate.

As a result of these factors, the Bureau's $3.4 billion estimate of IT costs is likely to be at least $1.4 billion higher, thus increasing the total costs to at least $4.8 billion. Figure 7 identifies the Bureau estimate of total IT costs associated with the 2020 program as of October 2015, as well as anticipated cost increases as of August 2017.
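The "at least $1.4 billion" figure is consistent with summing the increases cited above; the tally below is our reading of those figures rather than a Bureau computation (the CEDCaP increase is shown at the rounded $400 million the Bureau reported).

\[
\$886\text{M} + \$40\text{M} + \$400\text{M} + \$137\text{M} \approx \$1.46\text{B}, \qquad \$3.4\text{B} + \$1.4\text{B} = \$4.8\text{B}.
\]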
IT cost information that is accurately reported and clearly communicated is necessary so that Congress and the public have confidence that taxpayer funds are being spent in an appropriate manner. However, changes in the Bureau's reporting of these total costs, combined with cost growth since the October 2015 estimate, raise questions as to whether the Bureau has a complete understanding of the IT costs associated with the 2020 program. In this regard, we have previously reported on issues with the Bureau's cost estimating practices (which are discussed in more detail later in this statement). To address these issues, in October 2017, officials stated that the Bureau is developing a new cost estimate for the entire 2020 Census program, which they expect to release by the end of this fall.

Our ongoing work also determined that the Bureau faces challenges in managing its significant contractor support. The Bureau is relying on contractor support in many key areas of the 2020 Census. For example, it is relying on contractors to develop a number of key systems and components of the IT infrastructure. These activities include (1) developing the IT platform that is to be used to collect data from a majority of respondents—those using the Internet, telephone, and non-response follow-up activities; (2) procuring the mobile devices and cellular service to be used for non-response follow-up; and (3) developing the infrastructure in the field offices. According to Bureau officials, contractors are also providing support in areas such as fraud detection, cloud computing services, and disaster recovery. In addition to the development of key technology, the Bureau is relying on contractor support for integrating all of the key systems and infrastructure. The Bureau awarded a contract to integrate the 2020 Census systems and infrastructure in August 2016. The contractor's work was to include evaluating the systems and infrastructure and acquiring the infrastructure (e.g., cloud or data center) to meet the Bureau's scalability and performance needs. It was also to include integrating all of the systems, supporting technical testing activities, and developing plans for ensuring the continuity of operations. Since the contract was awarded, the Bureau has modified the scope to also include assisting with operational testing activities, conducting performance testing for two Internet self-response systems, and providing technical support for the implementation of the paper data capture system. However, our ongoing work has indicated that the Bureau is facing staffing challenges that could impact its ability to manage and oversee the technical integration contractor. Specifically, the Bureau is managing the integration contractor through a government program management office, but this office is still filling vacancies. As of October 2017, the Bureau reported that 35 of the office's 58 federal employee positions were vacant. As a result, this program management office may not be able to provide adequate oversight of contractor cost, schedule, and performance.

The delays during the 2017 Census Test and preparations for the 2018 End-to-End Test raise concerns regarding the Bureau's ability to effectively perform contractor management. As we reported in November 2016, a greater reliance on contractors for these key components of the 2020 Census requires the Bureau to focus on sound management and oversight of the key contracts, projects, and systems. As part of our ongoing work, we plan to monitor the Bureau's progress in managing its contractor support.

Effective IT governance can drive change, provide oversight, and ensure accountability for results. Further, effective IT governance was envisioned in the provisions referred to as the Federal Information Technology Acquisition Reform Act (FITARA), which strengthened and reinforced the role of the departmental CIO. To ensure executive-level oversight of the key systems and technology, the Bureau's CIO (or a representative) is a member of the governance boards that oversee all of the operations and technology for the 2020 Census. However, in August 2016 we reported on challenges the Bureau has had with IT governance and internal coordination, including weaknesses in its ability to monitor and control IT project costs, schedules, and performance. We made eight recommendations to the Department of Commerce to direct the Bureau to, among other things, better ensure that risks are adequately identified and schedules are aligned. The department agreed with our recommendations. However, as of October 2017, the Bureau had only fully implemented one recommendation and had taken initial steps toward implementing others. Further, given the schedule delays and cost increases previously mentioned, and the vast amount of development, testing, and security assessments left to be completed, we remain concerned about executive-level oversight of systems and security. Moving forward, it will be important that the CIO and other agency executives continue to use a collaborative governance approach to effectively manage risks and ensure that the IT solutions meet the needs of the agency within cost and schedule. As part of our ongoing work, we plan to monitor the steps the Bureau is taking to effectively oversee and manage the development and acquisition of its IT systems.

The Bureau Has Significant Information Security Steps to Complete for the 2018 End-to-End Test

In November 2016, we described the significant challenges that the Bureau faced in securing systems and data for the 2020 Census, and we noted that tight time frames could exacerbate these challenges. Two such challenges were (1) ensuring that individuals gain only limited and appropriate access to the 2020 Census data, including personally identifiable information (PII) (e.g., name, address, and date of birth), and (2) making certain that security assessments were completed in a timely manner and that risks were at an acceptable level. Protecting PII, for example, is especially important because a majority of the 43 systems to be used in the 2018 End-to-End Test contain PII, as reflected in figure 8. To address these and other challenges, federal law and guidance specify requirements for protecting federal information and information systems, such as those to be used in the 2020 Census.
Specifically, the Federal Information Security Management Act of 2002 and the Federal Information Security Modernization Act of 2014 (FISMA) require executive branch agencies to develop, document, and implement an agency-wide program to provide security for the information and information systems that support the operations and assets of the agency. Accordingly, the National Institute of Standards and Technology (NIST) developed risk management framework guidance for agencies to follow in developing information security programs. Additionally, the Office of Management and Budget's (OMB) revised Circular A-130 on managing federal information resources required agencies to implement the NIST risk management framework to integrate information security and risk management activities into the system development life cycle. In accordance with FISMA, NIST guidance, and OMB guidance, the Office of the CIO established a risk management framework. This framework requires that system developers ensure that each of the systems undergoes a full security assessment, and that system developers remediate critical deficiencies. In addition, according to the Bureau's framework, system developers must ensure that each component of a system has its own system security plan, which documents how the Bureau plans to implement security controls. As a result, system developers for a single system might develop multiple system security plans (in some cases as many as 34 plans), which all have to be approved as part of the system's complete security documentation. We have ongoing work that is reviewing the extent to which the Bureau's framework meets the specific requirements of the NIST guidance. According to the Bureau's framework, each of the 43 systems in the 2018 End-to-End Test will need to have complete security documentation (such as system security plans) and an approved authorization to operate prior to its use in the 2018 End-to-End Test. However, our ongoing work indicates that, while the Bureau is completing these steps for the 43 systems to be used in the 2018 End-to-End Test, significant work remains. Specifically:

None of the 43 systems are fully authorized to operate through the completion of the 2018 End-to-End Test. Bureau officials from the CIO's Office of Information Security stated that these systems will need to be reauthorized because, among other things, they have additional development work planned that may require the systems to be reauthorized; are being moved to a different infrastructure environment (e.g., from a data center to a cloud-based environment); or have a current authorization that expires before the completion of the 2018 End-to-End Test. The amount of work remaining is concerning because the test has already begun, and the delays experienced in system development and testing mentioned earlier reduce the time available for performing the security assessments needed to fully authorize these systems before the completion of the 2018 End-to-End Test.

Thirty-seven systems have a current authorization to operate, but the Bureau will need to reauthorize these systems before the completion of the 2018 End-to-End Test. This is due to the reasons mentioned previously, such as additional development work planned and changes to the infrastructure environments.

Two systems have not yet obtained an authorization to operate.

For the remaining four systems, the Bureau has not yet provided us with documentation about the current authorization status.
Figure 9 depicts the authorization to operate status for the systems being used in the 2018 End-to-End Test, as reported by the Bureau. Because many of the systems that will be a part of the 2018 End-to-End Test are not yet fully developed, the Bureau has not finalized all of the security controls to be implemented; assessed those controls; developed plans to remediate control weaknesses; and determined whether there is time to fully remediate any deficiencies before the systems are needed for the test. In addition, as discussed earlier, the Bureau is facing system development challenges that are delaying the completion of milestones and compressing the time available for security testing activities.

As we previously reported, while the large-scale technological changes (such as Internet self-response) increase the likelihood of efficiency and effectiveness gains, they also introduce many information security challenges. The 2018 End-to-End Test also involves collecting PII on hundreds of thousands of households across the country, which further increases the need to properly secure these systems. Thus, it will be important that the Bureau provides adequate time to perform these security assessments, completes them in a timely manner, and ensures that risks are at an acceptable level before the systems are deployed. We plan to continue monitoring the Bureau's progress in securing its IT systems and data as part of our ongoing work.

The Bureau Needs to Improve the Reliability of Its 2020 Cost Estimate

2020 Census Cost Estimate Does Not Reflect Best Practices

In June 2016, we reported that the Bureau's October 2015 update of its life-cycle cost estimate for the 2020 Census did not conform to the four characteristics that constitute best practices, and, as a result, the estimate was unreliable. Cost estimates that appropriately account for risks facing an agency can help an agency manage large, complex activities like the 2020 Census, as well as help Congress make funding decisions and provide oversight. Cost estimates are also necessary to inform decisions to fund one program over another, to develop annual budget requests, to determine what resources are needed, and to develop baselines for measuring performance.

In June 2016, we reported that, although the Bureau had taken steps to improve its capacity to carry out an effective cost estimate, such as establishing an independent cost estimation office, its October 2015 version of the estimate for the 2020 Census only partially met the characteristics of two best practices (comprehensive and accurate) and minimally met the other two (well-documented and credible). All four characteristics need to be substantially met in order for an estimate to be deemed high-quality:

Comprehensive. To be comprehensive an estimate should have enough detail to ensure that cost elements are neither omitted nor double-counted, and all cost-influencing assumptions are detailed in the estimate's documentation, among other things, according to best practices. In June 2016, we reported that, while Bureau officials were able to provide us with several documents that included projections and assumptions that were used in the cost estimate, we found the estimate to be partially comprehensive because it was unclear if all life-cycle costs were included in the estimate or if the cost estimate completely defined the program.

Accurate. Accurate estimates are unbiased and contain few mathematical mistakes.
We reported in June 2016 that the estimate partially met best practices for this characteristic, in part because we could not independently verify the calculations the Bureau used within its cost model, which the Bureau had not documented or explained outside the model itself.

Well-documented. Cost estimates are considered valid if they are well-documented to the point they can be easily repeated or updated and can be traced to original sources through auditing, according to best practices. In June 2016, we reported that, while the Bureau provided some documentation of supporting data, it did not describe how the source data were incorporated.

Credible. Credible cost estimates must clearly identify limitations due to uncertainty or bias surrounding the data or assumptions, according to best practices. In June 2016, we reported that the estimate minimally met best practices for this characteristic, in part because the Bureau carried out its risk and uncertainty analysis for only about $4.6 billion (37 percent) of the $12.5 billion total estimated life-cycle cost, excluding, for example, uncertainty over the decennial census's estimated share of the total cost of CEDCaP.

In June 2016, we recommended that the Bureau take action to ensure its 2020 Census cost estimate meets all four characteristics of a reliable cost estimate. The Bureau agreed with our recommendation. We also reported in June 2016 that risks were not properly accounted for in the cost estimate and recommended that the Bureau properly account for risk to ensure there are appropriate levels for budgeted contingencies; those recommendations have not yet been implemented. In October 2017, Bureau officials told us they were making progress toward implementing our recommendations and would provide us with that documentation when the cost estimate and supporting documentation are finalized. Moreover, Bureau officials also told us that an updated cost estimate would be available by the end of this fall. However, until the Bureau updates its estimate and we have the opportunity to review its reliability, questions will surround the quality of the 2020 Census cost estimate and the basis for any 2020 Census annual budgetary figures.

The Cost of the 2020 Census Will Likely Be Higher Than Originally Planned

While the Bureau has not updated its October 2015 cost estimate, several events since then indicate that the cost of the current design will be higher. For example:

As previously mentioned, in August 2016 an $886 million IT integration contract was awarded. According to Bureau officials, there was no reference to this contract in the documentation for the planned contract costs supporting the October 2015 life-cycle cost estimate.

In March 2017, the Bureau suspended part of its in-office procedures for verifying addresses using on-screen imagery—one of its four key design innovations intended to control the cost of the 2020 Census. According to Bureau officials, the suspension of this part of in-office canvassing will increase the workload of the more expensive in-field canvassing (door-to-door address identification) by at least five percentage points, from 25 percent to 30 percent of housing units—increasing the cost over what had been assumed as part of the earlier cost estimate.
Based on cost assumptions underlying its October 2015 life-cycle cost estimate, we found, as part of our prior work, that the potential addition of five percentage points to the field workload alone could reduce the Bureau's cost savings by $26.6 million. As discussed earlier, in May 2017, Bureau officials reported that the cost of the CEDCaP program has increased by over $400 million, from about $548 million to $965 million.

2020 Census Cost Estimate May Not Fully Inform Annual Budget Requests

Cost estimates are also used by the Bureau as a tool to inform the annual budget process. However, since the Bureau did not fully follow best practices for developing and maintaining the life-cycle cost estimate, as previously described, annual budget requests based on that cost estimate may not be fully informed. A high-quality cost estimate is the foundation of a good budget. A major purpose of a cost estimate is to support the budget process by providing an estimate of the funding required to efficiently execute a program. Because most programs do not remain static but evolve over time, developing a cost estimate should not be a onetime event but rather a recurrent process. Effective program and cost control requires ongoing revisions to the cost estimate and budget. Using a reliable life-cycle cost estimate to formulate the budget could help the Bureau ensure that all costs are fully accounted for so that resources are adequate to support the program. Credible cost estimates could also help the Bureau effectively defend budgets to the Department of Commerce, OMB, and Congress. Concerns about the soundness of the life-cycle cost estimate and the quality of annual budgets related to the 2020 Census are particularly important because the bulk of funds will be obligated in fiscal years 2019 through 2020. In our June 2016 report on the Bureau's life-cycle cost estimate, we made several recommendations with which the Bureau agreed. We will continue to monitor the Bureau's efforts to address these recommendations.

In conclusion, the Bureau has made progress in revamping its approach to the census and testing the new design. However, it faces considerable challenges and uncertainties in (1) implementing the cost-saving innovations; (2) managing the development and security of key IT systems; and (3) developing a quality cost estimate for the 2020 Census. For these reasons, the 2020 Census is a GAO high-risk area. Continued management attention is vital for ensuring risks are managed, the Bureau's preparations stay on track, and the Bureau is held accountable for implementing the enumeration as planned. We will continue to assess the Bureau's efforts to conduct a cost-effective enumeration and look forward to keeping Congress informed of the Bureau's progress.

Chairman Gowdy, Ranking Member Cummings, and Members of the Committee, this completes our prepared statement. We would be pleased to respond to any questions that you may have.

GAO Contacts and Staff Acknowledgments

If you have any questions about this statement, please contact David A. Powner at (202) 512-9286 or by e-mail at pownerd@gao.gov or Robert Goldenkoff at (202) 512-2757 or by e-mail at goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Other key contributors to this testimony include Lisa Pearson (Assistant Director); Jon Ticehurst (Assistant Director); Kate Sharkey (Analyst in Charge); Mark Abraham; Dewi Djunaidy; Hoyt Lacy; Andrea Starosciak; Umesh Thakkar; Timothy Wexler; and Katherine Wulff. Staff who made key contributions to the reports cited in this statement are identified in the source products.

Appendix I: Status as of August 2017 of Development and Integration Testing for Systems in the 2018 End-to-End Test

As part of its 2018 End-to-End Test, the Census Bureau (Bureau) plans to deploy 43 systems incrementally to support nine operations from December 2016 through the end of the test in April 2019. The nine operations are: (1) in-office address canvassing, (2) recruiting for address canvassing, (3) training for address canvassing, (4) in-field address canvassing, (5) recruiting for field enumeration, (6) training for field enumeration, (7) self-response (i.e., Internet, phone, or paper), (8) field enumeration, and (9) tabulation and dissemination. According to the Bureau, a single system may be deployed multiple times throughout the test (with additional or new functionality) if that system is needed for more than one of these operations. Table 1 describes the status as of August 2017 of development and integration testing for each system in the 2018 End-to-End Test. Specifically, as of August 2017, the Bureau had completed both development work and integration testing for 4 systems and was in the process of completing development and testing for 39 systems.
Why GAO Did This Study

One of the Bureau's most important functions is to conduct a complete and accurate decennial census of the U.S. population, which is mandated by the Constitution and provides vital data for the nation. A complete count of the nation's population is an enormous undertaking as the Bureau seeks to control the cost of the census, implement operational innovations, and use new and modified IT systems. In recent years, GAO has identified challenges that raise serious concerns about the Bureau's ability to conduct a cost-effective count. For these reasons, GAO added the 2020 Census to its high-risk list in February 2017. In light of these challenges, GAO was asked to testify about the Bureau's progress in preparing for the 2020 Census. To do so, GAO summarized its prior work regarding the Bureau's planning efforts for the 2020 Census. GAO also included observations from its ongoing work on the 2018 End-to-End Test. This information is related to, among other things, recent decisions on preparations for the 2020 Census; progress on key systems to be used for the 2018 End-to-End Test, including the status of IT security assessments; execution of the test at three test sites; and efforts to update the life-cycle cost estimate.

What GAO Found

The Census Bureau (Bureau) is planning several innovations for the 2020 Decennial Census, including re-engineering field operations, using administrative records to supplement census data, verifying addresses in-office using on-screen imagery, and allowing the public to respond using the Internet. These innovations show promise for controlling costs, but they also introduce new risks, in part because they include new procedures and technologies that have not been used extensively in earlier decennial censuses, if at all. GAO's prior work has emphasized the importance of the Bureau conducting a robust testing program to demonstrate that the systems and operations perform as intended under census-like conditions prior to the 2020 Census. However, because of budget uncertainties the Bureau canceled its 2017 field test and then scaled back its 2018 End-to-End Test, placing these innovation areas more at risk.

The Bureau continues to face challenges in managing and overseeing the information technology (IT) programs, systems, and contracts supporting the 2020 Census. For example, GAO's ongoing work indicates that the system development schedule leading up to the 2018 End-to-End Test has experienced several delays. Further, the Bureau has not yet addressed several security risks and challenges to secure its systems and data, including making certain that security assessments are completed in a timely manner and that risks are at an acceptable level. Given that certain operations for the 2018 End-to-End Test began in August 2017, it is important that the Bureau quickly address these challenges. GAO plans to monitor the Bureau's progress as part of its ongoing work.

In addition, the Bureau's cost estimate is not reliable and is out of date. Specifically, in June 2016, GAO reported that the cost estimate for the 2020 Census did not fully reflect characteristics of a high-quality estimate and could not be considered reliable. Moreover, since the Bureau did not follow cost estimation best practices, its annual budget requests based on the cost estimate may not be fully informed.
Additionally, the Bureau has not yet updated its October 2015 cost estimate, but GAO expects that the cost of the current census design (around $12.5 billion in 2020 constant dollars) will increase due to, for example, expected increases in 2020 program IT costs (see figure). GAO made several recommendations to address these concerns, and the Bureau plans to address these recommendations in an updated cost estimate to be released later this fall.

What GAO Recommends

Over the past 4 years, we have made 33 recommendations specific to the 2020 Census to address the issues raised in this testimony and others. As of October 2017, the Bureau had fully implemented 10 of the recommendations and was at varying stages of implementing the remaining recommendations.
Background

The SBIR program was initiated in 1982 and has four main purposes: (1) use small businesses to meet federal R&D needs, (2) stimulate technological innovation, (3) increase commercialization of innovations derived from federal R&D efforts, and (4) encourage participation in technological innovation by small businesses owned by women and disadvantaged individuals. The STTR program was initiated a decade later, in 1992, and has three main purposes: (1) stimulate technological innovation, (2) foster technology transfer through cooperative R&D between small businesses and research institutions, and (3) increase private-sector commercialization of innovations derived from federal R&D. The SBIR and STTR programs are similar in that participating agencies identify topics for R&D projects and support small businesses, but the STTR program requires the small business to partner with a nonprofit research institution, such as a college or university or a federally funded research and development center.

Each participating agency must manage its SBIR and STTR programs in accordance with program laws and regulations and the policy directives issued by SBA. In general, the programs are similar across participating agencies. All of the participating agencies follow the same general process to obtain proposals from and make awards to small businesses for both the SBIR and STTR programs. However, each participating agency has considerable flexibility in designing and managing specific aspects of these programs, such as determining research topics, selecting award recipients, and administering funding agreements. At least once a year, each participating agency issues a solicitation requesting proposals for projects in topic areas determined by the agency. Each participating agency uses its own process to review proposals and determine which proposals should receive awards. The agencies that participate in both SBIR and STTR programs usually use the same process for both programs. Also, each participating agency determines whether to provide the funding for awards as grants or contracts.

According to the policy directives, SBA maintains a system that records SBIR and STTR award information—using data submitted by the agencies—as well as commercialization information, such as information about patents, sales, and investments reported by small businesses that received these awards. SBA is to use these data to assess small businesses that received awards against the benchmarks and identify any small businesses that did not meet the benchmarks. SBA is to initially assess the small businesses against the benchmarks and then in April of each year notify those that do not meet the benchmarks so that the businesses can review their award data and work with participating agencies to correct the database if necessary. SBA then is to analyze the award data again to identify, on June 1, those small businesses that still do not meet the benchmarks. These small businesses are then ineligible for certain awards from that date through May 31 of the following year.

Data Challenges Have Limited the Implementation of the Benchmarks, and SBA and Participating Agencies Have Provided Inconsistent Information about the Consequence

SBA and Participating Agencies Assessed Small Businesses against the Transition Rate Benchmark, but the Assessments Have Been Based on Inaccurate or Incomplete Data

Data challenges have limited SBA's and the 11 participating agencies' efforts to fully implement the benchmarks.
Since 2014, SBA and the participating agencies have regularly assessed small businesses against the Transition Rate Benchmark, but the assessments have been based on inaccurate or incomplete data. SBA and the participating agencies have assessed small businesses against the Commercialization Benchmark only once, in 2014, because of challenges in collecting and verifying the accuracy of data. In addition, SBA and the participating agencies have provided inconsistent information to small businesses about the consequence of not meeting the benchmarks. Since 2014, SBA and the participating agencies have regularly assessed small businesses against the Transition Rate Benchmark, which, in general, measures the rate at which businesses move projects from phase I to phase II. From 2014 through 2017, SBA determined that 4 to 7 small businesses did not meet the benchmark each year and placed those businesses on a list of those ineligible to receive certain additional awards. However, we found instances in which the data used to generate the list were inaccurate or incomplete. For example, we identified an instance in which the data in the awards database changed considerably after SBA’s initial assessment, indicating that the data used for that assessment were inaccurate. SBA’s list of small businesses subject to the benchmark in 2015 showed that a small business received 297 phase I awards during the assessment period. However, data received from SBA officials in August 2017 showed that this small business received only 1 phase I award. Agencies can update their data in the awards database at any time to, for example, submit additional award data or correct previously submitted award data, which is what an SBA official stated may have caused this change. Because the small business received only 1 award, it would not have been subject to the Transition Rate Benchmark. In this case, the change meant that SBA did not miss identifying a small business that should have been ineligible for an award; however, in other instances, changes to the data may lead SBA to miss identifying a small business that should have been ineligible for awards. In addition, we identified instances in which the publicly available data on awards were incomplete, including data that were missing or otherwise unusable. For example, based on our review of the award data from 2007 through 2016, we identified more than 2,700 small businesses that had multiple records with different spellings of the same business’s name. Furthermore, we identified more than 1,400 instances in which a unique identification number had errors, such as having an incorrect number of digits, all zeros, or hyphens. SBA officials told us that the quality of the award information in the database has been an issue, and that accurate information is important because small businesses may avoid being identified as subject to the benchmark if their business names and identification numbers are different across multiple records. For example, if the database contains 18 phase I awards made within the assessment period to a small business with a certain unique identification number but also contains 3 other phase I awards within that period with a different or missing unique identification number, the small business may avoid being identified as subject to the benchmark because the data would suggest it did not meet the threshold of receiving more than 20 phase I awards, even if it did. 
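The record-linkage problem described above can be made concrete with a short sketch. The code below is a hypothetical illustration in Python; the award records, the 9-digit identifier format, and the field layout are assumptions for the example rather than SBA's actual data model. It shows how malformed identifiers can fragment a business's award count below the 20-award threshold.

```python
import re
from collections import Counter

def valid_id(uid: str) -> bool:
    """Reject identifiers like those GAO found in the awards database:
    an incorrect number of digits, all zeros, or stray hyphens
    (a 9-digit identifier format is assumed for this sketch)."""
    return bool(re.fullmatch(r"\d{9}", uid)) and set(uid) != {"0"}

# Hypothetical phase I award records within one assessment period:
# 18 records with a well-formed identifier plus 3 with a malformed variant.
awards = [("123456789", "Acme Research Inc.")] * 18 \
       + [("12345678-", "ACME Research, Inc.")] * 3

counts = Counter(uid for uid, _name in awards if valid_id(uid))
for uid, n in counts.items():
    # The Transition Rate Benchmark applies only above 20 phase I awards.
    print(f"{uid}: {n} phase I awards; subject to benchmark: {n > 20}")
```

In this illustration the business actually received 21 phase I awards, but the three records carrying the malformed identifier are never linked to it, so it appears to fall below the 20-award threshold and escapes the benchmark.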
As a result, it could be difficult to determine which small businesses actually received more than 20 awards and should be subject to the benchmark. Standards for Internal Control in the Federal Government state that management should use quality information to achieve the entity's objectives, and SBA's Information Quality Guidelines state that SBA seeks to ensure the quality, utility, and integrity of the information it shares with the public, among other things. SBA's policy directives for the SBIR and STTR programs state that SBA maintains a system that records SBIR and STTR award information, which is publicly available, and uses this information to calculate small businesses' performance against the benchmark. SBA officials told us they depend on the accuracy of the data received from the participating agencies to perform SBA's assessment. These officials also acknowledged that confirming the accuracy of SBA's annual assessments against the benchmarks has been challenging because agencies can update their data over time. SBA officials stated that they have sought to improve the quality of the data after the data are entered into the database, such as by fixing instances in which small businesses' names were spelled differently across multiple records; however, the officials said that correcting the data already entered in the awards database is an ongoing and time-consuming process. SBA officials told us that there are errors in the database, in part because SBA has not worked with participating agencies to ensure that agencies enter high-quality, accurate data into the database. SBA officials provided us with guidance on how to enter data, which they said is available to agencies, but the errors we found suggest that agencies are not fully utilizing this guidance. As a result, SBA cannot reasonably ensure the quality and reliability of its award data and therefore cannot reasonably ensure that it has correctly assessed small businesses against the Transition Rate Benchmark.

SBA and the Participating Agencies Assessed Small Businesses against the Commercialization Benchmark Only in 2014

The Small Business Act requires agencies to evaluate whether small businesses have met a minimum performance standard for commercializing their technology. SBA and participating agencies do not know the extent to which small businesses are meeting the Commercialization Benchmark because SBA and the agencies have assessed businesses against the benchmark only once, in 2014, when SBA determined that 12 businesses did not meet the benchmark. This is in part because, according to officials from SBA and several agencies, they cannot collect and verify the accuracy of the data needed to implement the benchmark as written. For SBA and participating agencies to assess whether small businesses meet the Commercialization Benchmark, these small businesses must provide data on sales, investments, or patents resulting from the awards. However, agency officials told us about challenges related to obtaining the data they need to implement this benchmark. For example, agency officials told us that the needed data are not consistently applicable across agencies or projects.
Specifically, these officials said that an agency may purchase the technology developed as a result of the SBIR or STTR award, while another agency may focus on funding technologies that will be sold on the commercial market, leading to different kinds of data on “sales.” Additionally, officials from SBA and several of the participating agencies told us they have been unable to collect and verify the accuracy of the information from small businesses to assess them against the Commercialization Benchmark. In addition, officials from 2 agencies told us that small businesses can easily circumvent the benchmark by submitting incorrect data. The Small Business Act and the policy directives provide agencies flexibility in how they can implement the Commercialization Benchmark. Officials from participating agencies said that they thought the Commercialization Benchmark should be revised, but they provided differing views on how to do it. Officials from SBA and 2 agencies told us that they would consider having individual agencies develop a benchmark or metric tailored to their agency, in part because the definition of successful commercialization could vary across the agencies. However, officials acknowledged that collecting and verifying the accuracy of the data would still be a concern with this approach. Officials from 2 participating agencies told us that collecting and verifying the accuracy of the data is a significant amount of work, and officials from a third agency added that implementing the benchmark independently is impractical because they do not have the capability to track small businesses’ commercialization efforts. Officials from 1 agency said they preferred to keep a uniform benchmark across the agencies, in part because having varying benchmarks could lead to a small business being eligible to participate in the programs with one agency but not with another. Although views differed across agencies, working together to find a way to implement the benchmark as designed or revising it so that it can be implemented could allow the agencies to fulfill the requirement in the Small Business Act. Officials from 3 agencies told us they would prefer to consider businesses’ prior commercialization experience as part of their overall evaluation of businesses’ proposals, rather than implement the current Commercialization Benchmark. The SBIR and STTR policy directives currently allow agencies to define the benchmark in terms other than revenue or investment, such as using a commercialization scoring system that rates awardees on their past commercialization success. Defining the benchmark in these terms could help agencies to implement the statutory requirement. Officials from SBA said they see the value of allowing reviewers to use professional judgment in determining the commercialization success of applicants, rather than assessing small businesses against standard criteria. Officials from 1 agency said that such a change could help achieve the goal of the benchmark without the challenges of collecting data from all small businesses participating in the programs. Nine of the 11 participating agencies currently consider prior commercialization experience as part of their evaluation when making award selections (see table 2), which shows that evaluating commercialization experience at individual agencies can be feasible. 
For example, project solicitations from the Department of Agriculture, the Department of Defense, and the National Science Foundation state that these agencies require applicants to provide sales or revenue information for products resulting from SBIR or STTR awards, and the Department of Homeland Security's solicitation requires applicants to provide a history of previous federal and nonfederal funding and subsequent commercialization of their products. All agencies consider commercialization potential when selecting these awards.

SBA and Participating Agencies Have Provided Inconsistent Information to Small Businesses on the Consequence of Not Meeting the Benchmarks

The consequence for small businesses not meeting the benchmarks is ineligibility to participate in phase I of the SBIR or STTR program for a year, according to the Small Business Act. SBA officials stated that they and the agencies initially interpreted this to mean that small businesses could not receive awards during the ineligibility period of June 1 through May 31 of the following year, and this is how the consequence is described in the SBIR and STTR policy directives. SBA officials told us that they and the participating agencies sought to change how to implement the consequence of businesses not meeting the benchmarks because of SBA's and agencies' difficulties in implementing the benchmarks. Officials from 4 agencies said that they generally evaluate and select awards shortly before SBA releases the list of ineligible companies, leading them to potentially select projects from small businesses that will be on the ineligible list by the time the award period begins. Based on our review of award data from October 2014 to May 2017, we identified 13 phase I awards across 5 small businesses with award start dates during the period that the business was ineligible to receive such awards. According to agency officials, each of these awards was selected before the small business became ineligible to receive the award.

SBA and the participating agencies agreed to change how the consequence would be implemented, starting in 2017, so that small businesses that do not meet the benchmarks are ineligible to submit proposals, according to SBA officials. As of November 2017, however, the information available about this new way to implement the consequence was inconsistent because some of the agencies had not updated their project solicitations. Specifically, information in the most recent project solicitations available at that time for 2 agencies and one subunit of an agency stated that businesses that do not meet the benchmarks are ineligible to submit certain proposals, consistent with the revised approach for how to implement the consequence. However, the most recent project solicitations available at that time for 7 other agencies and the other subunit of the agency mentioned above instead stated that those businesses that do not meet the benchmarks are ineligible to receive certain awards, consistent with the prior approach for how to implement the consequence. One other agency directed users to SBA's website in its solicitation. Table 3 shows the information about the consequence of not meeting the benchmarks that each agency included in its most recent project solicitations as of November 2017. As of November 2017, the SBIR and STTR policy directives stated that the consequence for not meeting these benchmarks is ineligibility to receive certain awards.
SBA officials told us they are in the process of updating the policy directives to reflect this change in how the consequence is implemented, but these officials said that it is a long process and they could not provide a timeframe for when the update would be complete. As mentioned earlier in this report, SBA's Information Quality Guidelines state that SBA seeks to ensure the quality, utility, and integrity of the information it shares with the public, among other things. Until participating agencies update their project solicitations and SBA updates its policy directives to accurately reflect agreed-upon practices about the consequence for small businesses that do not meet the benchmarks, small businesses may be confused about their eligibility to submit proposals and could invest time developing and submitting proposals when they are not eligible to do so.

Conclusions

Under the SBIR and STTR programs, federal agencies have awarded billions of dollars to small businesses to help these businesses develop and commercialize innovative technologies. SBA and the participating agencies have assessed these small businesses against the Transition Rate Benchmark, but those assessments have been based on inaccurate or incomplete data. Without ensuring the reliability of its data, SBA cannot reasonably ensure that it has correctly assessed small businesses against the Transition Rate Benchmark. SBA and the participating agencies developed a Commercialization Benchmark across all the participating agencies but have not fully implemented it, in part because they have been unable to collect information from the small businesses and verify the accuracy of that information. Working together to implement the benchmark as written or revise it so that it can be implemented could allow the agencies to fulfill the requirement in the Small Business Act to evaluate whether small businesses have met a minimum performance standard for commercializing their technology. Lastly, SBA and the participating agencies have provided inconsistent information to small businesses about the consequence of not meeting the benchmarks. Officials from SBA and the participating agencies had agreed to change how the consequence would be implemented, starting in 2017, because of difficulties implementing the benchmarks. However, as of November 2017, seven agencies, and a subunit of one agency, had not updated their project solicitations and SBA had not updated its policy directives. Without consistent information on the benchmarks, small businesses may be confused about their eligibility to submit proposals and could invest time developing proposals that they are not eligible to submit.

Recommendations for Executive Action

We are making a total of 11 recommendations, including 3 to SBA and 1 each to the Department of Commerce's National Oceanic and Atmospheric Administration; the Departments of Defense, Education, Energy, Health and Human Services, and Homeland Security; the Environmental Protection Agency; and the National Science Foundation. Specifically:

The Director of the Office of Investment and Innovation within SBA should work with participating agencies to improve the reliability of its SBIR and STTR award data (Recommendation 1).

The Director of the Office of Investment and Innovation within SBA should work with participating agencies to implement the Commercialization Benchmark or, if that is not feasible, revise the benchmark so that it can be implemented (Recommendation 2).
The Director of the Office of Investment and Innovation within SBA should update the SBIR and STTR policy directives to accurately reflect how the consequence of the benchmarks is to be implemented (Recommendation 3).

The SBIR Program Manager of the Department of Commerce's National Oceanic and Atmospheric Administration should update the agency's SBIR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 4).

The SBIR Program Administrator within the Department of Defense should update the agency's SBIR and STTR project solicitations to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 5).

The SBIR Program Manager within the Department of Education should update the agency's SBIR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 6).

The SBIR Program Manager within the Department of Energy should update the agency's combined SBIR and STTR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 7).

The SBIR/STTR Program Coordinator within the Department of Health and Human Services should update the agency's SBIR and STTR project solicitations to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 8).

The SBIR Program Director within the Department of Homeland Security should update the agency's SBIR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 9).

The SBIR Program Manager within the Environmental Protection Agency should update the agency's SBIR project solicitation to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 10).

The SBIR and STTR Program Manager within the National Science Foundation should update the agency's SBIR and STTR project solicitations to accurately reflect how the consequence of not meeting the benchmarks is to be implemented (Recommendation 11).

Agency Comments and Our Evaluation

We provided a draft of this report to SBA and the 11 participating agencies for review and comment. In written comments, the Department of Commerce's National Oceanic and Atmospheric Administration; the Departments of Defense, Education, Energy, Health and Human Services, and Homeland Security; the Environmental Protection Agency; and SBA agreed with the respective recommendations directed to their agencies. Agencies' written comments are reproduced in appendixes I through VIII. An official from one agency—the National Science Foundation—stated in an email that the agency concurred with the recommendation and did not have any further comments. Two agencies—the Department of Homeland Security and SBA—also provided technical comments, which we incorporated as appropriate. Three agencies—the Departments of Agriculture and Transportation, and the National Aeronautics and Space Administration—as well as the Department of Commerce's National Institute of Standards and Technology stated via email that they had no technical or written comments.
In its comments, SBA stated that it disagreed with a statement in our draft report that SBA had not worked with agencies to enter high-quality and accurate data into the database and provided us documentation of an instruction guide on entering data that SBA officials said was available to agencies. Based on our review of this information, we clarified the text of the report and modified the draft report's recommendation by removing the suggested example that SBA provide guidance to the agencies to improve SBIR and STTR award data reliability. SBA agreed with the revised recommendation. After we provided a draft of the report to the agencies for comment, the Departments of Education and Homeland Security took action on their respective recommendations. Specifically, in December 2017, the agencies issued new project solicitations that reflected the updated consequence of not meeting the benchmarks. We agree that these agencies fully implemented the recommendations we made to them in this report.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, and Transportation; the Administrators of the Small Business Administration, the Environmental Protection Agency, and the National Aeronautics and Space Administration; the Director of the National Science Foundation; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX.

Appendix I: Comments from the Small Business Administration
Appendix II: Comments from the Department of Commerce
Appendix III: Comments from the Department of Defense
Appendix IV: Comments from the Department of Education
Appendix V: Comments from the Department of Energy
Appendix VI: Comments from the Department of Health and Human Services
Appendix VII: Comments from the Department of Homeland Security
Appendix VIII: Comments from the Environmental Protection Agency

Appendix IX: GAO Contact and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contact named above, Hilary Benedict (Assistant Director), John Barrett, Natalie Block, Antoinette Capaccio, Tanya Doriss, Justin Fisher, Ellen Fried, Juan Garay, Cindy Gilbert, Perry Lusk, William Shear, and Elaine Vaurio made key contributions to this report.
Why GAO Did This Study

Through the SBIR and STTR programs, federal agencies have awarded about 162,000 contracts and grants totaling $46 billion to small businesses to help them develop and commercialize new technologies. Eleven federal agencies participate in the SBIR program, and 5 agencies also participate in the STTR program. Each program has three phases, which take projects from initial feasibility studies through commercialization activities. SBA oversees both programs. In response to the 2011 reauthorization of the programs, SBA and the participating agencies developed benchmarks to measure small businesses' progress in developing and commercializing technologies. GAO was asked to review SBA's and the agencies' efforts related to these benchmarks. This report examines the extent to which SBA and the participating agencies have implemented these benchmarks, including assessing businesses against them and establishing the consequence of not meeting them. GAO analyzed award data and interviewed officials from SBA and the 11 participating agencies.

What GAO Found

Data challenges have limited the Small Business Administration's (SBA) and the 11 participating federal agencies' efforts to assess businesses against two benchmarks—the Transition Rate Benchmark and the Commercialization Benchmark—of the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs.

Transition Rate Benchmark. Small businesses that received more than 20 awards for the first phase of the programs in the past 5 fiscal years—excluding the most recent fiscal year—must have received an average of 1 award for the second phase of the programs for every 4 first-phase awards. Since 2014, SBA and the agencies participating in the programs have regularly assessed small businesses against this benchmark. From 2014 through 2017, SBA determined that 4 to 7 businesses did not meet the benchmark each year. SBA officials provided GAO guidance on how to enter data into the programs' awards database that they said is available to agencies, but GAO found evidence that suggests agencies are not fully utilizing it. For example, GAO found that the database used to perform the assessments contained inaccurate and incomplete data, such as about 2,700 businesses with multiple records with different spellings of their names and more than 1,400 instances in which a unique identification number had errors, such as an incorrect number of digits, all zeros, or hyphens. Thus, it could be difficult to determine which small businesses should be subject to the benchmark.

Commercialization Benchmark. Small businesses that received more than 15 awards for the second phase of the programs in the past 10 fiscal years—excluding the most recent 2 fiscal years—must have received a certain amount of sales, investments, or patents resulting from their efforts. SBA and participating agencies have assessed small businesses against this benchmark only once, in 2014, and identified 12 businesses that did not meet the benchmark. This is, in part, due to challenges in collecting and verifying the accuracy of the data that small businesses report and that are needed to implement the benchmark, according to officials from SBA and several agencies. For example, agency officials told GAO that some needed data, such as for reported sales, are not consistently applicable across agencies or projects. The Small Business Act and policy directives provide flexibility in how the agencies can implement the benchmark.
Working together to implement it as designed or revise it so that it can be implemented could allow the agencies to fulfill statutory requirements. SBA and the participating agencies have provided inconsistent information to small businesses about the consequence of not meeting the benchmarks. SBA and the agencies agreed to change how the consequence of not meeting the benchmarks was to be implemented, starting in 2017, from ineligibility to receive certain awards to ineligibility to submit certain proposals. However, as of November 2017, some agencies had not updated this information in their project solicitations. Furthermore, SBA has not updated this information in its policy directives. Without consistent information, businesses may be confused about their eligibility to submit proposals or receive awards and could invest time developing and submitting proposals when they are not eligible to do so.

What GAO Recommends

GAO is making 11 recommendations to SBA and other agencies to take actions to improve implementation of the benchmarks, including improving the reliability of award data; implementing or revising the Commercialization Benchmark; and updating information about the consequence of not meeting the benchmarks. SBA and these agencies agreed with GAO's recommendations.
Background

The appropriation and execution of DOD's base and OCO amounts is part of the broader federal budget process. In this process, Congress, the President, and federal agencies take a number of steps to formulate a budget, enact appropriation acts, and execute the federal budget for each fiscal year. A summary of the budget process is depicted in figure 1 below.

In DOD's budget process, the military services and defense agencies submit a budget request—known as the Budget Estimate Submission—that addresses their estimated annual funding requirements for both base and OCO activities. In building their OCO budget requests, the military services and defense agencies use criteria that OMB developed in collaboration with DOD for deciding whether items belong in the base budget or in OCO funding requests. The services also use guidance issued within their own organizations, as well as OCO-specific budget guidance included in DOD's Financial Management Regulation. Congress then takes action on the budget request and appropriates funding for both base and OCO activities into the same appropriation accounts, such as service-specific O&M accounts. Explanatory statements or conference committee reports accompanying annual appropriations acts provide congressional direction on how OCO and base funding amounts should be obligated. However, the congressional direction for funding is generally not legally binding. Congress also has the discretion to make available amounts for base activities or enduring costs through OCO appropriations, even if DOD considers such costs to be part of the base budget.

The Budget Control Act of 2011, amending the Balanced Budget and Emergency Deficit Control Act of 1985, imposes government-wide discretionary spending limits for fiscal years 2012 through 2021 to reduce projected spending by about $1 trillion. All amounts appropriated to DOD are subject to limitations on discretionary spending. Appropriated amounts designated by Congress for OCO that would otherwise exceed the annual limits established for discretionary spending will instead result in an adjustment to the overall spending limit established for a particular fiscal year and will not trigger a sequestration, which is an automatic cancellation of budgetary resources provided by discretionary appropriations or direct spending laws.

Upon enactment of an appropriation, the Secretary of the Treasury issues a warrant to federal agencies, which is an official document that establishes the amount of moneys authorized to be withdrawn from the central accounts that the Department of the Treasury maintains. The Treasury does not employ a process to separate OCO funding from base funding in its role in warranting funds to federal agencies, including DOD. After receiving budget authority, agencies make allotments, delegating budget authority to various agency officials and allowing them to incur obligations. Agencies then disburse amounts by cash or cash equivalents to liquidate obligations.

DOD Components We Reviewed Use Coding and Other Control Activities to Separately Account for OCO and Base Amounts during Budget Execution

The DOD components in our review use coding and other internal control activities to separately account for OCO and base amounts in their O&M accounts during budget execution. To record and track OCO and base amounts separately, the DOD components use coding in their financial systems during the allotment, obligation, and disbursement of funds.
For example, during the allotment phase, the Army and the Defense Security Cooperation Agency use codes in their financial systems to divide, distribute, and track their appropriated funds into separate categories— including one for OCO and one for base. Army and Defense Security Cooperation Agency officials stated that the separate categories are maintained through the obligation phase. The Air Force, the Marine Corps, and the Navy use specific codes to track OCO transactions within multiple systems they use to allot and obligate OCO and base amounts. For example, the Air Force uses an Emergency and Special Program code to track and record allotments and obligations of OCO amounts within its budgeting and accounting systems. The Marine Corps uses three-digit, alphanumeric codes called Special Interest Codes to track and record costs associated with high-interest activities, such as OCO, during obligation. Figure 2 describes the steps that DOD takes to separate OCO and base amounts. We identified some internal control activities that the DOD components in our review have put into place to ensure separate accounting of OCO and base amounts, such as controls over information processing. A variety of control activities can be used in information processing, including controls incorporated directly into computer applications to ensure accuracy, as well as policies and procedures that apply to information systems. For example, Army and Defense Security Cooperation Agency officials stated that the financial systems they use incorporate system controls that automatically maintain the categories of funding designated during allotment through subsequent actions, including obligation, which ensures an amount in the OCO category maintains its OCO-specific coding throughout the budget execution process. Also, the Army restricts the number of personnel who are able to reassign the coding of funding from one category to another. Navy officials explained that two of three financial accounting systems used by the Navy receive OCO allotments automatically from the Navy’s budgeting information system, which eliminates the need for manual entry of allotment amounts. Also, Marine Corps guidance requires entry of an identifying OCO code in the Marine Corps’ financial system when recording an OCO-related transaction, which can prevent data reporting errors. In addition to controls over information processing, each DOD component in our review incorporates reviews of their OCO execution as one of their internal control activities. Internal control activities also include reviews, such as reviews of data or expected results, by management throughout an organization. The financial management offices of these components periodically review the OCO-related allotments they make within their components to confirm the amounts are properly recorded. For example, the Air Force, the Army, the Marine Corps, the Navy, the Defense Security Cooperation Agency, and U.S. Special Operations Command review OCO-related execution amounts at least monthly to determine if amounts are within their established spending plans and that OCO coding is recorded correctly, among other things. In addition, officials from each service and the Defense Security Cooperation Agency stated that officials review OCO-related obligations and verify they are legitimate OCO expenses. 
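As a rough illustration of the application-level controls described above (automatic carry-forward of a funding category from allotment to obligation, and restrictions on who may recode funding), consider the following minimal Python sketch. The record layout, category codes, and role names are hypothetical and are not drawn from any DOD financial system.

```python
from dataclasses import dataclass

AUTHORIZED_RECODERS = {"budget_officer"}  # restricted set of roles

@dataclass
class Allotment:
    amount: float
    category: str  # "OCO" or "BASE", assigned when funds are allotted

    def obligate(self, amount: float) -> "Obligation":
        # The funding category is carried forward automatically, so an
        # amount allotted as OCO keeps its OCO coding through obligation.
        return Obligation(amount=amount, category=self.category)

@dataclass
class Obligation:
    amount: float
    category: str

    def recode(self, new_category: str, role: str) -> None:
        # Application-level control: only designated personnel may
        # reassign funding from one category to another.
        if role not in AUTHORIZED_RECODERS:
            raise PermissionError(f"role '{role}' may not recode funding")
        self.category = new_category

allotment = Allotment(amount=1_000_000, category="OCO")
obligation = allotment.obligate(250_000)
print(obligation.category)  # "OCO" -- coding maintained automatically

try:
    obligation.recode("BASE", role="analyst")
except PermissionError as err:
    print("recode blocked:", err)
```

Real accounting systems enforce these checks at the database and workflow level rather than in application objects, but the effect is the same: category codes persist through the execution phases, and recategorization is limited to a small set of authorized users.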
The DOD Inspector General and the services' audit agencies have found weaknesses in the services' processes of accounting for OCO costs or in other related internal control activities. For example, in March 2018, the U.S. Army Audit Agency found that while the Army had a strategy and processes to capture and report its financial data for Operation Inherent Resolve for fiscal year 2016, processes to account for some obligation data needed improvement. Moreover, an official from the Office of the Secretary of Defense (Comptroller) stated that, while the DOD components included in our review have processes to separate OCO and base amounts, other DOD components may not have similar processes, and not all components have auditable financial systems.

Four Alternatives to the Current Processes That Congress and DOD Use to Separate Funding for OCO and Base Activities Would Entail Tradeoffs

We identified at least four alternatives to the processes Congress and DOD use to separate funding for DOD's OCO and base activities. Each alternative would require action at different phases of DOD's budget process and entail tradeoffs. Appendix II provides additional information on requirements and costs to implement the alternatives reported by respondents that we summarize, as well as other alternatives to provide funding to DOD that respondents independently identified. In addition, appendix II provides summary information on the positive and negative aspects of Congress' current process for providing funding for OCO and base activities, as described by respondents.

Alternative #1: DOD Could Request Funding for Enduring Costs through Its Base Budget Rather Than Its OCO Budget

The first alternative to the current process would be for DOD to request all funding for enduring costs through its base budget rather than its OCO budget. DOD is considering a plan to move enduring costs associated with OCO activities from its OCO budget request into its base budget request for fiscal year 2020. In its budget justification materials for fiscal year 2019, DOD estimated that it would shift between $45.8 billion and $53.0 billion from its OCO request to its base budget request from fiscal years 2020 through 2023. However, moving DOD's enduring costs to its base budget request may require increased base O&M appropriations provided in annual DOD appropriations acts. Appropriations that are not designated as OCO, such as base O&M amounts, and that exceed annual discretionary spending limits established by the Budget Control Act of 2011, as amended, would trigger a sequestration. Respondents to our questionnaire identified several positive and negative aspects of this alternative, which we summarize in table 1.

Alternative #2: Congress Could Add Specific Purpose Language to Annual DOD Appropriations Acts Concerning OCO Amounts

The second alternative would be for Congress to specify in annual DOD appropriations acts the purposes—programs, projects, and activities—for which OCO amounts may be obligated. As we noted above, DOD currently determines what constitutes OCO activities based on criteria developed in 2010 in coordination with OMB and DOD 7000.14-R, Financial Management Regulation. Explanatory statements and conference committee reports accompanying annual appropriations acts include direction on how OCO amounts should be allocated for specific activities; however, explanatory statements and committee reports are not legally binding unless incorporated by reference into the appropriations act.
Either specific purpose language or language incorporating the explanatory statement or committee report by reference could be included in DOD's annual appropriations acts. Respondents to our questionnaire identified several positive and negative aspects of this alternative, which we summarize in table 2. Alternative #3: Congress Could Create Separate Appropriation Accounts for OCO and Base Funding The third alternative entails Congress creating separate appropriation accounts for OCO and base funding. Under the current approach, both OCO and base amounts are appropriated into and executed out of the same appropriation accounts. By contrast, under this alternative, Congress would create separate Treasury-level appropriation accounts for funding for OCO and base activities. For example, there could be an O&M appropriation account for the Army's base activities and an O&M appropriation account for the Army's OCO activities. Funding for OCO and base activities would no longer be commingled, but could be transferred between accounts with statutory authority. Respondents to our questionnaire identified several positive and negative aspects of this alternative, which we summarize in table 3. Alternative #4: Congress and DOD Could Use a Transfer Account to Fund Contingency Operations Under the fourth alternative, Congress would appropriate funds into a non-expiring transfer account for contingency operations. These funds would be available for DOD's use during multiple fiscal years. DOD would use its base appropriations to initially fund OCO activities and later use funds from the transfer account, as needed, to reimburse its base appropriation accounts. One example is the Overseas Contingency Operations Transfer Fund, which was originally established by Congress in fiscal year 1997 to meet small-scale, recurring operational demands of the department by transferring amounts to the military services and agencies based on execution needs as the year progresses. Respondents to our questionnaire identified several positive and negative aspects of this alternative, which we summarize in table 4. Each Alternative Would Require Action at Different Phases in the Budget Process and Entail Tradeoffs The four alternatives we identified would require Congress and DOD to take action at different phases within DOD's budget process. In the first alternative, DOD would move enduring costs to the base budget request during the budget formulation phase. In the second alternative, Congress would specify the activities to be funded by OCO amounts in the annual appropriations acts during the congressional appropriation phase. Similarly, in the third alternative, Congress would create separate appropriation accounts for OCO and base activities during the congressional appropriation phase. In the fourth alternative, using transfer accounts would require actions during two phases—the congressional appropriations phase and the budget execution phase. Congress would appropriate funds into a transfer account during the congressional appropriation phase, and DOD would later use funds from the transfer account, as needed, to reimburse its base appropriation accounts during budget execution. In figure 3, we depict the phase of the budget process in which these alternatives would take place. Each alternative includes tradeoffs that Congress and DOD would have to consider to strike the desired balance between agency flexibility and congressional control.
For example, adding specific purpose language would better align obligation of OCO amounts with congressional intent; however, doing so could also reduce DOD's financial flexibility and responsiveness to changes in operations. Understanding the implications of each alternative is important to avoid unintended consequences. Our summary of the positive and negative aspects of the alternatives reported by respondents could be a reference for Congress and DOD as they consider potential changes to processes for separating the funding of amounts for OCO and base activities. Agency Comments and Our Evaluation We requested comments from DOD and the Department of the Treasury and provided an informational copy of the draft report to OMB. DOD provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Treasury; the Director of OMB; the Under Secretary of Defense (Comptroller); the Secretaries of the Air Force, the Army, and the Navy; the Commandant of the Marine Corps; the Commanding General of U.S. Special Operations Command; and the Director of the Defense Security Cooperation Agency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or fielde1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology To describe selected Department of Defense (DOD) components' use of internal controls to separately account for overseas contingency operations (OCO) and base amounts, we reviewed documentation of the internal controls that DOD organizations in our review have designed to separate these amounts in their operation and maintenance (O&M) account. We focused on the O&M account because Congress provides most of the OCO amounts for DOD in O&M. In addition, we focused on the military services that receive service-specific OCO appropriations, and the two non-service DOD components (U.S. Special Operations Command and the Defense Security Cooperation Agency) that are allotted the most OCO funding appropriated to the O&M Defense-wide account. We collected information for this objective through interviews and written requests for information from financial management officials in the Office of the Secretary of Defense (Comptroller), the offices of the military services, U.S. Special Operations Command, the Defense Security Cooperation Agency, and the Defense Finance and Accounting Service. Our review focused on the design of the internal control systems and did not assess the effectiveness of these internal controls. To identify alternatives to separate funding for DOD's OCO and base activities, we searched for relevant literature from 2001 through July 2018. Specifically, we searched for alternative processes that (1) DOD could use to separately account for OCO funding or (2) Congress could use to provide separate OCO funding to DOD, because both DOD and Congress could be involved in implementing alternatives to separate funding for OCO and base activities. We started with 2001 because this was the first year that funds were appropriated for the Global War on Terror (GWOT), now known as OCO.
We conducted searches of various databases and websites, such as ProQuest and the National Academy of Sciences website. Our literature search identified 235 sources, which primarily consisted of journal articles, reports, and news articles. Two analysts independently reviewed the full text of the literature sources to determine which were relevant. When they disagreed, a third analyst independently reviewed the full text of a source to make the final determination. We determined that 22 sources were relevant. We did not identify any sources that described alternative processes for DOD to separately account for OCO funding; therefore, we do not address this in our report. We did identify three alternatives related to how Congress provides OCO funding to DOD and how DOD requests OCO funding from Congress. We summarized these alternatives and obtained feedback from our internal subject matter experts familiar with Congress' process for providing funding for OCO and DOD's process for separating OCO and base funds. We revised the wording of the alternatives based on their feedback to ensure that we described them accurately. Our internal subject matter experts suggested a fourth congressional alternative. We summarized all four alternatives in our report. In collaboration with a survey specialist, we developed a questionnaire to solicit opinions from knowledgeable individuals ("respondents") regarding Congress' and DOD's current processes and the four alternatives. Our internal subject matter experts also provided feedback on the draft questionnaire. We included the summaries of all processes and asked respondents to identify the positive and negative aspects, as well as the costs and requirements, associated with each. We also asked respondents to describe any additional alternatives apart from the four we described in the questionnaire. We used several methods to identify questionnaire respondents within and outside DOD who were sufficiently knowledgeable about Congress' and DOD's current processes. We identified respondents within DOD by emailing the engagement points of contact, who were budget and financial management officials in the headquarters for the military services and other DOD components included in our review. To identify respondents outside of DOD, we contacted individuals identified by an internal subject matter expert and contacted additional individuals identified in our literature review. We provided respondents with a brief summary of the questionnaire and asked them if they would be able and willing to respond to questions on these topics. We also asked respondents to recommend additional knowledgeable individuals at the end of the questionnaire. The respondents we identified were current officials in DOD financial management offices, former DOD officials, and defense budget analysts from think tanks. In addition, we contacted officials from the Congressional Research Service and the Congressional Budget Office, whom we identified as assigned to analyze defense budget issues related to OCO. We included questions at the start of the questionnaire to determine whether respondents were sufficiently knowledgeable about the current congressional process, the current DOD process, or both, to offer perspectives on the alternatives presented. We sent the questionnaire as a Microsoft Word form via email to 23 respondents, including 10 within DOD and 13 outside DOD. We began sending the questionnaires on August 1, 2018, and continued as we identified more respondents.
We sent up to two reminder emails with a copy of the questionnaire to anyone who had not yet responded. We received the last questionnaire on September 10, 2018. We received a total of 19 questionnaires back from respondents. We excluded two completed questionnaires from our analysis based on our screening criteria for determining whether respondents were sufficiently knowledgeable about Congress' and DOD's current processes. Therefore, we included 17 questionnaires in our analysis—10 from DOD officials and 7 from respondents outside DOD—for a response rate of 81 percent. We calculated the response rate using a total possible number of 21 questionnaires instead of 23 to account for the two questionnaires we excluded from the analysis (this calculation is written out below). Fifteen of the 17 respondents to our questionnaire were current or former DOD officials. Results of this questionnaire are not generalizable beyond our respondents. To enable us to provide the information to Congress within the time frames required by the mandate, we did not pretest the questionnaire. However, we believe that the questionnaire was a sufficiently valid data collection tool for reporting positive and negative aspects identified by respondents. We developed the questionnaire with assistance from a survey specialist, and we revised the questionnaire content based on feedback from our internal subject matter experts. Most respondents provided answers that indicated they correctly interpreted the questions as stated in the questionnaire. In addition, we took steps to provide clarification to the few respondents who misunderstood questions and excluded responses we could not reasonably assure were understood. Four of the 23 original recipients of the questionnaire requested clarification or misunderstood two questions in our questionnaire. We provided clarification to those respondents via email and requested that they update their questionnaire responses based on this new information. Two did so. The other two respondents did not reply to our clarification email, and we excluded their responses to the misunderstood questions. Not all respondents provided answers to all questions in our questionnaire. We extracted the data from the Word questionnaires and imported them into Excel for qualitative analyses. We inspected the Excel files to ensure that data were not missing or imported incorrectly and made iterative corrections to the process to ensure accurate data were analyzed. Because we did not pretest the questionnaire, we do not report the number of respondents who provided a given answer; rather, we present qualitative positive and negative aspects based on the responses. We conducted a content analysis in which two analysts independently categorized each response from each questionnaire to identify similarities. For our purposes, similarities existed when two or more respondents gave the same or very similar answers to a particular question. The summaries of the responses we developed were based on comments from two to nine respondents. The analysts discussed any discrepancies in their categorizations until they reached agreement. Subsequently, an internal subject matter expert provided feedback on the summary. Using that feedback, the analysts consolidated summaries that were related and clarified the wording of all the summarized responses. We identified positive and negative aspects for questions regarding the current processes and the four alternatives presented in the questionnaire.
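As a cross-check, the response-rate arithmetic described above (19 questionnaires received, 2 excluded, and a denominator of 21 rather than 23) works out as follows:

\[
\text{response rate} = \frac{19 - 2}{23 - 2} = \frac{17}{21} \approx 0.81 = 81\ \text{percent}.
\]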
We did not summarize positive and negative aspects for questions regarding the additional alternatives described by respondents. We did not include this information because, although two respondents described similar alternatives, they did not identify similar positive and negative aspects of that alternative. In addition, none of the remaining questionnaires included similar responses. We list any additional alternatives identified by respondents in appendix II. The verbatim wording from key sections of the questionnaire we administered is presented in appendix III. In addition, section 1523 of the National Defense Authorization Act for Fiscal Year 2018 contained additional provisions for us to review other processes related to the execution of OCO funds. In particular, section 1523 contained a provision for us to review the processes the Department of the Treasury employs to separate expenditures of amounts appropriated for OCO from expenditures of all other amounts appropriated for DOD. We assessed the steps that the Department of the Treasury takes in the execution of the federal budget after funds have been appropriated and determined that the Department of the Treasury does not employ a process to separate OCO funding from base funding in its role in making appropriations available to DOD. In addition, section 1523 of the act included another provision for us to compare the processes DOD and the Department of the Treasury use to separate expenditures of OCO amounts to the generally accepted accounting principles. The Federal Accounting Standards Advisory Board issues federal financial accounting standards and provides guidance on federal generally accepted accounting principles. The Federal Accounting Standards Advisory Board's Handbook of Federal Accounting Standards and Other Pronouncements, as Amended (Current Handbook) is the most up-to-date, authoritative source of generally accepted accounting principles developed for federal entities. However, the Current Handbook does not address the separation of OCO from non-OCO appropriations, obligations, and disbursements. Therefore, it is not possible to compare the processes DOD and the Department of the Treasury use to the generally accepted accounting principles based on existing standards and guidance. We conducted this performance audit from March 2018 to January 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Additional Information on the Current Process and Alternatives to Provide OCO Funding Additional information from our questionnaire is provided below, including information about (1) the positive and negative aspects of Congress' current process for providing funding for the Department of Defense's (DOD) overseas contingency operations (OCO) and base activities, (2) the requirements and costs to implement the four alternatives we discussed earlier, and (3) other alternatives for providing funding to DOD. Positive and Negative Aspects of Congress' Current Process to Provide Funding for OCO and Base Activities We asked respondents to report on the positive and negative aspects of Congress' current process for providing funding for DOD's OCO and base activities.
We summarize those aspects in table 5. Requirements and Costs to Implement the Four Alternatives Respondents reported on the requirements and costs to implement the four alternatives in our questionnaire. The requirements respondents identified to implement the four alternatives are summarized in table 6. Regarding the costs, respondents reported that two alternatives would require minimal or no additional costs, while the other two alternatives would involve higher costs to DOD. The costs respondents identified to implement the four alternatives are summarized in table 7. Alternatives for Providing Funding to DOD that Respondents Independently Identified We also asked respondents to describe any other alternatives for separating funding for DOD's OCO and base activities, apart from the four alternatives described above. Respondents identified several alternatives for providing funding to DOD, including alternatives that would not provide separation of OCO and base funding. The other alternatives that respondents described are shown in table 8. Appendix III: Key Questions from GAO's Questionnaire on Separation of OCO and Base Amounts Below we show the verbatim wording of the descriptions of the alternatives to separate amounts for DOD's OCO and base activities as summarized in the questionnaire. Each description was presented separately in the questionnaire followed by a standard set of questions that are all presented below these descriptions. We also show the verbatim wording of any clarification text sent via email to respondents who misunderstood the description of the alternative. DOD could move requests for funding of enduring activities from its OCO budget to its base budget request. Enduring activities are those that began in response to contingency operations but have continued after these operations ended. An example of an enduring cost would be maintaining residual headquarters staff at U.S. Central Command in Qatar to train, advise, and assist as missions have evolved from contingency to ongoing activities. We understand that in FY 2020, the Department plans to move funding for enduring activities from its OCO budget to its base budget request. DOD's OCO funding request would then reflect only the incremental costs of existing contingency operations. The Congress could specify activities for which DOD should use OCO amounts within the annual appropriations acts. Currently, DOD determines what activities constitute OCO activities based on criteria developed in 2010 in coordination with OMB. Under this alternative, explicit purpose language designating specific funds for specific activities would be added directly into the appropriations acts or the explanatory statement, then incorporated into the appropriations act by reference. “Under the current approach, funds are designated for specific sub-activities in the explanatory statement. However, these designations are generally not legally binding unless incorporated by reference into the appropriations act itself. Under this alternative approach, specific purpose language or language of incorporation would be included in the appropriations act. The distinction between the current approach and the alternative presented here is that legally binding language concerning specific amounts for specific OCO activities would appear in the appropriation act.” The Congress could create separate appropriation accounts for amounts designated for OCO and amounts designated for base activities.
“In the current approach, amounts are designated for OCO and base activities within a single appropriation account. In the alternative proposed in Question 5, the Congress would create two separate appropriation accounts for OCO and base activities amounts. For example, there would be one appropriation account for OCO amounts for O&M, and another appropriation account for base activity amounts for O&M.” DOD could use a transfer account (such as the Overseas Contingency Operations Transfer Fund, or OCOTF) through which the Department could meet operational demands by transferring funds to the military services and agencies based on execution needs as the year progresses. The Congress would appropriate funds into a transfer account. These funds would not expire and be available for DOD’s use during multiple fiscal years. DOD would use its base activities appropriations to fund OCO activities and later draw from the transfer account as needed to reimburse its base appropriation accounts. Below we show the verbatim wording from key sections of the questionnaire we administered. We used Questions 2 and 3 as screening questions to help determine if respondents were sufficiently knowledgeable about the current congressional or DOD processes. Question 4 and its sub-questions below were repeated for each alternative presented above (i.e., as Questions 4 through 7 in the questionnaire). We also asked sub-questions “b” through “e” in Question 4 for the current approaches Congress and DOD use (presented in Questions 2 and 3). Finally, we asked respondents to identify up to five additional alternatives in Questions 8 through 12. 2. Are you familiar with any of the current approaches that the military services or DOD organizations use to separate operation and maintenance (O&M) amounts designated for Overseas Contingency Operations (OCO) from amounts designated for base activities during the allotment, obligation, and/or disbursement phases? Please check one box. ☐ Please continue to “a” through “e” ☐ Please skip to Question 3 ☐ Please skip to Question 3 3. Are you familiar with the current approach that Congress uses to designate amounts for OCO in the appropriations process for DOD? Please check one box. ☐ Please continue to “a” through “e” ☐ Please skip to Question 4 ☐ Please skip to Question 4 4. GAO has identified the following as a possible alternative to the current approach for separating amounts designated for OCO from amounts designated for base activities in the appropriations process: a. Were you aware of this alternative before completing this questionnaire? Please check one box. Please continue to “b” through “e” b. What are the positive aspects associated with this alternative, if any? Please consider factors impacting both taxpayers and the DOD. The box will expand as you type. c. What are the negative aspects associated with this alternative, if any? Please consider factors impacting both taxpayers and the DOD. The box will expand as you type. d. What are the costs associated with this alternative, if any? Please consider costs impacting both taxpayers and the DOD. The box will expand as you type. e. What are the requirements associated with implementing this alternative? Consider factors such as: changes to existing systems, policies, or processes; new systems, policies, or processes; new budget estimations; required training; etc. These could be requirements for DOD or the Congress. The box will expand as you type. 8.
Are you aware of any alternative approaches for separating amounts designated for OCO from amounts designated for base activities other than the ones listed above? Please consider both approaches DOD could implement on its own (such as approaches to separating OCO from base in the O&M account or changes that make that unnecessary) and legislative approaches the Congress could take. We are aware of the Enterprise Resource Planning (ERP) systems listed above. For this question, we are interested in the implementation of new potential alternatives other than the ERP system. Please check one box. ☐ Please continue to “a” through “e” to tell us about one alternative. If you are aware of more than one, you will be able to tell us about others in Questions 9-12. ☐ Please skip to Question 13 ☐ Please skip to Question 13 a. If yes, please briefly describe the first alternative approach. The box will expand as you type. Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Elizabeth Field, (202) 512-2775 or fielde1@gao.gov. Staff Acknowledgments In addition to the contact named above, Richard K. Geiger, Assistant Director; Arkelga Braxton, Assistant Director; Rebekah Boone; Amie Lesser; Felicia Lopez; James P. Klein (Analyst-in-Charge); Shylene Mata; Sheila Miller; Richard Powelson; and Michael Silver made key contributions to this report.
Why GAO Did This Study Since 2001, DOD has received more than $1.8 trillion in OCO funds. DOD defines “contingency operations” as small, medium, or large-scale military operations, while “base” activities include operating support for installations, civilian pay, and other costs that would be incurred, regardless of contingency operations. Congress separately appropriates amounts for base and OCO activities into the same appropriation accounts and directs how funds are to be spent by designating amounts in conference reports or explanatory statements accompanying the annual appropriations acts. The National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to report on the feasibility of separating OCO expenditures from other DOD expenditures. This report (1) describes internal controls that selected DOD components use to separately account for OCO and base amounts during budget execution and (2) identifies and examines alternatives that Congress or DOD could use to separate funding for OCO and base activities. GAO reviewed documentation of DOD internal controls for separating OCO and base amounts in the O&M account, interviewed financial management officials, and, among other things, conducted a literature review to identify alternatives that Congress or DOD could use to separate funding for OCO and base activities. Also, GAO administered a questionnaire to DOD and non-DOD officials to identify positive and negative aspects of these alternatives. What GAO Found Selected Department of Defense (DOD) components use coding and other internal control activities to separately account for overseas contingency operations (OCO) and base amounts in their operation and maintenance (O&M) accounts during budget execution. To record and track OCO and base amounts separately, the military services, U.S. Special Operations Command, and the Defense Security Cooperation Agency use coding in their financial systems. These DOD components also have instituted some internal control activities to help ensure separation of OCO amounts. For example, Army and Defense Security Cooperation Agency officials stated that the financial systems they use incorporate system controls that automatically maintain the categories of funding, such as OCO, designated during allotment through subsequent actions to ensure the OCO coding remains throughout budget execution. GAO identified at least four alternatives to the processes used to separate funding for DOD's OCO and base activities: Move enduring costs to the base budget. DOD could request funding for enduring costs—costs that would continue in the absence of contingency operations—through its base budget rather than its OCO budget. Use specific purpose language. Congress could use legally binding language in the annual DOD appropriations acts to specify the purposes—programs, projects, and activities—for which OCO amounts may be obligated. Create separate appropriation accounts. Congress could create separate appropriation accounts for OCO and base funding. Use a transfer account. Congress could appropriate funds for OCO into a non-expiring transfer account. DOD would fund OCO with its base budget and later reimburse its base accounts using funds from a transfer account. Implementing these alternatives would require Congress and DOD to take action in different phases of the budget process (see figure).
Each alternative includes tradeoffs that Congress and DOD would have to consider to strike the desired balance between agency flexibility and congressional control. The alternatives, and GAO's summary of their positive and negative aspects identified by questionnaire respondents, could be a reference for Congress and DOD as they consider potential changes to processes for separating the funding of amounts for OCO and base activities.
Background Working Capital Funds DOD uses working capital funds to focus management's attention on the total costs of carrying out critical business operations and encourage DOD support organizations to provide quality goods and services at the lowest cost. The ability of working capital funds to operate on a break-even basis depends on accurately projecting workload, estimating costs, and setting rates to recover the full costs of producing goods and services. Generally, customers use appropriated funds to finance orders placed with working capital funds. DOD sets the rates charged for goods and services during the budget preparation process, which generally occurs approximately 18 months before the rates go into effect. To develop rates, working capital fund managers review projected costs such as labor and materials, as well as projected customer requirements. The rates are intended to remain fixed during the fiscal year in accordance with DOD policy. DOD's stabilized price policy serves to protect customers from unforeseen inflationary increases and other cost uncertainties and better assures customers that they will not have to reduce programs to pay for potentially higher-than-anticipated prices. Because working capital fund managers base rates charged on assumptions formulated in advance of rates going into effect, some variance is expected between projected and actual costs and revenues. Transportation Working Capital Fund The TWCF is dedicated to TRANSCOM's mission to provide air, land, and sea transportation for DOD in times of peace and war, with a primary focus on wartime readiness. Specifically, the TWCF is used to provide air transportation and services for passengers or cargo in support of DOD operations or along established routes. The TWCF is also used to finance Air Force and joint training requirements. Examples of joint capabilities supported by the TWCF are depicted in figure 2. The TWCF uses rates for airlift services that do not cover the full cost of airlift operations. The military services may choose between TRANSCOM and commercial service providers along established routes. Thus, fund managers set rates for some airlift services to remain competitive with commercial airlift carriers, which historically do not result in revenue sufficient to cover the full cost of airlift operations. DOD must maintain airlift capacity and must remain ready and available to support mobilization for war and contingencies. Providing an incentive for customers to use DOD airlift capacity helps TRANSCOM maintain military airlift capabilities not available from commercial providers. TWCF cash balances are managed as a component of the Air Force Working Capital Fund. Although the TWCF is managed on a day-to-day basis by TRANSCOM, it is part of the Air Force Working Capital Fund for cash management purposes. The relationship of the TWCF to the Air Force Working Capital Fund provides a cash management benefit. According to Air Force officials, retaining the TWCF within the Air Force Working Capital Fund for cash management purposes provides flexibility while minimizing the need for additional funding. According to month-end cash balance data, the TWCF has been able to operate using cash available in the Air Force Working Capital Fund when no funds were available in the TWCF.
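A minimal sketch of this pooled-cash arrangement follows, using illustrative figures of our own rather than actual fund data: a component fund can carry a negative month-end balance so long as the parent fund's aggregate cash remains sufficient.

```python
def can_operate(parent_total_balance: float) -> bool:
    """Disbursements can continue while the parent fund's pooled cash,
    which already includes each component's balance, stays nonnegative."""
    return parent_total_balance >= 0

# Illustrative month-end figures, in millions of dollars (not actual data):
twcf_balance = -40.0           # component fund is overdrawn
other_af_wcf_cash = 250.0      # cash held elsewhere in the parent fund
af_wcf_total = twcf_balance + other_af_wcf_cash

print(can_operate(af_wcf_total))   # True: the TWCF can keep operating
```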
For example, the TWCF month-end cash balance was negative fifteen times during fiscal years 2007-2017, but there was sufficient cash in the Air Force Working Capital Fund to allow the TWCF to continue to operate and execute its missions. For more information on the cash balances of the Air Force Working Capital Fund and the TWCF, see appendix II. Roles and Responsibilities for Managing the Transportation Working Capital Fund Multiple DOD organizations have roles in managing various aspects of the TWCF: The Office of the Under Secretary of Defense (Comptroller)/Chief Financial Officer is generally responsible for coordinating DOD budget preparation, issuing guidance, issuing working capital fund annual financial reports, and overseeing the implementation of working capital funds across DOD. The Office of the Under Secretary of Defense (Comptroller)/Chief Financial Officer is also responsible for approving rates developed for the budget process and charged to the military services. The Air Force assumed responsibility for TWCF cash management in fiscal year 1998, and the TWCF cash balance is included in the Air Force Working Capital Fund cash balance. The Air Force is also responsible for developing Operations and Maintenance budget requests that include requests for funds to pay TRANSCOM for airlift services financed through the TWCF and the ARA. The Assistant Secretary of the Air Force (Financial Management and Comptroller) is responsible for directing and managing all comptroller, programming, and financial management functions, activities, and operations of the Air Force. TRANSCOM is responsible for the day-to-day financial management of the TWCF and has financial reporting responsibility for the TWCF, including setting rates for airlift services. TRANSCOM is also responsible for providing defense components with transportation services to meet national security needs; providing guidance for forecasting; and providing guidance for the standardization of rates, regulations, operational policies, and procedures. Air Mobility Command is a major Air Force command and is responsible to TRANSCOM for providing airlift services paid for by the TWCF. To fulfill its responsibility for providing airlift services to defense components, TRANSCOM and Air Mobility Command use a combination of military and commercial aircraft. Billions of Dollars Were Requested, Allotted, and Expended for the Airlift Readiness Account for Fiscal Years 2007-2017, and Annual Amounts Varied The Air Force requested, allotted, and expended billions of dollars for the ARA for fiscal years 2007 through 2017. These amounts varied annually, in some cases, by hundreds of millions of dollars. Our analysis of Air Force and TRANSCOM budget and financial information showed that for fiscal years 2007 through 2017, the Air Force requested $2.8 billion from Congress for ARA requirements, as part of its annual Operations and Maintenance appropriation. The Air Force allotted $2.8 billion (i.e., directed the use of the appropriated funds) and expended $2.4 billion of the ARA appropriated funds. During this period, the total allotted amount was about $400 million more than the expended amount. According to Air Force officials, this $400 million was used to pay for other Air Force readiness priorities. ARA amounts requested, allotted, and expended for fiscal years 2007 through 2017 are shown in figure 3. In five fiscal years (2008-2009, 2013-2014, and 2017), the Air Force allotted less than the amount ultimately expended for the ARA.
In these fiscal years, Air Force officials stated that they used available Operations and Maintenance appropriations to support the ARA. For example, in fiscal year 2013, the Air Force requested and allotted less than a million dollars for the ARA. However, the Air Force expended $294 million for the ARA in fiscal year 2013. According to Air Force officials, the Air Force used Air Force Operations and Maintenance mobilization funding to provide the ARA funds to the TWCF to cover this gap. Furthermore, in five fiscal years (2010-2012 and 2015-2016), the Air Force did not expend the total amounts allotted for the ARA because the allotments exceeded ARA funding needs. According to Air Force officials, they expended amounts initially allotted for ARA requirements to support other readiness priorities, such as training and sustainment requirements. For additional information related to TWCF costs and revenues for airlift services, see appendix III. Based on our analysis and interviews with Air Force and TRANSCOM officials, we determined that the Air Force's ARA budget request, the ARA amount allotted, and the amount expended by the Air Force can vary for a number of reasons. For example: Workload variations occurred due to changes in the global security environment, natural disasters, and force structure changes: For example, in fiscal year 2010, airlift services workload increased 8 percent over the previous year's level and 39 percent over budgeted levels as a result of force structure changes in Iraq and Afghanistan. This occurred because during fiscal year 2010 the number of U.S. armed forces personnel in Iraq declined by about 81,000, and the number of U.S. armed forces personnel in Afghanistan increased by about 34,000. These changes required additional airlift services and resulted in more revenue than was originally estimated for the TWCF. The TWCF also received additional funding from the military services to offset increased fuel costs. As a result, TRANSCOM did not issue a bill for the ARA for fiscal year 2010, and the Air Force used the $262 million allotted for ARA requirements for other readiness priorities. ARA budget requests and subsequent expenditures in the fiscal year of availability may be affected by other revenue sources: From fiscal years 2007 through 2017, the TWCF received $6.5 billion from other revenue sources, such as amounts from cash recovery charges, fuel supplement charges, and cash transfers from the Air Force. For example, cash recovery charges were paid by the military services, including the Air Force, using Overseas Contingency Operations funding to cover cash shortages in the TWCF in the early part of the Global War on Terrorism. TRANSCOM charged its customers cash recovery charges in fiscal years 2007 through 2014, with the exception of 2010. ARA expenditures in the fiscal year of availability may be more or less than budgeted: For example, in fiscal year 2015, TRANSCOM did not receive revenue from other sources, resulting in the Air Force expending $404 million more from its Operations and Maintenance funds than requested to cover the ARA bill for that fiscal year. On the other hand, in the fiscal year 2016 Air Force Operations and Maintenance budget request, the Air Force requested $657 million for the ARA, and subsequently allotted $406 million to the ARA—about $251 million less than requested.
This occurred because the cost of fuel declined in fiscal year 2016, and TRANSCOM did not bill the Air Force for the full amount the Air Force had allotted for the ARA. As a result, the Air Force contributed $122 million of the $406 million to the TWCF and used the remaining available amount for other readiness priorities. DOD and its components have considerable flexibility in using Operations and Maintenance funds and can redesignate appropriated funds among activity and subactivity groups in various ways. Since Fiscal Year 2010, Air Force Budget Requests Have Omitted Complete Airlift Readiness Account Information and Have Not Been Informed by Estimates Air Force Budget Requests Include Some Information on the Airlift Readiness Account but Omit Details Provided in Budget Requests Prior to Fiscal Year 2010 Air Force budget requests include some information on the ARA but omit details provided in budget requests prior to fiscal year 2010. Air Force budget officials stated the ARA budget information that was included for fiscal years 2007 through 2009 was changed for the fiscal year 2010 budget request as part of a DOD initiative to reduce the overall number of budget line items. For fiscal years 2007 through 2009 Air Force Operations and Maintenance budget requests, the amounts requested by the Air Force for the ARA were explicitly stated in the budget justification documents as part of a separate subactivity group line item. For fiscal years 2010 through 2017, the ARA amount was bundled with funding requests for other training requirements in the Air Force Operations and Maintenance budget justification documents, thus omitting specific details with respect to the ARA. Specifically, Air Force budget justification materials included the amount the ARA changed from one fiscal year to the next, but did not include the total ARA amount. In the annual President's budget request submission, DOD requests specific amounts for Operations and Maintenance activities and includes information about (1) amounts for the next fiscal year for which estimates are submitted, (2) revisions to the amounts for the fiscal year in progress, and (3) reports on the actual amounts allotted to a particular activity or subactivity for the last completed fiscal year. The Standards for Internal Control in the Federal Government state that management should communicate the necessary quality information (internally and externally). According to Air Force budget officials, there is no requirement from the Office of the Under Secretary of Defense (Comptroller)/Chief Financial Officer to separately identify the ARA amount and related details in the Air Force Operations and Maintenance annual budget requests. Nevertheless, officials from the Air Force and the Office of the Under Secretary of Defense (Comptroller)/Chief Financial Officer agreed that it would be helpful to include additional information in the budget because of DOD and congressional interest. Without establishing specific requirements to present detailed ARA information in the annual Air Force Operations and Maintenance budget request, DOD and congressional decision-makers do not have sufficient information to make informed decisions about the level of funding necessary to cover airlift costs not recovered by the rates charged by TRANSCOM. U.S. Transportation Command Has Not Provided Airlift Readiness Account Estimates in Time to Inform Air Force Budget Requests TRANSCOM has not provided ARA estimates in time to inform Air Force budget requests.
Air Force officials stated that they need to have TRANSCOM's estimates by mid-June to be able to conduct analysis to strengthen confidence in the ARA budget request and obtain senior leadership approval. The Air Force submits its Operations and Maintenance annual budget request to DOD in early July. However, TRANSCOM was not providing its ARA estimate until August. As a result, Air Force officials stated they have been developing their own ARA estimate based on historical average trends because they have not received information from TRANSCOM on time. TRANSCOM and Air Force officials agree that TRANSCOM—as the provider of transportation services—is in the best position to understand transportation workload demands. The Standards for Internal Control in the Federal Government state that management should use quality information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis to achieve the entity's objectives. Furthermore, management should use quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks and should design control activities, such as policies, procedures, techniques, and mechanisms as needed to enforce management's directives. In October 2017, Air Force and TRANSCOM officials told us they were working on a memorandum of understanding to improve the timing and communication of budgetary information from TRANSCOM to support the Air Force ARA budget request. Officials stated that the memorandum of understanding is expected to be completed by the end of fiscal year 2018. However, in May 2018, the draft memorandum that the Air Force provided for our review consisted of a 2-page template with a list of potential topics and no substantive details regarding formalizing processes. Without developing sufficient detail on the formal processes and subsequently finalizing the memorandum of understanding, the Air Force and TRANSCOM will not be able to reasonably assure that the timing and communication of budgetary information from TRANSCOM are sufficient to support the Air Force Operations and Maintenance ARA annual budget request. TRANSCOM Has a Rate-Setting Process for Airlift Services, but Producing Accurate Workload Forecasts Is Challenging TRANSCOM has a rate-setting process for airlift services, but producing accurate workload forecasts is challenging. Our analysis of TRANSCOM data showed that the airlift forecasting process produced increasingly inaccurate projections of actual workload. Producing accurate forecasts is challenging because TRANSCOM has not fully implemented (1) an effective process to gather workload projections from customers, (2) forecasting goals and metrics and the review of its performance, and (3) an action plan to improve workload forecasts. U.S. Transportation Command Has a Rate-Setting Process for Airlift Services TRANSCOM has a rate-setting process for airlift services that is generally established to be competitive with commercial airlift services, according to DOD guidance. Specifically, TRANSCOM operates five categories of airlift services, and, according to documents and TRANSCOM officials, the rate-setting process for each category is as follows: Channel Cargo rates apply to military air cargo along established routes. The rates for this category generally cover about 65 percent of the cost to provide airlift cargo services, and do not vary based on the type of aircraft used.
Rates are benchmarked against commercial prices based on the weight of cargo using the following step-by-step process. Initially, International Heavyweight Air Tender price data from the prior year are checked for commercial rates on various routes. If no data are available for some routes, data from the closest country are used to develop average country-to-country rates or a weighted average when there is more than one country-to-country combination. Once rates are developed, they are adjusted based on budget exhibits. The TRANSCOM Operations and Plans Directorate is responsible for Channel Cargo forecasts to inform rate-setting for this category of service. Channel Passenger rates apply when military and civilian passengers are flying on established routes. The rates are benchmarked against commercial prices, recover about 85 percent of costs, and do not vary based on the type of aircraft used. Channel passenger rate-setting guidance also uses a step-by-step process. General Services Administration city pairs are checked for comparable prices. If no General Services Administration rate is found, the Defense Travel System is checked. If the Defense Travel System does not have a rate, online travel websites are checked. If the online travel sites do not have a rate, then a prior standard rate per mile for that route is adjusted based on budget exhibits. Special Assignment Airlift Missions/Contingency rates apply for the use of full-plane charters performing and providing exclusive services for specific users. Rates are generally determined by the type of aircraft, and those rates recover about 91 percent of costs for military aircraft and 100 percent of costs for commercial aircraft. Flight hour rates for military aircraft, flight length (miles), and capacity used for commercial aircraft are considered in the rate determinations. The TRANSCOM Operations and Plans Directorate is responsible for Special Assignment Airlift Missions/Contingency workload forecasts to inform rate-setting for this category of service. Joint Exercise Transportation Program rates apply to airlift services in support of realistic operational joint training. Rates are generally set in the same manner as the rates for the Special Assignment Airlift Missions/Contingency category, except that the TRANSCOM Operations and Plans Directorate is responsible for workload forecasting for the Joint Exercise Transportation Program. Training rates apply to those activities used to conduct programmed flying training, which generally includes a required number of sorties, flying hours, and aircrew training to support readiness. Rates are set to recover 100 percent of the recorded costs because the Air Force is the sole customer for these missions, according to TRANSCOM and Air Force officials. Training rates are generally based on the type of aircraft and the cost per flight hour. According to TRANSCOM officials, the Air Mobility Command Air, Space and Information Operations Directorate is responsible for the flying hour model that determines requirements for this category of airlift services.
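The channel passenger benchmarking steps described above amount to a tiered fallback lookup. The sketch below illustrates that logic; the function name, data sources, and figures are placeholders of ours, not TRANSCOM systems or APIs, and the budget-exhibit adjustment is simplified to a callable.

```python
from typing import Callable

def benchmark_passenger_rate(
    route: str,
    gsa_city_pairs: dict,
    dts_rates: dict,
    online_fares: dict,
    prior_rate_per_mile: float,
    route_miles: float,
    budget_adjustment: Callable[[float], float],
) -> float:
    """Tiered lookup mirroring the channel passenger rate-setting steps:
    GSA city pairs first, then the Defense Travel System, then online
    travel sites, and finally a prior standard rate per mile adjusted
    based on budget exhibits."""
    for source in (gsa_city_pairs, dts_rates, online_fares):
        rate = source.get(route)
        if rate is not None:
            return rate
    # No benchmark found: fall back to the prior rate per mile,
    # adjusted per budget exhibits (modeled here as a callable).
    return budget_adjustment(prior_rate_per_mile * route_miles)

# Illustrative use: no GSA or DTS rate exists, so the online fare is used.
rate = benchmark_passenger_rate(
    route="JBA-RMS",
    gsa_city_pairs={}, dts_rates={}, online_fares={"JBA-RMS": 540.0},
    prior_rate_per_mile=0.12, route_miles=4150.0,
    budget_adjustment=lambda x: round(x * 1.03, 2),
)
```

A similar structure, with different data sources, would apply to the channel cargo benchmarking steps.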
Producing Accurate Workload Forecasts Is Challenging, and TRANSCOM Improvement Efforts Have Not Been Sustained TRANSCOM produces a forecast of its airlift workload to inform the development of the ARA budget request. According to TRANSCOM's guidance, workload forecasts are to be developed using future demand derived from a combination of statistical methods and necessary adjustments for expected operational conditions. The basic principles used for workload forecasting are generally the same for all five categories of airlift services. According to TRANSCOM officials, forecasting methods are applied with some variation. This practice is allowed under the forecasting instruction, depending on the category and on which TRANSCOM or Air Mobility Command entity is responsible for developing the forecast. For example, forecasts for the Joint Exercise Transportation Program and Training are affected more by requirements to support readiness and funding constraints. On the other hand, the basic forecasting processes for Channel Cargo, Channel Passenger, and Special Assignment Airlift Missions/Contingency are affected by the transportation needs of the military services and combatant commands and are generally based on historical workload. Based on our analysis, workload forecasts have been increasingly inaccurate for fiscal years 2007 through 2017. Specifically, we found that forecast inaccuracy (i.e., the variance between the forecast and the actual workload amounts aggregated across all five workload categories) averaged about 25 percent and was trending upward in absolute value for fiscal years 2007 through 2017, as shown in figure 4. In addition to the aggregate workload forecast being increasingly inaccurate, the accuracy of the workload forecasts across each of the five categories varies from year to year. For example, in fiscal year 2008, channel cargo actual workload was about 17 percent lower than the forecast, and Special Assignment Airlift Missions/Contingency actual workload was about 12 percent higher than the forecast; and in fiscal year 2016, Special Assignment Airlift Missions/Contingency actual workload was about 116 percent higher than the forecast and the Joint Exercise Transportation Program actual workload was about 45 percent lower than forecasts. For fiscal years 2007 through 2017, the workload categories with the largest absolute forecast inaccuracy include Special Assignment Airlift Missions/Contingency, Channel Cargo, and the Joint Exercise Transportation Program. Two of these categories (Special Assignment Airlift Missions/Contingency and Channel Cargo) also have the largest share of airlift services. However, all five workload categories had forecast inaccuracy of more than 15 percent in at least three of the eleven years we reviewed. The variance of forecasted workload from actual workload by airlift service category is presented in figure 5 below. Based on our analysis and discussions with TRANSCOM officials, TRANSCOM has not taken sustained actions to improve forecasting accuracy. Specifically, we found that TRANSCOM has not fully implemented (1) an effective process to collect projected airlift workload information from its customers (i.e., military services) to inform its forecasts, (2) metrics and goals for measuring and reviewing forecast accuracy, and (3) an action plan to improve workload forecasting. TRANSCOM has not implemented an effective process for collecting projected airlift workload information: TRANSCOM officials told us they use historic workload data to establish a baseline and perform statistical analysis to estimate averages and trends according to their instructions.
Next, forecasters use information from the military services and combatant commands that may affect each category of workload, if available, and adjust workload estimates as needed. However, according to TRANSCOM officials, personnel conducting forecasts have limited visibility over factors that may influence forecasts, such as demand for transportation services, due to the lack of information obtained from their customers (i.e., the military services and combatant commands). Attempts to collect information from the military services and combatant commands have been made on an ad hoc basis. For example, in April 2016 TRANSCOM's Commander solicited information from the military services' senior leadership regarding their future transportation requirements, including airlift needs. The message emphasized the importance of forecasting to inform budget requests and management decisions to improve operational efficiency. However, according to TRANSCOM officials, the Air Force—which is TRANSCOM's largest customer for airlift services—was the only military service that provided the requested information in response to the TRANSCOM Commander's one-time request. According to TRANSCOM officials, the other military services have not provided the requested information for workload projections because the services do not understand how they would benefit from providing the information and TRANSCOM's terminology and processes are not familiar to the services. As a result, TRANSCOM's ad hoc approach has not obtained quality information from its customers to use in forecasting workload. Standards for Internal Control in the Federal Government state that management should use quality information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis to achieve the entity's objectives. Furthermore, we found other defense organizations have provided a mechanism for customers to routinely communicate projected workload information. For example, the Defense Logistics Agency and its customers work together to evaluate historical demand data for spare parts and tailor forecast plans for those spare parts based on projected future usage. To this end, communications with customers are expected to be consistent and to use terminology shared in common with customers. Options are presented in a manner that is readily understood by customers in a format determined by customers' needs to encourage the most efficient and effective solutions available. TRANSCOM no longer uses forecast accuracy metrics and has not established forecast accuracy goals: In 2012, TRANSCOM developed a forecasting process, and, according to officials, started providing forecast performance metric briefings to TRANSCOM senior leadership on a quarterly basis in fiscal year 2014. TRANSCOM's overall forecast accuracy improved slightly in 2015. However, according to TRANSCOM officials, these forecast briefings were canceled after the first quarter of fiscal year 2016 because they were viewed as minimally useful for budgeting and were not used to position airlift capacity to meet operational needs. In addition, TRANSCOM officials stated that they no longer measure forecast performance. We found that overall forecast inaccuracy was higher for fiscal years 2016 and 2017 than in any other year we reviewed, as indicated above in figure 4.
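A forecast accuracy metric of the kind TRANSCOM's instruction requires can be straightforward to compute. The sketch below, using illustrative numbers of ours rather than TRANSCOM data, measures each category's percent variance of actual workload from the forecast and the average absolute inaccuracy across categories, consistent with how inaccuracy is described for figures 4 and 5.

```python
def percent_variance(actual: float, forecast: float) -> float:
    """Percent by which actual workload deviates from the forecast;
    positive means actual exceeded the forecast."""
    return (actual - forecast) / forecast * 100.0

# Illustrative (forecast, actual) pairs for the five airlift categories.
workload = {
    "Channel Cargo":            (100.0, 83.0),
    "Channel Passenger":        (100.0, 104.0),
    "SAAM/Contingency":         (100.0, 112.0),
    "Joint Exercise Transport": (100.0, 55.0),
    "Training":                 (100.0, 97.0),
}

variances = {k: percent_variance(a, f) for k, (f, a) in workload.items()}
mean_abs_inaccuracy = sum(abs(v) for v in variances.values()) / len(variances)

for category, v in variances.items():
    print(f"{category}: {v:+.1f}%")
print(f"average absolute inaccuracy: {mean_abs_inaccuracy:.1f}%")
```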
However, TRANSCOM’s January 2015 forecasting instruction requires that forecast accuracy metrics be developed to support management decisions and that forecast variance from actual workload be reviewed. Furthermore, the Standards for Internal Control in the Federal Government state that management should define objectives in specific and measurable terms to enable the design of internal control for related risks, establish activities to monitor performance measures and indicators, and assess performance against plans, goals, and objectives set by the entity. TRANSCOM does not have a corrective action plan for improving workload forecasts: TRANSCOM officials acknowledged that workload forecasting needs improvement and told us that TRANSCOM does not have an action plan to improve its forecasting processes to inform budgetary and operational decisions. In October 2013, TRANSCOM considered, but did not adopt, a process known as Sales and Operations Planning (S&OP), which is designed to help ensure that senior management has visibility over issues, including forecasting. We reported that the Army implemented this process in 2013 after Army officials concluded that they could leverage commercial best practices to improve logistics performance (see sidebar). We discussed the S&OP process with TRANSCOM officials, and they told us that the possibility of adapting the process to military logistics was not readily accepted at TRANSCOM because of organizational resistance to change. The Army experienced similar initial resistance, as discussed in our prior report. However, according to the Army, implementing S&OP resulted in a 50 percent reduction in forecast error, and a decision was made to deploy the S&OP process for use across all Army depots and arsenals by the end of fiscal year 2018. Adopting a corrective action plan, or an approach such as S&OP, could help TRANSCOM focus and improve its planning efforts, resulting in more accurate workload forecasting. Furthermore, according to TRANSCOM’s January 2015 forecasting instruction, opportunities to improve forecasts should be assessed. Additionally, Standards for Internal Control in the Federal Government state that management should complete and document corrective actions to remediate internal control deficiencies on a timely basis to achieve established objectives. Our prior work has also shown that organizations benefit from corrective action plans for improvement. TRANSCOM officials told us that producing accurate workload forecasts is challenging, and we agree that there are inherent difficulties in accurately forecasting airlift workload on an annual basis. However, our prior work on aviation forecasting has noted that while forecasting is inherently uncertain, managing the risk related to that uncertainty is essential to making informed decisions. Addressing the weaknesses we identified could improve forecasting, allowing for more effective financial planning and more efficient airlift operations. For example, TRANSCOM estimated needing an ARA amount of $772 million for fiscal year 2016. However, according to our analysis of TRANSCOM financial records, the TWCF did not require support from ARA funds because actual revenue from airlift services exceeded costs by $148 million in fiscal year 2016. Inaccurate forecasts can lead to unreliable budget requests and hinder the effective and efficient operational planning necessary to provide customers with the service they need.
For example, according to a 2017 Air Force Audit Agency report, flying channel passenger flights at 85 percent of capacity could result in estimated savings of about $30 million over a 6-year period. Our past work also shows that underutilization of cargo airlift capacity is a longstanding issue. Improving forecast accuracy would help TRANSCOM manage airlift services more efficiently, make better use of budgetary resources to maximize airlift capacity, and produce a more accurate ARA budget estimate. In response to our findings and discussions, TRANSCOM officials stated that they plan to begin reviewing TRANSCOM’s workload forecasting process and determine a path ahead in June 2018. However, the outcome and time frames for this review are uncertain. Furthermore, TRANSCOM leadership still must approve and fully implement changes to forecasting processes, metrics, and goals. Unless TRANSCOM fully implements an effective process to obtain projected workload requirements from its customers on a routine basis, uses forecast accuracy metrics and establishes goals, and develops an action plan, airlift workload forecasting will not improve. We acknowledge that entirely eliminating volatility in the ARA budget request is unlikely, given that there will be unexpected and unpredictable workload adjustments due to changes in the global security environment or natural disasters. We also understand that improving workload forecasts through the use of goals, metrics, and an action plan for improvement will not eliminate the inherent volatility associated with the ARA budget request amount. However, these improvements would allow TRANSCOM to better manage the inherent risks associated with the accuracy of forecasts and improve the ARA estimates used to inform future Air Force Operations and Maintenance budget requests. Conclusions Each year DOD spends billions of dollars on airlift services flying personnel and cargo worldwide. The clarity of budget estimates and the accuracy of forecasts for airlift services are essential for Congress and DOD to make informed decisions. Accordingly, Congress would benefit from detailed ARA information in the Air Force’s budget requests, and that information would be improved if TRANSCOM provided the Air Force with timely information on the annual ARA estimate. Additionally, TRANSCOM continues to face challenges in forecasting its workload, which is a key factor in estimating the ARA. Until TRANSCOM establishes a process to collect projected workload information from its customers, uses forecast accuracy metrics and goals to monitor its performance, and implements a corrective action plan, forecast accuracy and ARA estimates are not likely to improve. Recommendations for Executive Action We are making a total of five recommendations to DOD. The Secretary of Defense should ensure that the Undersecretary of Defense (Comptroller)/Chief Financial Officer establishes requirements to present details related to the ARA in the annual Air Force Operations and Maintenance budget request, including (1) amounts for the next fiscal year for which estimates are submitted, (2) revisions to the amounts for the fiscal year in progress, and (3) the actual amounts allotted for the last completed fiscal year. (Recommendation 1) The Secretary of Defense should ensure that the Secretary of the Air Force and the Commander, U.S.
Transportation Command, in collaboration, develop sufficient detail on the formal processes and finalize their memorandum of understanding to improve the timing and communication of budgetary information to support the Air Force Operations and Maintenance Airlift Readiness Account annual budget request. (Recommendation 2) The Secretary of Defense should ensure that the Commander, U.S. Transportation Command, fully implements a process to obtain projected airlift workload from the military services and combatant commanders on a routine basis to improve the accuracy of its workload forecasts. (Recommendation 3) The Secretary of Defense should ensure that the Commander, U.S. Transportation Command, uses forecast performance metrics and establishes forecast accuracy goals for the airlift workload. (Recommendation 4) The Secretary of Defense should ensure that the Commander, U.S. Transportation Command, develops a corrective action plan to improve the accuracy of its workload forecasting. (Recommendation 5) Agency Comments We provided a draft of this report to DOD for review and comment. In written comments, which are reprinted in appendix IV, DOD concurred with our recommendations and stated that it plans to take specific actions in response to them. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Diana Maurer at (202) 512-9627 or maurerd@gao.gov, or Asif Khan at (202) 512-9869 or khana@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Scope and Methodology To determine the extent to which ARA funds were requested, allotted, and expended by the Air Force from fiscal years 2007 through 2017, we analyzed Air Force budget request documents and underlying support documentation. We also analyzed information from the Air Force’s Automated Budget Interactive Data Environment Systems to determine the appropriated amounts allotted for ARA activities. Furthermore, we analyzed summary-level documents detailing expenditures from the Air Force and TRANSCOM for fiscal years 2007 through 2017 to establish trends. Moreover, we reviewed TRANSCOM’s procedures and supporting documentation for billing the Air Force for payment of the ARA. Lastly, we interviewed DOD, Air Force, and TRANSCOM officials to gain an understanding of the general reasons for year-to-year variances and for differences between requested and expended amounts. To determine the extent to which the Air Force provided ARA information in its budget request to Congress and informed its request with information from TRANSCOM, we analyzed Air Force Operations and Maintenance budget justification documents to determine the type of ARA information (i.e., total budget request amount, changes from year to year, and other information) provided in the fiscal years 2007 through 2017 President’s budget submissions. To understand the differences, if any, between the ARA information provided from year to year, we interviewed Air Force budget officials to obtain an explanation for changes in the reported information.
In addition, we analyzed Air Force Operations and Maintenance budget justification documents and Transportation Working Capital Fund budget documents to determine whether the ARA was based on available information. We also discussed with Air Force and TRANSCOM officials future plans to change their procedures and the information considered in the development of the ARA estimate. Further, we compared the Air Force and TRANSCOM processes and procedures against Standards for Internal Control in the Federal Government, specifically standards regarding internal and external reporting and mechanisms to enforce management directives. To determine the extent to which TRANSCOM has implemented a process to set rates for airlift services and use workload forecasts to estimate the annual ARA funding request, we analyzed the processes TRANSCOM used to set the rates it charges customers in various airlift workload categories for fiscal years 2007 through 2017. We also reviewed forecasting procedures and analyzed supporting documents provided by TRANSCOM; interviewed TRANSCOM officials to gain an understanding of how they implement these rate-setting and forecasting procedures; and analyzed forecast and actual workload data provided by TRANSCOM for the same time frame. We compared TRANSCOM’s processes against rate-setting and forecasting guidance and reviewed whether TRANSCOM used quality information to establish workload projections, established any performance measures and goals for forecasting its workload, and developed any efforts to improve its forecasting of workload. In addition, we interviewed TRANSCOM and Air Mobility Command officials and reviewed supporting documentation to gain an understanding of the challenges to producing accurate workload forecasts and their relationship with the rate-setting and budgeting process. We obtained revenue, cost, workload, and ARA data in this report from budget documents, accounting reports, and Air Force and TRANSCOM records for fiscal years 2007 through 2017. We assessed the reliability of the data by (1) interviewing Air Force and TRANSCOM officials to gain an understanding of the processes used to produce the cash, revenue, cost, workload, and ARA data; (2) reviewing prior work to determine whether there were reported concerns with TRANSCOM’s data; (3) comparing cash balances, revenue, costs, and workload data provided by TRANSCOM to the same data presented in the Air Force Working Capital Fund budgets for fiscal years 2007 through 2017; and (4) comparing ARA data to Air Force and TRANSCOM supporting documentation, or to Air Force Operations and Maintenance budget execution reports, to support ARA reported amounts for fiscal years 2007 through 2017. On the basis of these procedures, we concluded that these data were sufficiently reliable for the purposes of this report. To address all of our objectives, we conducted a site visit to U.S. Transportation Command Headquarters and Air Mobility Command at Scott Air Force Base, Illinois, and interviewed officials with the Office of the Undersecretary of Defense (Comptroller)/Chief Financial Officer, the Assistant Secretary of the Air Force (Financial Management and Comptroller), the U.S. Transportation Command, and the Air Mobility Command. We conducted this performance audit from August 2017 through September 2018 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Air Force Working Capital Fund and Transportation Working Capital Fund Monthly Cash Balances for Fiscal Years 2007-2017 The Air Force Working Capital Fund maintained a positive monthly cash balance throughout fiscal years 2007 through 2017. The Transportation Working Capital Fund (TWCF) is a part of the Air Force Working Capital Fund for cash management purposes. DOD working capital funds are authorized to charge amounts necessary to recover the full costs of goods and services provided. However, the TWCF is authorized to establish airlift customer rates that are competitive with commercial air carriers. Due to mobilization requirements, the resulting revenue does not always cover the full costs of airlift operations provided through the TWCF. To the extent that customer revenue is insufficient to support the costs of maintaining airlift capability, the Air Force shall provide appropriated funds. The Air Force Working Capital Fund and TWCF monthly cash balances are depicted in figure 6 below. Appendix III: Transportation Working Capital Fund Costs and Revenues for Airlift Services Total costs for airlift services for fiscal years 2007 through 2017 were less than the revenue collected for airlift services. Revenue came from rates charged to customers for services performed (workload-related revenue), the Airlift Readiness Account (ARA), and other revenue sources. For seven of the eleven years we reviewed, revenues exceeded costs, and for four of the eleven years, costs exceeded revenue. For the eleven-year period we reviewed, workload-related revenue ($73 billion) was not sufficient to pay for the full costs of airlift services. The remaining revenue included $2 billion from the ARA and $7 billion from other revenue sources. Appendix IV: Comments from the Department of Defense Appendix V: GAO Contacts and Staff Acknowledgments GAO Contacts Diana Maurer, (202) 512-9627 or maurerd@gao.gov, or Asif A. Khan, (202) 512-9869 or khana@gao.gov. Staff Acknowledgments In addition to the contacts named above, John Bumgarner (Assistant Director), Doris Yanger (Assistant Director), John E. “Jet” Trubey (Analyst-in-Charge), Pedro Almoguera, John Craig, Jason Kirwan, Amie Lesser, Felicia Lopez, Keith McDaniel, Clarice Ransom, and Mike Silver made key contributions to this report.
Why GAO Did This Study TRANSCOM reported spending about $81 billion flying personnel and cargo worldwide in fiscal years 2007-2017. TRANSCOM manages the Transportation Working Capital Fund (TWCF) to provide air, land, and sea transportation for the Department of Defense (DOD). TRANSCOM sets some rates it charges below costs to be competitive with commercial air service providers. The Air Force generally pays for expenses not covered by TWCF rates through the ARA. A House report accompanying the National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to review the ARA and the TWCF. GAO's report discusses the extent to which (1) ARA funds were requested, allotted, and expended for airlift activities; (2) the Air Force provided ARA information in its budget requests and informed its requests with information from TRANSCOM; and (3) TRANSCOM has implemented a rate-setting process for airlift services and uses workload forecasts to estimate the annual ARA funding request. GAO analyzed ARA funds and costs and revenues for airlift services for fiscal years 2007-2017; interviewed officials about the ARA budget preparation process; and analyzed TRANSCOM rate-setting and forecasting guidance and results. What GAO Found For fiscal years 2007 through 2017, the Air Force requested $2.8 billion from Congress for Airlift Readiness Account (ARA) requirements as part of its annual Operations and Maintenance appropriation. The Air Force allotted $2.8 billion (i.e., directed the use of the appropriated funds) and expended $2.4 billion of these funds for the ARA. U.S. Transportation Command (TRANSCOM) uses ARA funds to support airlift operations. Specifically, the Air Force requests ARA funds in its annual Operations and Maintenance budget request and subsequently provides these funds to TRANSCOM to assist in paying for airlift services (see figure). Amounts requested, allotted, and expended varied from year to year, in some cases by hundreds of millions of dollars, in part due to changes in the amount of airlift services provided by TRANSCOM. The Air Force has not included specific ARA information in its budget requests since fiscal year 2010. For fiscal years 2007 through 2009, Air Force budget requests explicitly stated ARA amounts. Air Force officials stated that their budget presentation was changed to reduce the overall number of budget line items. In addition, TRANSCOM has not been providing cost estimates in time to support Air Force budget preparations. Specifically, TRANSCOM has been providing this information 2 months later than the Air Force needs it to support budget deliberations. The Air Force and TRANSCOM have taken some initial steps to address this issue, but these efforts lack substantive details regarding formalizing the processes necessary to ensure timely information. Until the Air Force and TRANSCOM resolve this issue, Congress will not have sufficient and complete information to inform its decisions on appropriating funds for the ARA. TRANSCOM has a rate-setting process but faces challenges producing accurate workload forecasts. To provide information to its customers during the annual budget development process, TRANSCOM sets airlift rates in advance of the fiscal year of expenditure. Workload forecasts influence the rate-setting process. Inaccurate forecasts can lead to unreliable budget requests and hinder effective and efficient operational planning.
GAO found that forecast inaccuracy (i.e., the variance between the forecast and the actual workload) averaged 25 percent, with forecasts becoming increasingly inaccurate since fiscal year 2007. GAO found that TRANSCOM has several workload forecasting challenges. Specifically, TRANSCOM lacks an effective process to gather workload projections from customers. It also no longer uses forecast accuracy metrics and has not established forecast accuracy goals to monitor its performance. Furthermore, TRANSCOM does not have an action plan to improve its increasingly inaccurate workload forecasts. Taking steps to address these issues would enable TRANSCOM to improve the accuracy of its workload forecasts. What GAO Recommends GAO is making five recommendations to DOD, including improving the clarity and completeness of budget estimates and taking steps to improve the accuracy of airlift workload forecasts. DOD concurs with GAO's recommendations.
Background During a disease outbreak, including the Zika virus outbreak, HHS is the lead federal agency for public health and medical response, and it leverages national public health and medical resources to prepare for and respond to the outbreak. Zika Virus Transmission and Prevention The Zika virus is primarily transmitted to humans by infected mosquitoes, but it can also be transmitted from mother to child during pregnancy or around the time of birth, or from person to person through sexual contact or blood transfusion. According to CDC, once an individual has been infected with the Zika virus, that person is likely to be protected from future infections. The Aedes aegypti mosquito is reportedly the primary mosquito spreading the Zika virus, while the Aedes albopictus mosquito, which shares many of the same traits as Aedes aegypti, can also spread the virus. Local transmission of the virus has occurred in American Samoa, Florida, Puerto Rico, Texas, and the U.S. Virgin Islands. Travel-associated cases of Zika virus infection have been reported in every state, with the largest numbers of cases reported in California, Florida, New York, and Texas. There is no vaccine to prevent the Zika virus, so CDC guidance recommends preventing the spread of the virus by protecting against mosquito bites, such as by wearing protective clothing, using insect repellent, and staying in places with air conditioning and window and door screens to keep mosquitoes outside, among other actions. Mosquito control in the United States is implemented and overseen at the state and local levels by entities such as mosquito control districts and health agencies. Federal agencies support such control entities with funding and subject matter experts, and may regulate some control methods, such as pesticides. Zika Funding Prior to September 2016 In April 2016, the Office of Management and Budget and the Secretary of Health and Human Services announced that they had identified $589 million—$510 million of it from existing Ebola virus disease resources within HHS, the Department of State, and the U.S. Agency for International Development—that could quickly be redirected and spent on immediate efforts to control and respond to the spread of the Zika virus. According to HHS, out of the $589 million, $374 million was redirected to domestic Zika virus control activities. HHS reports that almost all of this funding ($354 million) was distributed to three HHS agencies, as follows: CDC received $222 million for various activities, including field staff, state response teams, Zika virus testing, tracking of pregnant women who were infected with the Zika virus, and grants for mosquito control and other Zika prevention activities; BARDA received $85 million for private sector development of Zika vaccines, treatments, technologies to protect the blood supply, and other countermeasures; and NIH received $47 million for Zika medical countermeasure development, including clinical trials on the leading Zika vaccine candidate. Additionally, according to HHS officials, in August 2016, the Secretary of Health and Human Services notified Congress of the department's intent to redirect an additional $81 million in unobligated HHS funds for Zika vaccine development activities. Of this amount, $34 million was drawn from accounts at NIH and $47 million was drawn from funds transferred from other HHS agencies and reprogrammed from within PHSSEF.
From these redirected funds, $34 million (i.e., the amount drawn from other NIH accounts) was to be used by NIH to continue clinical trials on its lead Zika vaccine candidate. The remaining $47 million was to be used by BARDA for continued private sector Zika vaccine development. September 2016 Zika Supplemental Funding In September 2016, Congress appropriated $932 million to HHS and its agencies in the Zika Response and Preparedness Act. Of that amount, $394 million was appropriated directly to CDC and $152 million was appropriated directly to NIH. The remainder was appropriated to HHS's PHSSEF, from which HHS allocated $245 million to BARDA within the Office of the Assistant Secretary for Preparedness and Response, $75 million to CMS, and $66 million to HRSA. (See fig. 1.) The Zika supplemental funding remained available for obligation until September 30, 2017, for the following purposes: CDC: to prevent, prepare for, and respond to the Zika virus, health conditions related to the virus, and other vector-borne diseases, domestically and internationally. NIH: for research on the virology, natural history, and pathogenesis of the Zika virus infection, and for preclinical and clinical development of vaccines and other medical countermeasures for the Zika virus and other vector-borne diseases, domestically and internationally. PHSSEF: for various activities, including to prevent, prepare for, and respond to the Zika virus, health conditions related to the virus, and other vector-borne diseases, domestically and internationally; and to develop necessary countermeasures and vaccines, including the development and purchase of vaccines, therapeutics, diagnostics, and necessary medical supplies, and administrative activities. BARDA: HHS allocated funding to BARDA to support further development of Zika vaccine candidates, diagnostics, and pathogen reduction technologies initiated in fiscal year 2016 to advance these projects toward licensure or approval by the Food and Drug Administration. CMS: HHS allocated funding to CMS for expenses to support states, territories, tribes, or tribal organizations with active or local transmission cases of the Zika virus, as confirmed by CDC. The funding was allocated to reimburse the costs of health care for health conditions related to the Zika virus, other than costs covered by private health insurance; not less than $60 million of this funding was for territories with the highest rates of Zika virus transmission. HRSA: HHS allocated $20 million for projects of regional and national significance in Puerto Rico and other U.S. territories, $40 million to expand the delivery of primary health services in Puerto Rico and the other territories, and $6 million to assign National Health Service Corps members to Puerto Rico and the other territories to provide primary health services in areas affected by the Zika virus or other vector-borne diseases through the National Health Service Corps Loan Repayment Program. Agencies have until September 30, 2022, to disburse the Zika supplemental funding appropriated by the Zika Response and Preparedness Act.
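As a consistency check on the appropriation figures above, the following minimal sketch, in Python, verifies the arithmetic using only the amounts stated in this report; it assumes the three PHSSEF allocations listed account for the entire PHSSEF remainder.

```python
# Arithmetic check of the September 2016 appropriation figures stated above
# (amounts in millions of dollars; assumes the three PHSSEF allocations
# listed in this report account for the full PHSSEF remainder).
appropriation = 932

direct = {"CDC": 394, "NIH": 152}
phssef_allocations = {"BARDA": 245, "CMS": 75, "HRSA": 66}

# The remainder after the direct appropriations went to HHS's PHSSEF.
phssef_remainder = appropriation - sum(direct.values())
print(f"PHSSEF remainder: ${phssef_remainder} million")  # $386 million

# The stated PHSSEF allocations sum to that remainder.
assert sum(phssef_allocations.values()) == phssef_remainder
print(f"PHSSEF allocations: ${sum(phssef_allocations.values())} million")
```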
HHS Agencies Have Obligated Nearly All of the Zika Supplemental Funding; Disbursements Are Ongoing Agencies Obligated Nearly All of Their Zika Supplemental Funding as of September 30, 2017, Primarily Through Cooperative Agreements, Grants, and Contracts We found that as of September 30, 2017—the end of the Zika supplemental appropriation's period of availability—nearly all Zika supplemental funding had been obligated, primarily through cooperative agreements, grants, and contracts. BARDA obligated 100 percent of its Zika supplemental funding, while CDC, CMS, HRSA, and NIH obligated over 99 percent of their funding. (See table 1.) Three of the five agencies had obligated over half of their Zika supplemental funding by January 31, 2017, 4 months after enactment of the appropriation. For example, according to CDC officials, using cooperative agreements with application processes familiar to the awardees enabled the agency to obligate its funding soon after receiving the appropriation. Some agencies began obligating later in the one-year obligation time frame, depending on their approach to awarding the Zika supplemental funding. For example, CMS withheld a portion of its supplemental funds in the event additional awardees became eligible for funding within the obligation time frame—eligibility included having active or local transmission of the Zika virus. Agency officials told us that they used cooperative agreements, grants, and contracts to award Zika supplemental funding to existing and new awardees. The agencies also used other mechanisms to obligate the Zika supplemental funding, such as interagency agreements, intramural research awards, and funding used within the agency for travel and other expenses. According to officials, agencies used these mechanisms to award Zika supplemental funding in the following ways. BARDA executed new contracts and modified existing contracts through the agency's typical contracting process, officials said, for research in the areas of Zika clinical diagnostics and vaccine development. BARDA did not use any Zika supplemental funding to support internal administrative or personnel costs. (See app. I for the contracts BARDA awarded with its Zika supplemental funding.) CDC generally obligated Zika supplemental funding to current awardees through existing cooperative agreements, according to agency officials. (See app. II through VII for the cooperative agreements CDC used to award Zika supplemental funding to existing awardees.) CDC also awarded funding through contracts and interagency agreements, and obligated about $24 million for internal CDC expenses, such as salaries and benefits, travel, supplies, and equipment. (See app. VIII for the contracts and interagency agreements CDC awarded with its Zika supplemental funding.) CMS created a new program—the Zika Health Care Services Program—to award its Zika supplemental funding through cooperative agreements, according to agency officials. The purpose of the 3-year program is to support prevention activities and treatment services for women (including pregnant women), children, and men adversely or potentially affected by the Zika virus. CMS awarded funding through the Zika Health Care Services Program to those states, territories, tribes, or tribal organizations with active or local transmission of the Zika virus, as confirmed by CDC. CMS awarded funding to the health departments in American Samoa, Florida, Puerto Rico, and the U.S. Virgin Islands in January 2017.
In June 2017, CMS awarded funding to the health department in Texas, the only new state or territory with local transmission of the Zika virus. In both CMS award rounds, only states and territories received awards, because they were the only areas with active or local transmission of the Zika virus. CMS retained about $3.6 million of the Zika supplemental funding to use for administrative support services, as well as for travel for monitoring and oversight. (See app. IX for the awards CMS made with its Zika supplemental funding.) HRSA generally obligated Zika supplemental funding through grants to existing awardees, according to agency officials. HRSA did not retain any Zika supplemental funding for internal activities. (See app. X for the grants HRSA awarded.) NIH used grants and contracts to award its Zika supplemental funding to new and existing awardees. NIH also used about $95 million of the Zika supplemental funding for internal activities—studies conducted by NIH researchers. According to NIH officials, the unusual aspects of the Zika virus as an arboviral infectious disease led NIH to focus on vaccines as a priority, along with the development of diagnostics, therapeutics, vector control, and surveillance. (See app. XI for NIH's Zika supplemental awards.) For more information on the funding provided by CDC, CMS, and HRSA—the only agencies that provided funding for states and territories—and the number of reported Zika cases by state or territory, see an interactive graphic at https://www.gao.gov/products/GAO-18-389. Officials from all five agencies cited coordination initiatives, including regular interagency or organizational teleconferences and participation in working groups. According to CMS officials, during the Zika virus response, CDC, CMS, HRSA, and other federal partners held interagency Zika coordination calls to discuss ongoing developments related to the Zika virus. Additionally, because CMS and HRSA were both awarding funding for perinatal health care services, officials said they collaborated to ensure that the activities available through the CMS grants complemented those available through HRSA's Special Projects of Regional and National Significance. In addition, HRSA officials reported conducting joint site visits with CDC and CMS, as well as streamlining reporting requirements to reduce the grantee reporting burden. Furthermore, BARDA officials said that they awarded and administered a contract for CDC on the development of a vector control product. CDC provided the funding and topical subject matter expertise for the award, and BARDA provided management services for the contract because of BARDA's experience with these types of contracts. BARDA and NIH officials also reported collaborating on vaccine development. BARDA officials explained that while the vaccine development process requires that different agencies support multiple vaccine development candidates, the two agencies coordinated to avoid redundancy. Agencies Had Disbursed About 21 Percent of the Zika Supplemental Funding as of December 31, 2017 We found that as of December 31, 2017, the HHS agencies had disbursed about 21 percent (approximately $195.5 million of $932 million) of the Zika supplemental funding. According to agency officials and selected awardees we spoke with, various factors can affect the disbursement of funding after obligation.
These factors include the time to draw down funding from the federal agencies, allowances for program implementation and a planning period, and awardees' internal administrative processes and unique characteristics, as described below. Drawdown procedures. CDC officials said that awardees draw down federal funding on their individual schedules, based on how they manage their federal funding. Some awardees draw down on a daily basis, as needed, while others draw down on a biweekly or monthly basis. Additionally, drawdowns for personnel costs coincide with payroll schedules, which could be biweekly or monthly. For example, in the case of monthly payroll, two awardees told us that the federal funding for a particular month's expenses would be drawn down the following month. Furthermore, selected awardees we spoke with said that they draw down federal funding after they have incurred an expense, such as when they receive an invoice. For example, Los Angeles County officials reported that in order to draw down the funds for the organization that is responsible for servicing their vector control activities, they have to first receive an invoice from the organization, which the county pays with its own funds. Only then can the county draw down the federal funding. This process usually results in at least a 3-month period between receiving the invoice and drawing down the federal funding, officials said. Program implementation and planning period. According to CMS officials, the agency awarded funding to health departments in American Samoa, Florida, Puerto Rico, Texas, and the U.S. Virgin Islands from the Zika Health Care Services Program, which was a new collaboration between CMS and these specific awardees. The officials said that the steps awardees needed to take to stand up new programs—such as budget review and approval processes, selection of key personnel to administer the grant, grant activities related to contracting, and hiring and procurement—can delay start-up and implementation of the grant programs. Additionally, CMS gave awardees in the Zika Health Care Services Program a 3-month planning period after they received their notices of award to amend their activities. For example, Texas officials reported that they used the 3-month planning period to work on executing contracts with the local health departments in three counties bordering Mexico. Texas officials explained that collaborating with the local health departments entailed determining where the funds would provide the greatest benefit, because the award itself was not large enough to cover all of the costs of direct health care services associated with the Zika virus. Awardees' processes and characteristics. Local administrative processes for spending federal supplemental funds can result in varied disbursement time frames. For example, California received Zika supplemental funding for an award that required an amendment to an administrative contract, which state officials said takes about 7 to 8 months for internal state approval. Additionally, certain awardees' characteristics affect disbursements. For example, Houston officials said the city was eligible for and was directly awarded a CDC cooperative agreement, but because it does not conduct vector control activities itself, the city had to negotiate a contract with the surrounding county to conduct these activities, which added time before it could begin disbursements. Officials also noted that awardees' personnel hiring issues can affect disbursement time frames.
For example, CMS officials said that some territories experienced delays in carrying out activities due to provider shortages, particularly among specialists needed to care for children with developmental delays and birth defects caused by the Zika virus. CMS officials noted that island jurisdictions, such as the U.S. territories, can find hiring more difficult due to a shortage of health care professionals available within the territory, requiring individuals to be recruited from outside the territory, which adds time to the process and raises costs. In addition, Florida officials in Miami-Dade County reported that the surge positions needed during the Zika response were challenging to fill, noting that it was particularly difficult to find phlebotomists and nurses because they were in high demand. Standard vaccine development processes also influenced the rate of disbursement. Due to the long duration of the vaccine development process, BARDA officials said, disbursements to certain awardees have occurred at varying intervals. For example, some contract invoices are received on a monthly basis, or twice a month if the company is a small business. The invoices are then reviewed and, if deemed acceptable, processed for payment. The 2017 hurricane season may have affected certain awardees' use of their Zika supplemental funding, prompting agencies to approve various types of short-term relief from administrative, financial management, and audit requirements for awardees affected by the hurricanes. Three agencies—CDC, CMS, and HRSA—awarded Zika supplemental funding to areas affected by hurricanes Harvey, Irma, and Maria in 2017: Florida, Puerto Rico, Texas, and the U.S. Virgin Islands. CDC officials told us, for example, that because of the hurricanes they granted extensions, at the awardees' request, for submitting financial and progress reports and for continuing activities. Similarly, CMS offered hurricane-affected awardees of the Zika Health Care Services Program the option to extend the deadline for deliverables, if necessary. CMS officials told us that grant activities had been affected by the hurricanes and that all grantees had communicated the intent to fully resume activities as soon as they were able to do so. Due to the 3-year project period for grantees, CMS officials said that the affected entities can still accomplish programmatic responsibilities, even if there is a temporary halt in project activities. Furthermore, HRSA officials said that they provided Puerto Rico and the U.S. Virgin Islands with extensions on required program, financial, and audit reports. Selected Awardees Undertook Multiple Activities with Zika Supplemental Funding, and Had Varying Experiences Applying for and Managing Funds Selected Awardees Used Zika Supplemental Funding for Activities Including Surveillance, Vector Control, Laboratory Capacity, and Health Care Services Selected awardees we spoke with used Zika supplemental funding for a variety of activities. Collectively, the activities included four primary types: medical surveillance, vector control, laboratory capacity building, and health care services, as described below. Medical surveillance activities include identifying and reporting Zika virus disease cases to CDC, as well as reporting Zika virus infections in pregnant women and infants to the U.S. Zika Pregnancy Registry.
Vector control activities include detecting and monitoring Aedes aegypti and Aedes albopictus mosquito distribution, controlling mosquitoes, and monitoring and managing insecticide resistance. Laboratory capacity building activities include developing laboratory capacity to perform Zika virus testing. Health care service activities for those selected awardees that received funding from CMS (Florida and Texas) included increasing access to contraceptive services for men and women; increasing access to and reducing barriers to diagnostic testing, screening, and counseling for pregnant women and newborns; and increasing access to appropriate specialized health care services for pregnant women, children born to mothers with maternal Zika virus infection, and their families. Table 2 provides examples of the types of activities funded by the selected awardees we interviewed. This table does not include a comprehensive list of all of the awardees' Zika activities—see appendixes II through VI, and appendix IX, for more information on the Zika supplemental funding CDC and CMS awarded to states, territories, and local jurisdictions. Of the awardees we spoke with, Florida and Texas were the only states that had experienced local mosquito-borne transmission of the Zika virus. Other selected awardees—which included Arizona, Los Angeles County, and Louisiana—were primarily responding to travel-related cases of Zika virus disease. The following are additional examples of activities funded using Zika supplemental funding. For more information on the types of activities authorized under each award, see appendixes II-VI and IX. Florida. Florida, which has a centralized health department with county-based offices, used Zika supplemental funding for laboratory capacity and vector control activities, among others. According to state officials, funding for state-run laboratories was used for purchasing materials, such as those used for testing urine for the Zika virus, and for funding staff located in counties to assist with handling Zika samples and testing, data entry, and result reporting to surveillance networks. Additionally, Florida used Zika supplemental funding for local vector control activities. For example, Miami-Dade County officials said that they purchased mosquito traps and removed mosquito breeding grounds, including plants, tires, and other objects that can hold standing water. (See fig. 2 for examples of the mosquito control activities in Miami-Dade County.) Through CMS's Zika Health Care Services Program, Florida received funding for, among other things, two part-time advanced registered nurse practitioners to provide consultation and technical assistance in family planning clinics and to assist in the prescribing and management of various birth control methods. Florida also funded a health educator for Zika prevention and response duties, which included assisting local health care organizations in the development of educational programming to ensure that health care services are provided in accordance with CDC guidelines. The health educator's duties also included ensuring that pregnant women with the Zika virus and infants with congenital Zika infection are referred to proper care and other available programs and resources. Texas.
Texas officials said that the state used a CDC award to rapidly identify cases and conduct data analysis of Zika-related birth defects, to enhance surveillance of Zika virus-related birth defects by improving the Texas Birth Defects Registry database, and to facilitate remote access to electronic records. Texas also disseminated prevention materials and interviewed mothers of children with Zika-related birth defects about their experience in dealing with the health system in order to help identify developmental outcomes of the children. Texas intends to use its CMS Zika Health Care Services Program funding—awarded on June 30, 2017—to increase clients' access to contraceptives; to provide care management, including counseling on Zika virus testing for pregnant women and their families; and to provide counseling to refer clients for services and supports. State officials provided the following information on some of the activities intended for the program. 1. Increasing clients' access to contraceptives: Community health workers and case management staff will assist clinic providers with informing women and their partners about contraceptive availability and about the potential Zika virus risks during pregnancy. They will also work with the women to determine what messages work best with their partners regarding contraceptives. 2. Care management that includes pre- and post-test counseling on Zika virus testing for pregnant women and their families: Officials said that this activity is important because the CDC testing algorithm is complex, the results from various tests can be confusing, and there can be false positives from the tests. Generally, doctors do not have the time to go through the complexities of these issues with clients, such as how to understand the laboratory tests and results. 3. Counseling to refer clients for services and supports: This can include counseling about various types of resources to support clients before delivery, after delivery, and during the infant's first year of life. Arizona. According to state officials, Zika supplemental funding was used to create an action plan with counties and to increase the state's ability to raise public awareness about the threat of the Zika virus, its transmission routes, and prevention measures. Officials stated that Arizona's border with Mexico makes communicating about Zika more complex, because individuals frequently cross the border for a variety of reasons, including work, school, and family visits, and do not necessarily consider themselves to be travelers. Additionally, the state used funding to increase the amount of personal protective equipment for the vector surveillance staff and to set up vector control contracts that could be accessed if the Zika virus spread locally and vector control could not be handled at the local level. However, this contracting mechanism was not used, because there was no local transmission of the Zika virus in Arizona. According to state officials, Arizona plans to ask for an extension to use the funding in the next mosquito season. The state health department also sponsored training on mosquito identification. Los Angeles County. County officials said that some funding was used to support personnel involved with Zika surveillance, testing, and case management.
This included the detection of cases—individuals diagnosed with Zika infection—and also the dissemination of information to Los Angeles County's Maternal and Child Health group, which follows pregnant women through delivery and then transfers the cases to the county's Children's Medical Services group. For example, according to officials, once a case is identified, information is shared with the relevant vector control district about the location of the case, and the vector control district can then conduct inspection and abatement activities to reduce the risk of a local outbreak. Los Angeles County officials found that this process takes about 1 week from finding out about a case to completion of inspection and abatement: 1 day to get information to the vector control district and 1 to 5 days to complete inspection and abatement. The funding was also used to provide funds to the vector control districts to augment Aedes mosquito detection efforts and support outreach activities, according to county officials. Louisiana. Louisiana officials said they used a CDC award, in part, to provide equipment and mileage reimbursement for nurses, who served as clinical liaisons between the birth defects surveillance program and hospitals and physicians statewide, to help enable rapid surveillance activities. Awardees also funded other activities, such as outreach campaigns. See figure 3 for examples of outreach funded with Zika supplemental funding. Selected Awardees Had Mixed Experiences Applying for and Managing the Zika Supplemental Funding While a majority of the 12 selected awardees we spoke with reported positive experiences with the process of applying for and managing the Zika supplemental funding, some awardees cited aspects of the process that were challenging. The awardees we spoke with received much of their supplemental funding from CDC and noted that the process went well: there was good communication with CDC officials; CDC's Epidemiology and Laboratory Capacity for Infectious Diseases cooperative agreement process to apply for Zika supplemental funds was more streamlined than the regular application process; and awardees said they were familiar with the mechanisms, which helped them navigate the process. Awardees we spoke to also cited some challenges in applying for and managing the Zika supplemental funding. These awardees noted that varying time frames across multiple awards, and restrictions on the activities authorized under the awards, added administrative burdens that officials had to manage while responding to the outbreak. Florida officials said that the state received funding from different federal agencies, from different cooperative agreement awards, with different deadlines and different rules on what the funding could be used for. For example, CDC distributed Zika supplemental funds to states and certain localities and territories through five cooperative agreements—some of which had multiple application rounds. Florida officials said that they had to track funding separately and identify the activities that could be funded under each award—administrative requirements that were burdensome during an emergency response. Figure 4 presents the period of time Florida had to use the Zika supplemental funding from multiple awards received from CDC. In addition, awardees we spoke with cited challenges with adjusting their plans when federal funding was more or less than anticipated.
For example, CDC officials said that they provided average award amount ranges as guidance for awardees as part of the application process for one of CDC's cooperative agreements. Los Angeles County officials said that they applied for an amount near the limit and had to adjust the activities they planned to fund when they received less than they applied for. Iowa officials said that without knowing exactly how much funding would be available, it was difficult to know what to apply for and difficult to plan staffing changes. Iowa officials had to adjust their initial plan when they later received additional, unexpected funding. In October 2017, CDC issued a new notice of funding opportunity that, according to agency officials, was intended to help minimize the administrative burden on these awardees (e.g., preparing applications and other paperwork) during significant public health emergencies by pre-approving public health departments in these jurisdictions to rapidly receive future awards. This new notice of funding opportunity will be used to establish a list of awardees, with existing emergency preparedness and response capacity, that would be pre-approved for funding by CDC when a public health threat occurs, including infectious disease threats. It requires that awardees have structures and plans in place to receive funding, as well as plans to respond to a public health threat. According to CDC officials, awards could potentially be provided to pre-approved awardees within 2 weeks after supplemental appropriations are enacted. According to CDC officials, as of February 2018, the agency had approved all 64 applicants for the notice of funding opportunity. This means that CDC will consider these approved applicants for future funding if an emergency occurs and funding becomes available. Agency and Third-Party Comments We provided a draft of this report to HHS for review and comment. HHS provided technical comments, which we incorporated as appropriate. We also provided relevant draft portions of this report to the Zika supplemental funding awardees we interviewed. Specifically, we provided the excerpts to officials in Alaska; Arizona; California; Colorado; Florida; Houston, Texas; Iowa; Kansas; Los Angeles County, California; Louisiana; Oklahoma; and Texas. All but one awardee responded. Awardees provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XII. Appendix I: Biomedical Advanced Research and Development Authority's Zika Supplemental Awards The Biomedical Advanced Research and Development Authority (BARDA), within the Department of Health and Human Services' Office of the Assistant Secretary for Preparedness and Response, executed contracts to obligate its Zika supplemental funding for research in the areas of (1) vaccine development, (2) diagnostic development, and (3) pathogen reduction systems.
Table 3 presents information for each award as it was provided to us by BARDA. Appendix II: Centers for Disease Control and Prevention—Epidemiology and Laboratory Capacity for Infectious Diseases Awardees This appendix presents information on Zika supplemental funding awards made by the Centers for Disease Control and Prevention (CDC) through the Epidemiology and Laboratory Capacity for Infectious Diseases (ELC) cooperative agreement. CDC awarded Zika supplemental funding through the ELC cooperative agreement for the following activities: Zika vector surveillance and control, Zika epidemiology and laboratory surveillance, and the U.S. Zika Pregnancy Registry. The Zika supplemental funding awarded through the ELC cooperative agreement was to further support and strengthen activities to protect the public's health, especially that of pregnant women, through epidemiologic surveillance and investigation, improved mosquito control and monitoring, and strengthened laboratory capacity. The funding will also support participation in the U.S. Zika Pregnancy Registry to monitor pregnant women with Zika virus disease and their infants. For each award, we present information as it was provided to us by CDC, as well as the activities funded. Table 4 provides information on ELC Zika supplemental funding awarded to states and territories, and table 5 presents information on awards to local health departments. In addition to states and territories, six large city and county local health departments—Chicago, the District of Columbia, Houston, Los Angeles County, New York City, and Philadelphia—received ELC Zika supplemental awards. Appendix III: Centers for Disease Control and Prevention—Birth Defects Awardees This appendix presents information on Zika supplemental funding awards made by the Centers for Disease Control and Prevention (CDC) through the Surveillance, Intervention, and Referral to Services Activities for Infants with Microcephaly or other Adverse Outcomes Linked with the Zika Virus (birth defects) cooperative agreement. The Zika supplemental funding awarded through the birth defects cooperative agreement was to provide additional resources to better establish, enhance, and maintain rapid population-based surveillance of microcephaly and other adverse outcomes (especially central nervous system defects) possibly linked to Zika virus infection during pregnancy, using an active case-finding methodology; to support participation in centralized pooled clinical and surveillance data projects; to ensure affected infants and families are referred to services; and to assess the health and developmental outcomes of these children. Table 6 presents information for each award as it was provided to us by CDC. Appendix IV: Centers for Disease Control and Prevention—Behavioral Risk Factor Surveillance System Awardees This appendix presents information on Zika supplemental funding awards made by the Centers for Disease Control and Prevention (CDC) through the Behavioral Risk Factor Surveillance System (BRFSS) cooperative agreement. The Zika supplemental funding awarded through the BRFSS cooperative agreement was to conduct a rapid population-based assessment of women and couples using or in need of contraceptives in order to provide comprehensive contraceptive services related to Zika virus exposure. Table 7 presents information for each award as it was provided to us by CDC.
Appendix V: Centers for Disease Control and Prevention—Pregnancy Risk Assessment Monitoring System Awardees This appendix presents information on Zika supplemental funding awards made by the Centers for Disease Control and Prevention (CDC) through the Pregnancy Risk Assessment Monitoring System (PRAMS) cooperative agreement. The Zika supplemental funding awarded through the PRAMS cooperative agreement was to assess maternal behaviors and experiences related to Zika virus exposure among recently pregnant women who deliver a live-born infant in the United States. Table 8 presents information for each award as it was provided to us by CDC. Appendix VI: Centers for Disease Control and Prevention—Public Health Preparedness and Response Awardees This appendix presents information on Zika supplemental funding awards made by the Centers for Disease Control and Prevention (CDC) through the Public Health Preparedness and Response (PHPR) Cooperative Agreement for All-Hazards Public Health Emergencies. According to CDC officials, the Zika supplemental funding awarded through the PHPR cooperative agreement was to enable identified state, territorial, and local jurisdictions to address Zika virus disease planning and operational response gaps. Table 9 presents information for each award as it was provided to us by CDC. Appendix VII: Centers for Disease Control and Prevention—Other Cooperative Agreements’ Awardees This appendix presents information on Zika supplemental funding awards made by the Centers for Disease Control and Prevention (CDC) through additional cooperative agreements. Tables 10-17 present information for each award as it was provided to us by CDC. Administrative Support for the Zika Supplemental for Sentinel Enhanced Dengue Surveillance System Project The Zika supplemental funding awarded through the Sentinel Enhanced Dengue Surveillance System Project cooperative agreement was to support sites working to provide new information on dengue and other acute febrile illnesses in Puerto Rico, which is located in the subtropics and where dengue epidemiology is similar to dengue endemic areas worldwide. The Zika supplemental funding was for two studies: (1) a study of pregnant women with Zika infection, and (2) a study of postnatal Zika infection that follows children aged 0 to 5 years. Vector-Borne Disease Regional Centers of Excellence The Zika supplemental funding awarded through the Vector-Borne Disease Regional Centers of Excellence cooperative agreement is to establish regional centers of excellence aimed at building the capacity to address the problem of emerging and exotic vector-borne diseases in the United States, including Zika virus infection. Enhancing Capacity for Vector Surveillance and Control to Prevent Zika, Dengue and Chikungunya Infection in Puerto Rico The Zika supplemental funding awarded through the Enhancing Capacity for Vector Surveillance and Control to Prevent Zika, Dengue and Chikungunya Infection in Puerto Rico cooperative agreement is to fund activities to increase the surveillance and control of vectors, specifically Aedes aegypti mosquitoes (the vector of dengue, chikungunya, and Zika). The purpose of the program is to establish a vector control unit to oversee and implement comprehensive vector control activities in Puerto Rico.
Immunization Grants-CDC Partnership: Strengthening Public Health Laboratories The Zika supplemental funding awarded through the Immunization Grants-CDC Partnership: Strengthening Public Health Laboratories cooperative agreement is to promote quality and safe public health laboratory practice, improve public health laboratory infrastructure, strengthen the public health laboratory system, and develop a well-trained public health laboratory workforce. Building Capacity of the Public Health System to Improve Population Health through National, Nonprofit Organizations According to CDC officials, the Zika supplemental funding awarded through the Building Capacity of the Public Health System to Improve Population Health through National, Nonprofit Organizations cooperative agreement is to ensure national capacity for responding to the Zika outbreak and meeting the needs of those affected, such as by reaching out to specialized constituents to ensure they were informed on epidemiology and practice guidelines. Strengthening the Public Health System in the U.S.-Affiliated Pacific Islands The Zika supplemental funding awarded through the Strengthening the Public Health System in the U.S.-Affiliated Pacific Islands cooperative agreement is to provide capacity building assistance through a regional, nonprofit organization to strengthen the U.S.-Affiliated Pacific Islands’ public health leadership, workforce, and public health systems and infrastructure in response to Zika virus within the U.S. Pacific territories. Pan American Health Organization: Building Capacity and Networks to Address Emerging Infectious Diseases in the Americas The Zika supplemental funding awarded through the Pan American Health Organization: Building Capacity and Networks to Address Emerging Infectious Diseases in the Americas cooperative agreement is for various activities including technical assistance, such as to develop standard operating procedures for diagnostic and integrated surveillance activities, as well as to support the development, implementation, and evaluation of diagnostic and surveillance guidelines. Global Health Security Partner Engagement: Expanding Efforts and Strategies to Protect and Improve Public Health Globally According to CDC officials, the Zika supplemental funding awarded through the Global Health Security Partner Engagement: Expanding Efforts and Strategies to Protect and Improve Public Health Globally cooperative agreement is for enhanced surveillance for pregnant women in Colombia, including laboratory testing and case investigations. Appendix VIII: Centers for Disease Control and Prevention—Contracts and Interagency Agreements This appendix presents information on Zika supplemental funding awards made by the Centers for Disease Control and Prevention (CDC) through additional contracts and interagency agreements. Tables 18 and 19 present information for each award as it was provided to us by CDC, as well as the activity funded. Appendix IX: Centers for Medicare & Medicaid Services—Zika Health Care Services Program Awards This appendix presents information on Zika supplemental funding awards made by the Centers for Medicare & Medicaid Services (CMS) through the Zika Health Care Services Program. The Zika Health Care Services Program is aimed at supporting prevention activities and treatment services for women (including pregnant women), children, and men adversely or potentially affected by the Zika virus.
According to CMS documentation, the Zika Health Care Services Program is intended to address four critical components of a comprehensive response to Zika: 1. Increase access to contraceptive services for women and men. 2. Increase access to and reduce barriers to diagnostic testing, screening, and counseling for pregnant women and newborns. 3. Increase access to appropriate specialized health care services for pregnant women, children born to mothers with maternal Zika virus infection, and their families. 4. Improve provider capacity and capability. CMS awarded funding through the Zika Health Care Services Program, in two award rounds, to states, territories, tribes, or tribal organizations with active or local transmission of the Zika virus, as confirmed by the Centers for Disease Control and Prevention. In January 2017, CMS awarded funding to American Samoa, Florida, Puerto Rico, and the U.S. Virgin Islands. In June 2017, CMS awarded funding to Texas, the only new area with local transmission of the Zika virus. Table 20 presents the awards CMS made through its Zika Health Care Services Program. Appendix X: Health Resources and Services Administration’s Zika Supplemental Awards This appendix presents information on Zika supplemental funding awards made by the Health Resources and Services Administration (HRSA) to health centers and for Special Projects of Regional and National Significance. Health centers: HRSA provided awards to health centers through supplemental grant awards to support existing health centers in Puerto Rico and other territories in their efforts to expand the delivery of health care services, including the prevention of Zika and prevention and treatment of Zika-related illness. HRSA also provided supplemental grant awards to existing Health Center Program cooperative agreement awardees for efforts to provide training and technical assistance for Zika-related health center expansion activities. Special Projects of Regional and National Significance: HRSA provided awards for Special Projects of Regional and National Significance to support public health departments and other entities in Puerto Rico and other territories in efforts to ensure access to recommended services for pregnant women, infants, and children infected by the Zika virus in the prenatal, perinatal, and neonatal period. Activities include early identification through developmental screening, regular assessments and monitoring, telemedicine, care coordination, enabling services, family engagement and family-to-family support; purchasing of diagnostic equipment and health information technology; and the training of health care providers, care coordinators, and other health care and public health professionals to ensure delivery of comprehensive, interdisciplinary health and social services for this population. Tables 21 and 22 present information for each award as it was provided to us by HRSA. Appendix XI: National Institutes of Health Zika Supplemental Awards The National Institutes of Health (NIH) awarded Zika supplemental funding to support research to better understand Zika and its complications, and inform the development of new interventions. The three primary activities of funding include (1) vaccine development; (2) Zika in Infants and Pregnancy study; and (3) diagnostics, therapeutics, vector control, and other interventions. NIH used contracts, grants, intramural research awards, and other awards to provide funding for research on the Zika virus and its complications.
Tables 23-26 present information for each award as it was provided to us by NIH. Appendix XII: GAO Contact and Staff Acknowledgments In addition to the contact named above, Karen Doran (Assistant Director), Sarah Resavy (Analyst-in-Charge), and Hannah Grow made key contributions to this report. Also contributing were Muriel Brown, Christine Davis, and Drew Long.
Why GAO Did This Study Zika—a virus primarily transmitted through mosquito bites—can cause symptoms that include fever, rash, and joint and muscle pain. In pregnant women, the Zika virus can be passed to the fetus and cause severe brain defects. In response to an outbreak in the United States and its territories, Congress appropriated $932 million in September 2016 through the Zika Response and Preparedness Act to HHS and its agencies to prevent, prepare for, and respond to the Zika virus and its related health conditions, and conduct related research. The act also included a provision that GAO study the activities supported with the appropriated funds. This report describes (1) the status of funds obligated and disbursed from the Zika supplemental funding appropriated to HHS and its agencies; and (2) how selected awardees used their Zika supplemental funding, and their experiences with applying for and managing the funding. To do this work, GAO reviewed agency documents on Zika supplemental funding and activities, and interviewed officials from the HHS agencies and selected awardees. To select awardees, GAO identified states based on the amount of initial Zika supplemental funding they received from CDC, the Centers for Medicare & Medicaid Services, and the Health Resources and Services Administration; and selected states with the highest and lowest funding. In total, GAO selected 12 awardees: 10 states, as well as one county and one city from 2 of the 10 states. GAO provided a draft of this report to HHS. In response, HHS provided technical comments, which were incorporated as appropriate. What GAO Found As of September 30, 2017, Department of Health and Human Services’ (HHS) agencies had obligated nearly all of the $932 million of Zika supplemental funding Congress appropriated in 2016 through the use of multiple funding mechanisms, including cooperative agreements, grants, and contracts. Four HHS agencies had small unobligated balances as of the September 30, 2017, obligation deadline; these balances cannot be used to incur new obligations, but may be used to adjust award amounts in future years. Disbursement of the obligated funds was ongoing, with about 21 percent of the Zika supplemental funding (approximately $195.5 million) disbursed as of December 31, 2017. The agencies have until September 30, 2022, to disburse the remainder. The 12 awardees GAO interviewed—officials from 10 states and two local entities—funded multiple activities with their Zika supplemental funding, and had varying experiences applying for and managing the funds. Awardees told GAO that they used their funding to support such activities as collection of information about individuals affected by the Zika virus (human surveillance), mosquito control activities, laboratory capacity building, public outreach, and health care services. For example, Florida used Zika supplemental funding in its state-run laboratories to purchase materials for testing Zika virus-related specimens. A majority of the awardees GAO spoke with reported positive experiences applying for and managing the Zika supplemental funding, including good communication with agency officials and awardees’ familiarity with the mechanisms used to make the awards. However, some awardees noted challenges, such as time frames to use the funding that varied among multiple awards and identifying the activities that could be funded. 
These challenges added administrative burdens to applying for and managing the Zika supplemental funding while officials were responding to the outbreak, according to the awardees. In October 2017, the Centers for Disease Control and Prevention (CDC) issued a new notice of funding opportunity that agency officials said is intended to help minimize the administrative burden on states and certain localities during emergencies—such as preparing applications—by pre-approving public health departments in these jurisdictions to be eligible to rapidly receive future awards.
Background Title 5 Special Payment Authorities Generally, federal agencies have seven broadly applicable special payment authorities available government-wide under Title 5 of the United States Code (hereinafter “Title 5”)—listed below in table 1—for recruitment and retention. Table 1 describes each authority’s legal reference, purpose, payment ranges, and whether an agency must seek OPM approval prior to use. Mission-Critical Skills Federal agencies face mission-critical skills gaps that pose a risk to agencies’ ability to cost-effectively serve the public and achieve results. Agencies can have skills gaps for different reasons. For example, skills gaps may arise in the form of: (1) staffing gaps, in which an agency has an insufficient number of individuals to complete its work; (2) competency gaps, in which an agency has individuals without the appropriate skills, abilities, or behaviors to successfully perform the work; or (3) both staffing and competency gaps. Mission-critical skills gaps may be broad—affecting several agencies—or may be specific to a given agency. We, and others including OPM and federal agencies, have identified and reported on mission-critical skills gap areas across the government and within specific agencies. In 2015, OPM and the Chief Human Capital Officers (CHCO) Council worked with agencies to refine their inventory of government-wide and agency-specific skills gaps. They identified 6 government-wide and 48 agency-specific mission-critical skills gap areas for closure. The six government-wide areas identified were Cybersecurity; Acquisition; Human Resources; Auditing; Economics; and Science, Technology, Engineering, and Mathematics (STEM). Some of the agency-specific skills gaps included border patrol agents at the Department of Homeland Security and nurses at the Veterans Health Administration and the Department of Health and Human Services. Skills gaps played a contributing role in 15 of the 34 high-risk areas identified in our most recent report on government operations with greater vulnerabilities to fraud, waste, abuse, and mismanagement, or that are in need of transformation to address economy, efficiency, or effectiveness challenges. Office of Personnel Management OPM is responsible for performing a number of functions to assist agencies in using the compensation flexibilities, including issuing regulations and, as necessary, providing approval authority, to ultimately help agencies build successful, high-performance organizations. OPM provides agencies with guidance and assistance on using special payment authorities via individual consults, memorandums, its website, training, and initiatives that focus on specific issues. OPM’s website has guidance on each special payment authority including references to regulations. For special payment authorities requiring OPM approval, OPM regulations provide agencies instruction concerning the information needed for OPM to review and decide whether to approve or deny requests. Although the information needs vary by authority, generally agencies are to submit information and evidence reflecting recruitment and retention challenges for the specific position(s), previous efforts to address the problem, and the basis for requested payment amounts. OPM is responsible for oversight of the federal government’s use of special payment authorities to ensure agencies are acting in accordance with applicable requirements.
For example, as a part of its delegated examination audits and human resource management evaluations, OPM reviews selected samples of agencies’ personnel actions to assess how well agencies complied with statutory and regulatory requirements. In cases where agencies used certain special payment authorities, OPM uses a checklist to guide its review of documents agencies must develop and maintain to justify their uses. OPM also is responsible for reporting to Congress on the federal government’s use of certain special payment authorities. Annually, OPM requests or receives data from agencies and reports to Congress on agency use of two authorities—critical position pay and student loan repayment. The reports include information on the use of these authorities each calendar year—such as data showing how many employees received payments—and the total dollar or relative amounts of special payments. OPM was required to annually report to Congress on agencies’ use of the recruitment, relocation, and retention (3R) incentives in calendar years 2005-2009. Chief Human Capital Officers Council The Chief Human Capital Officers Act of 2002 established the CHCO Council to advise and coordinate the activities of member agencies on such matters as the modernization of human resources systems, improved quality of human resources information, and legislation affecting human resources operations and organizations. The Director of OPM is the Chairperson of the CHCO Council, and the Deputy Director for Management in the Office of Management and Budget (OMB) is the Vice Chairperson. The council includes CHCOs of the executive departments and any other members designated by OPM. It serves to coordinate and collaborate on the development and implementation of federal human capital policies. For example, the CHCO Council manages the Human Resources University (HRU) website, which is a web-based platform to share knowledge, training, best practices, and resources across agencies. Agencies Reported Using Special Payment Authorities to Varying Degrees but for Few Employees in Fiscal Years 2014-2016 Agencies Reported Using a Range of Special Payment Authorities CHCO agencies used a range of special payment authorities to recruit and retain employees. Our analysis of CHCO agency data found that for six selected authorities, 20 or more agencies used each in fiscal years 2014-2016, as shown in figure 1. Seven agencies reported having used the critical position pay authority. Agency Data Show, Overall, Few Employees Received Compensation from Special Payment Authorities We found that CHCO agencies reported using the seven authorities for a small number of federal employees overall. For example, in fiscal year 2016, less than 6 percent of the over 2 million federal employees at CHCO agencies received compensation under at least one of the seven special payment authorities, as shown in figure 2. Moreover, many agencies reported using most of these authorities for a limited number of employees each year. For example, of the 24 agencies that reported using superior qualifications and special needs pay setting—the authority reportedly used by the highest number of CHCO agencies—over half (13 agencies) reported using the authority for fewer than 100 employees per year. In addition, of the 23 agencies that reported using recruitment incentives in fiscal years 2014-2016, 11 agencies reported using the authority for 10 or fewer employees per year.
As shown in table 2, agencies reported that more employees received compensation from the special rates authority, followed by use of retention incentives in fiscal years 2014-2016. Specifically, agencies reported using special rates for over 74,000 employees, of the over 2 million CHCO agency employees, in each of these fiscal years. On the other end of the spectrum, agencies reported using the critical position pay authority for fewer than 40 employees in each of these years. Special rates: Although CHCO agencies reported that more employees received special rates compensation than the other authorities in fiscal years 2014-2016, our analysis showed usage has generally declined since the 2001-2005 period, when over 139,000 employees received a special rate. An OPM official said that over time agencies have relied less on these special rates due to the introduction of locality pay. For example, in its 2005 annual review of special rates, OPM reported that 14 special rates schedules would be terminated because higher locality rates applied at all steps of each covered grade. Critical position pay: We found that the critical position pay authority was used for the lowest number of employees of these authorities each year in fiscal years 2014-2016. The authority’s lower use relative to the other authorities is to be expected to some extent because of the government-wide cap of 800 positions for this authority. Agencies Reported Spending about $800 Million on 3R Incentives and Student Loan Repayments Our analysis of CHCO agency questionnaire responses found that these agencies reported spending about $805 million total on 3R incentives and the student loan repayment authorities in fiscal years 2014-2016. In addition, we found that these agencies reported spending more on retention incentives than on the other three authorities in each of these years, as shown in figure 3. Specifically, over 40 percent (about $333 million) of this total reported spending was for retention incentives. In addition, agency-reported use of recruitment and relocation incentives increased in each year. Overall, recruitment and relocation incentives were about $174 million and $149 million, respectively, of the total approximately $805 million in reported spending in fiscal years 2014-2016. OPM officials stated that until recently agency spending on 3R incentives had been frozen, and many agencies had to limit their use of these incentives. Finally, agency-reported use of the student loan repayment authority increased in each of these years and accounted for approximately $148 million of the approximately $805 million in total reported spending. Agencies Used the Range of Authorities to Address Skills Gaps CHCO Agencies Reported Using Authorities to Help Address Different Skills Gaps, Particularly for STEM Occupations All 26 CHCO agencies reported using special payment authorities to support mission-critical skills gap areas in fiscal years 2014-2016. We found that the number of CHCO agencies that used each of these authorities varied by skills gap area, as shown in table 3. For example, we found that superior qualifications and special needs pay setting was the authority used by the largest number of CHCO agencies in two of the five skills gap areas—STEM and Cybersecurity. We also found that 19 or more agencies reported using at least one authority to support four skills gap areas—STEM, Cybersecurity, Acquisitions, and Human Resources. Some CHCO agencies reported that certain skills gap areas were not mission critical for them.
Specifically, 11 agencies reported that healthcare was not a skills gap area for them, compared with 2 or 3 agencies each for the other skills gap areas. STEM: Our analysis of the CHCO agency data found that, of the five skills gap areas, more agencies generally reported using the special payment authorities to support STEM occupations. Of the 21 agencies that reported using at least one authority to support the STEM area, we found that 18 agencies reported using the superior qualifications and special needs pay setting authority for these occupations in fiscal years 2014-2016. The Department of Agriculture (USDA), for example, reported that this authority had been a valuable tool in recruiting for critical STEM positions from a small and highly competitive Ph.D. applicant pool. The Department of the Treasury (Treasury) reported using special payment authorities generally to match private-sector salaries or to help mitigate disparities between private- and public-sector compensation for STEM occupations. Cybersecurity: Similarly, of the 21 agencies that reported using at least one authority to support the cybersecurity area, 16 reported using superior qualifications and special needs pay setting to support these positions in fiscal years 2014-2016, and 13 agencies reported using recruitment incentives. For example, the Small Business Administration reported that the superior qualifications and special needs pay setting authority has helped to attract top cybersecurity talent by narrowing the gap between public- and private-sector salaries. Acquisitions: Of the 20 agencies that reported using at least one authority to support the acquisitions area, 14 agencies reported using the student loan repayment authority, and 13 agencies reported using the superior qualifications and special needs pay setting authority for these positions. For example, the Department of Education reported using student loan repayments to help retain acquisitions employees, and in one instance, had retained an expert in multiple functional areas of government contracting. Other agency-identified skills gap areas: We also found that 20 of the 26 CHCO agencies reported using special payment authorities to varying degrees to help address other or agency-specific skills gap areas. For example, Treasury reported using recruitment incentives for auditors, while the Department of Homeland Security (DHS) reported using multiple authorities, including the 3R incentives for law enforcement positions in fiscal years 2014-2016. Agencies Frequently Used the Student Loan Repayment Authority to Support Mission-Critical Occupations Our analysis of OPM’s Federal Student Loan Repayment Program Calendar Year 2015 Report on government-wide use found that agencies frequently used the student loan repayment authority for employees in mission-critical occupations (MCOs). Specifically, we found that for the five agencies that most frequently used student loan repayments that year—the Departments of Defense (DOD), Veterans Affairs (VA), Justice (DOJ), and State (State), and the Securities and Exchange Commission (SEC)—over 50 percent of the employees at each agency who received these benefits were in agency-specific MCOs. For example, SEC reported to OPM that approximately 72 percent of its student loan repayments were made to employees in MCOs such as accountants, attorneys, and securities compliance examiners. We also found that other agencies used the authority for employees in MCOs.
For example, the Department of the Interior (Interior) reported to OPM that using the authority has been helpful in filling MCOs such as petroleum engineers, geophysicists, and biologists. Our analysis of OPM’s 2015 report also found that the 32 agencies that had used the authority that year did so for over 200 occupations. Overall, we found that agencies most frequently used student loan repayments for attorney-, engineer-, and contracting-related occupations, as shown in figure 4. Our review of OPM’s report to Congress on critical position pay in calendar year 2015 found that, as of calendar year 2015, all four positions that received the critical position pay authority were for director or other senior executive positions. For example, the positions of Administrator of the Transportation Security Administration and Director of the National Institutes of Health received compensation under this authority in calendar year 2015. Since OPM’s 2015 report, OPM officials told us that they had approved 68 additional positions for the critical position pay authority for certain Medical Center Director positions at VA. According to data provided by VA in response to our questionnaire, the agency reported using its recently approved authority in fiscal year 2016 for 27 of these positions. Agencies Generally Reported Positive Impacts but Few Documented Their Assessments of Special Payment Authorities Agencies Generally Reported Positive Impacts CHCO agencies generally reported that special payment authorities positively affected areas of operation. More specifically, these agencies reported the authorities somewhat or very positively affected at least one of seven areas we identified in our questionnaire such as staff retention, ability to meet staffing needs, or ability to fill mission-critical positions (see appendix III for the results for the other special payment authorities). For example, the 19 agencies that reported using the special rates authority said it had somewhat or very positively impacted their ability to meet their staffing needs, and 17 reported the same for staff retention and achieving their missions (see table 4). CHCO agencies generally reported that special payment authorities somewhat or very positively affected their ability to fill mission-critical positions. Agencies provided specific examples of the positive impact of special payment authorities and ways they responded to challenges using special payment authorities: Student loan repayment authority: The Department of Commerce reported that multiple components found this authority useful for competing with the private sector and for building a pipeline of top talent given that most of these employees were at the beginning of their careers. Relocation incentives: The Department of Energy reported using these incentives to relocate employees to meet emergency needs, including a shutdown of the Waste Isolation Pilot Plant project in Carlsbad, New Mexico. Recruitment incentives: DOD reported that it would not have been able to effectively recruit individuals in several career fields, including engineering and nursing, without these incentives. Moreover, the Social Security Administration credited these incentives for its success in hiring experts from major corporations for cybersecurity and other program policy area positions.
Retention incentives: The Environmental Protection Agency credited a retention incentive with successfully retaining a senior research scientist, thereby addressing a mission-critical skills gap and allowing the agency’s mission to continue uninterrupted and at significant savings. Moreover, State explained how using a retention incentive helped to address its Bureau of Medical Services’ severe staffing shortages due to uncompetitive base salaries. VA also stated that these incentives helped create a smooth transition of institutional knowledge to newer employees and facilitate continuity of operations. DHS responded to its need to attract and retain employees in information technology (IT) and cybersecurity by developing a unique retention incentive plan that focused on specialized certification for employees in these fields. The Department of Health and Human Services’ (HHS) Centers for Disease Control and Prevention (CDC) used retention incentives to retain employees who might have moved to the private sector. Superior qualifications and special needs pay: HHS’s CDC reported it has been successfully using this authority to attract IT specialists, an occupation series designated as “hard-to-fill.” HHS credited this authority with attracting highly qualified applicants who would otherwise have accepted higher starting salaries outside the federal government. USDA included use of this authority in its approach to addressing challenges recruiting and retaining employees in the remote Bakken oil boom region in North Dakota and Montana. Most Agencies Reported Assessing Special Payment Authorities, but Few Agencies Documented Their Effectiveness Assessments Twenty-five of 26 CHCO agencies reported assessing the effectiveness of at least one special payment authority used in fiscal years 2014-2016. However, our analysis found that in many cases agencies did not document their assessments. Moreover, agencies often did not assess the effectiveness of all authorities they used. For example, 4 agencies reported having no assessments for the majority of the special payment authorities they used, and 11 agencies reported not assessing at least one of the authorities they used. As seen in table 5, overall, CHCO agencies reported conducting informal effectiveness assessments more often than documenting assessments of their uses of special payment authorities. Our analysis of CHCO agency responses found the extent to which these agencies documented assessments of effectiveness varied by payment authority. For example, agencies reported most frequently documenting assessments for recruitment incentives and the student loan repayment authority. On the other hand, 3 of the 24 agencies using the superior qualifications and special needs pay setting authority reported documented assessments. For each of the authorities, a small number of agencies reported not assessing effectiveness at all. For instance, 5 of the 21 agencies using retention incentives did not assess their effectiveness. OPM said it did not document assessments of the effectiveness of the authorities the agency used for its own employees because meaningful analyses were not possible due to the few employees who received compensation under the authorities OPM used. CHCO agencies that reported documenting assessments identified the various impacts they assessed for the special payment authorities they used.
More specifically, of the 10 CHCO agencies that reported having documented assessments, agencies most frequently reported evaluating the impact of these authorities on meeting staffing needs and on their effectiveness relative to other human capital flexibilities. This included DOD, which reported documenting assessments of the impact of five authorities—special rates, the 3Rs, and student loan repayments—on its operations. We requested copies of documented assessments from the 10 CHCO agencies that reported having them, and 9 responded. Three of the nine responding agencies provided documents with information on authorities’ effectiveness, such as the impact on meeting staffing needs. Specifically, Interior provided documentation that showed the agency tracked workforce data, such as the number of vacancies and turnover rates related to using the 3Rs and special rates focused on oil and gas extraction. DOD and DHS included information on the student loan repayment authority in their annual reports to OPM, and credited student loan repayments with helping to retain highly qualified employees. Six of the nine responding agencies provided documentation that justified or reported on the use of special payment authorities rather than documentation that assessed the impacts that using authorities had on agency operations. For example, three of these six agencies provided examples of reviews or information addressing compliance with regulations relevant to the use of special payment authorities. The other three of these six agencies provided documents to justify and request approval to use 3R incentives, such as to show applicants’ qualifications or current employees’ performance appraisals. The Most Frequently Identified Challenge Was Insufficient Resources, but Most Agencies Reported Rarely or Never Experiencing Other Types of Challenges CHCO agencies reported that, among the six potential challenges we identified in our questionnaire, insufficient resources was the most common challenge they experienced in using special payment authorities. Most CHCO agencies reported they rarely or never experienced other challenges. With respect to insufficient resources, 13 of the 26 agencies said they regularly or always experienced this challenge (see figure 5). According to three of these agencies, budget constraints prevented them from using special payment authorities more frequently or limited their use to filling only the most critical vacancies. Four CHCO agencies said they regularly or always experienced challenges with burdensome documentation or complex approval processes when using special pay authorities. We also sought feedback on certain agencies’ experiences with OPM’s approval processes for special payment authorities. Below are details of challenges that agencies provided. Interior stated that the department and its components were required to provide a significant amount of historical data to justify the need for a special salary rate table, and that publicly available data on market analyses and trends should drive the special pay rate process, thereby making it easier for agencies to submit requests and to adapt to current conditions. HHS reported that documenting special payment authorities was overly complicated for some of its divisions with smaller human resources (HR) staffs. The Department of Transportation (DOT) commented on the timeliness of OPM and OMB approvals for using the critical position pay authority.
DOT said this delay of approximately 5 months could have been a factor that negatively affected recruitment for a position, as other agencies could offer the candidate higher salaries. Interior similarly cited concerns about the timeliness of the approval process, given OPM’s limited staff and expertise and the coordination required among all the federal agencies involved. Interior suggested that when an agency must request pay flexibilities that can be approved by only OPM, OPM should train the agency’s HR staff and managers on the processes and materials needed to justify their requests, and should provide a clear understanding of timelines for approvals. However, multiple CHCO agencies reported only rarely or never experiencing documentation or process challenges. For example, Interior credited OPM with collaborating to establish special rates to address challenges in competing with the oil and gas industry for the talent needed to meet Interior’s mission. DOD and DOJ also conveyed positive views on OPM’s approval process, crediting it with expediting a waiver request for a group retention incentive limitation and use of special rates, respectively. Agencies Reported That Manager Training Would Likely Improve Use of Special Payment Authorities CHCO agencies most frequently said training for agency managers is a change that would very likely or certainly improve their ability to effectively use special payment authorities (see figure 6). Conversely, about a quarter of responding agencies said legislative changes very likely or certainly would improve their ability to use special payment authorities. CHCO agencies provided examples of how potential changes would improve their ability to effectively use special pay authorities. VA responded that its central HR office was developing a pay authority toolkit to provide information on processes and procedures for using the authorities and related training for HR specialists and managers. According to VA, the toolkit, mandatory training, and regularly scheduled refresher training were likely to increase staff’s knowledge and ease with using pay flexibilities to develop competitive compensation packages to help recruit and retain quality talent and fill critical positions. HHS also expressed concerns about the use of special payment authorities in the context of ongoing budget constraints. Specifically, HHS noted that budget constraints over the last several years have led to retirements and resignations among its more experienced HR staff. This resulted in a loss of institutional knowledge on complex pay and leave authorities, including those affecting special payments. HHS officials said the loss of experienced HR staff diminishes the agency’s internal capacity to train remaining staff. In addition, budgetary controls result in fewer resources for external training. Also, Interior commented that, for special payment authorities that can be approved by only OPM, OPM should provide training for the HR staff and managers responsible for using them. OPM Has Provided Some Guidance and Collects Some Data, but Has Not Assessed Effectiveness or Documented Approval Processes OPM Provided Guidance and Other Assistance OPM has taken a number of steps to provide agencies with additional guidance and assistance on using special payment authorities.
For example, in April 2015, OPM and the CHCO Council held a web-based, virtual human resources conference for agency officials which included a session on special payment authorities for recruitment and retention. Moreover, in January 2016, OPM issued a memorandum to agency CHCOs that stated that OPM recognized the 3Rs are essential pay flexibilities for agencies facing serious staffing challenges. The memorandum provided guidance on exceptions to spending limits on 3R incentives and OPM website links to related guidance on using the authorities. In August 2017, OPM posted a web-based training course for agency officials on special payment authorities and other flexibilities, including examples of their use and resources for additional information. OPM has also pursued initiatives that focus attention on addressing mission-critical skills gap areas. As part of government-wide efforts to develop and strengthen the cybersecurity workforce, in November 2016, OPM issued a memorandum and guidance to CHCOs on strategic and cost-effective use of the various flexibilities agencies may employ to recruit and retain employees in cybersecurity positions. The guidance included checklists of steps agencies need to complete to use various special payment authorities, and described ways to combine use of special payment authorities, when appropriate, to make federal agencies more competitive in recruiting and retaining cybersecurity employees. OPM also formed Federal Agencies Skills Teams (FASTs) for occupations in an effort to help agencies address mission-critical skills gap areas. As part of FASTs, OPM collected and reviewed information from agencies on the root causes of skills gaps and found that, in some cases, compensation levels play a role in skills gaps. As of August 2016, OPM’s approach included a strategy to hold agencies accountable for closing skills gaps in their MCOs, and to monitor metrics and progress through fiscal year 2020. In January 2015, we reported that the measures agencies had in place limited OPM’s and the CHCO Council’s ability to track progress in closing skills gaps government-wide. Accordingly, we recommended that OPM strengthen the approach and methodology for addressing skills gaps by working with the CHCO Council to develop targets that are clear, measurable, and outcome oriented. OPM partially concurred with the recommendation. OPM Does Not Track Special Payment Authorities to Assess Whether Using Them Improves Recruitment and Retention It is important to identify the necessary data and establish measures to track a program’s effectiveness, as well as establish a baseline to measure changes over time and assess the program in the future. We have reported that agencies can use these measurements to help them determine if a program is worth the investment, and to distinguish which of the available human capital flexibilities is better suited to address recruitment and retention needs. Standards for Internal Control in the Federal Government also state that management should obtain relevant data from reliable sources that can be used to effectively monitor programs. We have reported that understanding the relative effectiveness of various flexibilities can help identify any changes needed for agencies to more effectively use them. As we also recently reported, collecting and using data to assess the effectiveness of authorities would be a critical first step in making more strategic use of flexibilities to effectively meet hiring needs.
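As a minimal illustration of what such tracking could involve, the sketch below compares hypothetical staffing measures for a hard-to-fill occupation against a pre-incentive baseline; the figures and field names are illustrative assumptions, not data from OPM or any agency.

```python
# Illustrative sketch: tracking staffing measures for a hard-to-fill
# occupation against a baseline established before incentives were used.
# All figures are hypothetical.

baseline = {"average_days_to_fill": 142, "annual_turnover_rate": 0.18}
follow_up = {"average_days_to_fill": 97, "annual_turnover_rate": 0.11}

for measure, before in baseline.items():
    after = follow_up[measure]
    pct_change = (after - before) / before * 100
    print(f"{measure}: {before} -> {after} ({pct_change:+.1f}%)")
```

A baseline of this kind is what would allow an agency, or OPM, to attribute changes over time, at least tentatively, to the authority rather than to unrelated staffing trends.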
OPM collects agency data on the use of special payment authorities via annual reporting on certain authorities and its Enterprise Human Resources Integration (EHRI) database, but has not analyzed whether the payment authorities help agencies to improve recruitment and retention government-wide. Nor has OPM assessed trends and factors that can affect the use of these authorities. As required by statute, OPM annually collects data from agencies to report to Congress on the use of the student loan repayment and critical position pay authorities. Student loan repayment: OPM collects some information from agencies about their use of the student loan repayment authority and invites agencies to provide details about their experiences in administering the authority, but has not conducted an analysis of the authority’s effectiveness for addressing recruitment and retention needs. As discussed previously, we analyzed the government-wide use of this authority by occupation and identified those occupations for which agencies most frequently used the authority (see figure 4). In addition, we used OPM’s 2015 report and information from its FASTs skills gap initiative to identify the use of this authority for MCOs. These are two examples of analyses that OPM could perform to help understand how agencies are using this authority. Critical position pay: OPM collects data on agency use of critical position pay from the agencies with existing OPM approval, but the information does not help OPM understand how the payment authority supports recruitment or retention. OPM collects data that it is required by statute to report to Congress, such as who received the higher rate and the rate paid, but the data do not include information on the impact on recruitment and retention. OPM has stopped regularly collecting and analyzing data for the 3R incentives, except on the use of retention incentives for employees likely to leave for other federal agencies, and does not collect and analyze data for special rates, leaving a void for conducting government-wide analysis that would help determine whether special payment authorities help address agency recruitment and retention needs: 3R incentives: In a February 2010 memorandum to agency CHCOs on 3R incentives, OPM called for it and agencies to more actively manage the program and track data. OPM said that validated data would help OPM and agencies to understand the nature and trends of use of the incentives and better track incentives on an ongoing basis. OPM and agencies would also, if necessary, be better able to investigate any 3R data anomalies and take corrective actions. Based on its request to agencies in October 2011, OPM prepared a draft report on its analysis of agency-provided data and information on use of 3R incentives in calendar years 2010 and 2011, including what agencies reported as barriers to using the authorities and whether the 3R incentives improved recruitment and retention. However, OPM did not distribute the report or take action on it. Although OPM said it planned to conduct periodic reviews on an ongoing basis, it has not regularly collected and reviewed government-wide information on the level of use and potential barriers since the 3R reporting requirement expired and its October 2011 agency data request. Special rates: In conducting its annual review of special rates, each year OPM asks that agencies review their respective applicable special pay rate tables to determine whether the rates should be terminated, reduced, or increased.
OPM considers requests to make changes based on the agency reviews, but according to OPM officials, in recent years, agencies have not identified any needed changes to special rates during that annual review process. Moreover, OPM has not used its EHRI data to better understand trends in the use of these authorities government-wide and how agencies are using them to address their recruitment and retention challenges. As an example, we analyzed EHRI data to describe government-wide use of selected authorities by occupational family in fiscal year 2014. From that analysis, we identified differences in use across various occupational families that could be helpful in understanding how agencies are using these authorities. For example, we found that the Medical, Hospital, Dental, and Public Health family was the top occupational family for four of the five authorities. See appendix IV for additional information. OPM also has not explored trends in agency use of the critical position pay authority. OPM has not pursued reasons why agencies have not requested approval for over 750 available slots or why agencies have used only 4 of the 36 authorized positions as of calendar year 2015. As part of an initiative to close skills gaps for the STEM workforce, in October 2014, the White House Office of Science and Technology Policy, OPM, and OMB identified the critical position pay authority as a potentially underused flexibility. However, according to the 2015 OPM report, OPM had authorized critical position pay for 36 positions in 10 agencies as of calendar year 2015. And only four of those agencies reported using the critical position pay authority in 2015 for four current employees (see table 6). In July 2017, the Treasury Inspector General for Tax Administration (TIGTA) recommended that the Internal Revenue Service (IRS) pilot the use of the critical position pay authority to recruit highly qualified experts to lead IRS’s cybersecurity and related specialized functions. According to TIGTA, IRS would enhance its recruitment efforts by using the authority. OPM officials attribute low use of critical position pay to: (1) agencies’ views that the approval process is cumbersome; (2) management resistance or cultural issues based on views about pay inequity between employees, or employees receiving higher salaries than their managers; and (3) agencies using other compensation flexibilities that do not require prior OPM approval. OPM is not tracking government-wide data on the use of the range of special payment authorities to better understand whether or how various authorities improve recruitment and retention. OPM officials said the information they collect on special payment authorities depends on reporting requirements for the specific payment authority. For example, they said they collect information on assessing the effectiveness of student loan repayments because of the reporting requirements in the law. However, the reporting requirement does not include assessments to examine effectiveness or impediments to help OPM determine whether potential changes may be needed to address recruitment and retention challenges. Instead, OPM’s annual data request memorandums invite agencies to provide additional details on their experiences in administering the student loan repayments. OPM officials said they may sometimes perform ad hoc analyses of EHRI data on certain authorities but do not regularly analyze EHRI data on the use of the various authorities government-wide.
For example, OPM officials said they have queried the EHRI database on the use of selected special payment authorities for cybersecurity employees and found that the number of 3R incentives and the number of agencies using them increased from fiscal year 2015 to 2016, while use of student loan repayments decreased during that period. However, OPM does not regularly conduct such analyses on this or other uses of special payment authorities to understand how they are used to address skills gaps. By not tracking and analyzing data on the use of special payment authorities, OPM and the CHCO Council do not have the information they need to help determine what potential changes may be needed, and have limited assurance that special payment authorities are helping agencies meet their needs and achieve recruitment and retention goals. OPM May Be Missing Opportunities to Promote Strategic Use of Special Payment Authorities Standards for Internal Control in the Federal Government require that agency management design and implement control activities, which are the actions management puts in place through policies and procedures to achieve objectives and respond to risks. We have reported on OPM’s important leadership role and the CHCO Council’s support in assisting agencies with identifying and applying human capital flexibilities across the federal government. In its most recent strategic plan, OPM reported it would lead federal human capital management by partnering with its stakeholders—including federal agencies—to develop and implement effective and relevant human resources solutions to build an engaged, productive, and high-performing workforce and develop effective compensation packages, among other things. OPM also has acknowledged its leadership role in strategically promoting the effective use of at least one special payment authority—student loan repayment—and assisting agencies in the strategic use of this and other recruitment and retention tools as necessary to attract and retain a well-qualified federal workforce and support agency mission and program needs. We have also previously reported on the lack of awareness among federal managers about using flexibilities to address human capital challenges. In 2014, we reported that in a forum of CHCO Council agencies we convened, CHCOs said they wanted OPM to do more to raise awareness and assess the utility of tools and guidance it provides to agencies to address key human capital challenges. Accordingly, we recommended that OPM evaluate the communication strategy for and effectiveness of relevant tools, guidance, or leading practices created by OPM or the agencies to help ensure agencies are getting the guidance and tools that they need. OPM concurred with the recommendation. Guidance on Assessing Effectiveness In its handbook on human capital flexibilities, OPM does not provide guidance on assessing the effectiveness of any of the special payment authorities we reviewed. For example, OPM does not offer examples of assessments to illustrate what data are needed and what methodologies are available for determining whether special payment authorities improve recruitment and retention. OPM has provided supplemental information on assessing effectiveness in some student loan repayment authority annual reports and provided links to those reports on its website, but has not done so for other payment authorities.
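To illustrate the kind of example such guidance could include, the sketch below computes one-year retention rates for employees who did and did not receive a retention incentive; the personnel records and field names are hypothetical, invented for illustration only.

```python
# Hypothetical personnel records: whether each employee received a
# retention incentive and whether the employee was still on board one
# year later. A real assessment would draw on an agency's own records.
records = [
    {"received_incentive": True,  "retained_one_year": True},
    {"received_incentive": True,  "retained_one_year": True},
    {"received_incentive": True,  "retained_one_year": False},
    {"received_incentive": False, "retained_one_year": True},
    {"received_incentive": False, "retained_one_year": False},
    {"received_incentive": False, "retained_one_year": False},
]

def retention_rate(group):
    """Share of a group still employed one year later."""
    return sum(r["retained_one_year"] for r in group) / len(group)

recipients = [r for r in records if r["received_incentive"]]
others = [r for r in records if not r["received_incentive"]]

print(f"Recipients retained one year:     {retention_rate(recipients):.0%}")
print(f"Non-recipients retained one year: {retention_rate(others):.0%}")
```

A real assessment would need far more records and would have to account for differences between the two groups, but even a simple comparison of this kind is the sort of documented evidence of effectiveness that agencies rarely provided.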
OPM officials said that they believe agencies are in the best position to collect and analyze data to determine which special payment authorities are effective for addressing recruitment and retention needs at their agency. However, we found that CHCO agencies often did not document assessments for the special payment authorities they used. The documents agencies prepared were most often focused on justifying or reporting on use of authorities rather than on evaluating their effectiveness in improving recruitment and retention. As mentioned previously, documents addressed compliance with regulations and justifications that agencies prepared to request approval for using the authorities. Our review found examples of data and methodologies that agencies could use to help assess whether an authority helped improve recruitment and retention. For example, DOD used data from interviews with employees hired into its entry-level developmental trainee programs to gather feedback on student loan repayment. The feedback consistently indicated that the program was a major contributing factor in employees’ decisions to accept these positions. Also, Interior collected and monitored data on retention rates to assess the effectiveness of special rates in retaining its oil and gas workforce. State said that it conducted some informal assessments of its use of retention incentives but could add a question to its employee exit survey to collect data on how these incentives affect attrition. Some CHCO agency questionnaire responses included examples of other types of data or analyses that could be used to assess special payment authorities, such as (1) the reduced level of resignations in a department that had experienced staffing shortages; (2) an increased rate of filling certain hard-to-fill positions; and (3) counts of the numbers of employees successfully recruited into mission-critical skills gap areas. Tools and Guidance to Support Strategic Decisions OPM does not provide consistent information via tools and guidance to support effective use of special payment authorities. OPM’s website guidance for the student loan repayment authority provides agencies with tools including best practices, sample agency plans, and answers to frequently asked questions. However, its website guidance for the superior qualifications and special needs pay setting authority, for example, has only fact sheets, which generally restate and reference the related regulations. Table 7 summarizes the various types of tools and guidance information on the agency’s website about special payment authorities. In addition to the tools and guidance noted above, the student loan repayment website includes links to OPM annual reports, which include details that could support agency use of this authority. In OPM’s annual requests for data on student loan repayments, OPM regularly invites agencies to submit information for these reports, including on these topics: establishing a business case, program impediments, and ways to improve the student loan repayment program. OPM’s guidance on using special pay authorities to address cybersecurity skills gaps illustrates how OPM provides useful information which could be applied in other mission-critical skills areas.
In assisting agencies on ways to combine authorities to hire cybersecurity specialists, the guidance includes hypothetical scenarios where desirable job candidates have competing job offers or are currently employed, and provides example competitive compensation packages for entry-, mid-, and senior/expert-level employees. Such information could be useful for other government-wide or agency-identified mission-critical skills gaps or other positions where agencies face serious recruitment or retention challenges. OPM officials said they recognized tension between any effort to promote use of special payment authorities and OPM's role of providing oversight of special payment authorities. OPM officials said the agency promotes the use of the authorities when agencies seek OPM's help, rather than undertaking efforts to more broadly ensure agencies are fully aware of the potential benefits and innovative ways to use authorities. Further, OPM has not worked with the CHCO Council to gather and disseminate illustrative examples of the data needed and methodologies available to assess the effectiveness of the authorities. With guidance on assessing effectiveness and consistent tools and guidance across the range of authorities, OPM and CHCO agencies could more fully support strategic use of special payment authorities to improve recruitment and retention across the federal government.

OPM Approval Processes Are Not Fully Documented

Standards for Internal Control in the Federal Government require management to design and implement control activities through policies to achieve objectives and respond to risks. Documentation and periodic review of policies and procedures are important parts of the standards and are necessary to design, implement, and operate controls effectively. Documentation provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. Documentation is also evidence that controls can be monitored and evaluated. As we previously reported, streamlining administrative processes is a key practice for effectively using human capital flexibilities. Because agency officials must view administrative processes as worth their time relative to the expected benefit to be gained, perceived burdens and slow approval processes could dissuade them from seeking approval to use special payment authorities that could address recruitment and retention needs. OPM regulations implementing the statutory provisions set forth the basic criteria for OPM approval of certain special payment authorities, but OPM does not have documented procedures to guide OPM staff in assessing agency requests for approval. For example, OPM does not have documented criteria to assess the sufficiency of the information to support the request, such as the soundness of the methodology or reliability of underlying data for calculating payment amounts, or the sufficiency of prior agency efforts to recruit and retain employees without having to resort to additional pay. OPM officials noted that the complexity and nature of recruitment and retention difficulties can vary significantly between agencies and the authority requested. To make decisions about an agency's request for approval, OPM officials said they apply the criteria in law, regulations, and guidance posted on OPM's website. Our analysis shows that, since January 2009, OPM generally took 4 to 6 months to make approval decisions on CHCO agency special rates and critical position pay requests.
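As an illustration of the elapsed-time analysis described above, the sketch below shows one way an average months-to-decision figure could be computed from request and decision dates. This is a minimal sketch under stated assumptions, not GAO's actual analysis; the agency names, dates, and column names are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical approval-request records; the agencies, dates, and column
# names are placeholders, not actual OPM data.
requests = pd.DataFrame({
    "agency":   ["Interior", "State", "Treasury"],
    "received": pd.to_datetime(["2014-03-01", "2015-06-15", "2016-01-10"]),
    "decided":  pd.to_datetime(["2014-08-12", "2015-10-30", "2016-06-01"]),
})

# Approximate elapsed months as days divided by 30.44, the average
# number of days in a calendar month.
requests["months"] = (requests["decided"] - requests["received"]).dt.days / 30.44
print(f"Average months to decision: {requests['months'].mean():.1f}")
```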
OPM officials said they have conversations with agency officials about their views on the process, but do not have procedures to systematically monitor or evaluate the process, such as to seek agency feedback on whether the approval processes are burdensome, complex, and a barrier to wider use. As noted earlier, we sought feedback on certain agencies' experiences with OPM's approval processes for special payment authorities. Although some had positive comments, others expressed concerns about the timeliness of the process, including that the length of the process may lead to missed opportunities to hire desirable candidates. The July 2017 Treasury Inspector General for Tax Administration (TIGTA) report also said that lengthy approval processes for using critical position pay are a reason for low overall use of the authority. If pursued by IRS, the approval process would include getting the request cleared internally, approved by the Secretary of the Treasury and then, in turn, by OPM and OMB. OPM has not established a time frame within which agencies could expect a decision from OPM and OMB. OPM officials estimated that it may take several weeks or up to several months to complete the approval process, according to TIGTA. As part of its recommendation to IRS, TIGTA recommended tracking in detail the time and effort to get the request for approval cleared internally and approved by the Secretary of the Treasury, OPM, and OMB. OPM officials said they do not have documented procedures with criteria for approving use of special payment authorities because the complexity and nature of the requests vary significantly between agencies and the authority requested. OPM officials noted that in reviewing applications they need to be able to take into account relevant and important variables necessary to make fact-specific and reasonable determinations to help an agency find the most appropriate solution to its staffing problems. They said there is no "one-size-fits-all" formula for approving or denying requests. However, without documented procedures for assessing requests for approval, OPM lacks a means to review and assure that approval processes achieve their objectives. Without such documents, OPM also increases the likelihood of inconsistent decisions to grant or decline approval for the use of special pay authorities. Moreover, it also increases the risk of losing the organizational knowledge of the personnel with expertise in assessing requests. Additionally, by not periodically examining the procedures, OPM is not well positioned to consider alternatives for streamlining the approval process.

Conclusions

To deal with staffing challenges resulting from skills gaps, reduced budgets, and the upcoming wave of retirements, agencies have compensation tools at their disposal that can be coupled with other flexibilities to produce an attractive package for potential and current employees. CHCO agencies generally reported that special payment authorities positively contributed to areas such as employee retention, applicant quality, and ability to meet staffing needs, among others. OPM has acknowledged its leadership role in strategically promoting the effective use of at least one special payment authority—student loan repayment—and assisting agencies in strategic use of this and other human capital tools.
However, OPM has not tracked or analyzed the government-wide data on agencies' use of various special payment authorities to better understand whether or how various authorities improve recruitment and retention. By tracking and analyzing these data, OPM could have the information it needs to determine what potential changes may be needed, and have better assurance that special payment authorities are helping agencies meet their needs and achieve recruitment and retention goals. Moreover, OPM has not been consistent in providing guidance on assessing the effectiveness of the range of special payment authorities in attracting and retaining a well-qualified federal workforce to support agency mission and program needs. Few agencies are documenting assessments. OPM has not worked with the CHCO Council agencies to provide illustrative examples of the data needed and methodologies to assess the effectiveness of the authorities. By providing guidance on assessing the effectiveness of these authorities, OPM and CHCO agencies could be better positioned to know whether use of the authorities is improving recruitment and retention or what changes might be needed to improve their effectiveness. Agency officials may also perceive documentation and approval processes as time-consuming or burdensome barriers to using compensation tools. Perceived delays or inefficiency in OPM's approval processes could discourage agencies from seeking to use Title 5 special payment authorities that could address recruitment and retention challenges. OPM also has not documented procedures for assessing the sufficiency of the information agencies submit to request approval. By establishing documented procedures and periodically reviewing them, OPM would increase the likelihood of consistent decisions to grant or decline agency requests for approval to use these authorities.

Recommendations for Executive Action

We are making the following three recommendations to OPM.

The Director of OPM, together with the CHCO Council, should track government-wide data to establish a baseline and analyze the extent to which the seven Title 5 special payment authorities are effective in improving employee recruitment and retention, and determine what potential changes may be needed to improve the seven authorities' effectiveness. (Recommendation 1)

The Director of OPM, together with the CHCO Council, should provide guidance on assessing effectiveness and tools—such as best practices or frequently asked questions—for the range of Title 5 special payment authorities. (Recommendation 2)

The Director of OPM should establish documented procedures to assess special payment authority requests requiring OPM approval and periodically review approval procedures to consider ways to streamline them. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to OPM for review and comment. We also provided relevant draft report excerpts to CHCO agency officials for comment in cases where we more extensively reported an agency's illustrative examples, or where an agency's views were more significant in the context of the report. OPM provided written comments, which are reproduced in appendix V and summarized below. Of our three recommendations, OPM concurred with one and partially concurred with the other two. OPM also outlined its planned steps to implement the recommendations. OPM and CHCO Council agency officials also provided technical comments, which we incorporated as appropriate.
In response to our first recommendation, OPM partially concurred and outlined its plans to track data that cover a limited period to analyze agencies' use of certain Title 5 special payment authorities. OPM said it planned to analyze both student loan repayment authority data by occupation for one calendar year (2016) and the most recently available data for five of the other six special payment authorities covered in this report. This includes use for government-wide mission-critical occupations. While these actions may provide some degree of insight into the extent to which and how agencies use some of the special pay authorities, examining only recent and available data will not support establishing a baseline to measure changes over time, tracking effectiveness, or determining any changes needed in future years. We made revisions to the recommendation to clarify the value of tracking data over time for the seven special payment authorities. OPM stated that tracking the government-wide workforce data available to it will not provide a complete assessment of the effectiveness of the special payment authorities because agencies are in the best position to analyze such information. We agree that agencies have first-hand information on use of special payments. Agencies also have data that can inform discussions between OPM and the CHCO Council on potential strategies for a government-wide approach to enhance strategic use of these authorities to address mission-critical skills gaps. By working with the agencies through the CHCO Council, OPM is better positioned to track government-wide data to analyze the extent to which Title 5 special payment authorities improve employee recruitment and retention and determine what potential changes may be needed to improve the authorities' effectiveness.

In response to our second recommendation, OPM concurred and outlined plans such as issuing guidance with examples of assessments to illustrate what data are needed and what methodologies are available for determining whether special payment authorities help improve recruitment and retention. We believe OPM could also assist agencies by providing tools or other guidance for the authorities that OPM does not approve—such as on establishing a business case, best practices, answers to frequently asked questions, or lessons learned—to help ensure consistent information is shared with agencies to support effective use of the range of Title 5 special payment authorities. OPM could also provide agencies with tools and guidance for other mission-critical skills areas similar to those shared for addressing cybersecurity skills gaps. Such tools and guidance could include hypothetical recruitment scenarios, checklists of required steps, and examples of competitive compensation packages. OPM stated it would work on any guidance that the CHCO Council identifies to improve use of special payment authorities. With consistent tools and guidance across the range of authorities, OPM and CHCO agencies can be positioned to fully support strategic use of special payment authorities to improve recruitment and retention across the federal government.

In response to our third recommendation, OPM partially concurred and commented that there is no "one-size-fits-all" formula for approving or denying agency requests. It added that applying a rigid formula could result in unwarranted disapprovals.
OPM also stated that it would document additional procedures to guide staff in evaluating agency requests and periodically review the procedures. We believe establishing documented procedures would guide staff in considering complex factors such as the soundness of the methodology and the reliability of underlying data for calculating payment amounts. Documentation of policies and procedures is an important part of internal control standards. By documenting procedures to review requests, OPM will help ensure consistency in approval decisions and retain the organizational knowledge of personnel with expertise in assessing requests.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 14 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Acting Director of OPM, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2717 or jonesy@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

This report (1) describes what is known about how much Chief Human Capital Officer (CHCO) Council agencies used selected special payment authorities in fiscal years 2014-2016; (2) assesses the extent to which CHCO agencies evaluate the effectiveness of these authorities and identifies challenges, if any, the agencies reported facing in using the authorities to address mission-critical skills gap areas; and (3) evaluates how the Office of Personnel Management (OPM) has helped agencies address federal recruitment and retention needs. We limited our scope to the seven special payment authorities broadly available government-wide under Title 5 of the United States Code to address federal agencies' recruitment and retention issues: (1) special rates, (2) recruitment incentives, (3) relocation incentives, (4) retention incentives, (5) superior qualifications and special needs pay setting, (6) student loan repayments, and (7) critical position pay. To describe what is known about how much CHCO Council agencies used selected special payment authorities in fiscal years 2014-2016, we developed and administered a questionnaire to the 27 CHCO agencies to collect their fiscal years 2014-2016 data—to the extent available—on frequency of use, dollars spent, and whether they used the authorities to help address recruitment and retention needs in mission-critical skills gap areas. All 26 CHCO agencies that reported use of the authorities responded to our questionnaire. We asked agencies not to report information related to agency-specific or non-Title 5 authorities. In our report, we use the aggregate CHCO agency reported data by authority. Federal employees may receive compensation under more than one authority in a given fiscal year, and in these instances would be counted for each authority received. For example, an employee who received a recruitment incentive and student loan repayments in the same fiscal year would be counted once for each authority in that year. We did not verify the amounts agencies reported spending.
In addition, we used CHCO reported data to determine the total use of these authorities and OPM Enterprise Human Resources Integration (EHRI) personnel data, which contains personnel action and workforce data for most federal civilian employees, to identify the approximate percentage of employees who received at least one of these seven authorities in fiscal year 2016. In calculating the percentage, we used CHCO agency reported data for the numerator and OPM EHRI data for the total number of federal employees at the 26 CHCO agencies—as of September 30, 2016—for the denominator. We also analyzed OPM EHRI personnel data for fiscal year 2014 to describe the government-wide use of certain authorities by occupational family. To do so, we calculated the number of unique employees who received a certain authority in each fiscal year. We included federal employees on permanent and nonpermanent appointments, and all work schedules (seasonal, nonseasonal, intermittent, and full-time and part-time). Individual employees who switched occupational families during a fiscal year could be counted more than once if they received a special payment authority under both occupational families. We primarily relied on the following EHRI data variables to describe agencies' use of certain authorities:

Special rates: We used the "pay rate determinant" to identify employees who were receiving a special rate as of the end of the applicable fiscal year, and then used the "special pay table identifier" to limit our analysis to special rates authorized under 5 U.S.C. § 5305. OPM officials provided a list of all authorized Title 5 special rate tables active during the fiscal years included in our review.

3R incentives: We used the "legal authority" and "nature of action" codes to identify employees for whom 3R incentives were authorized during the fiscal year.

Superior qualifications and special needs pay setting: We used the "pay rate determinant" and "nature of action" codes for new appointments to identify the number of employees who had received a superior qualifications and special needs pay setting authority during the fiscal year.

We reviewed OPM documentation, including OPM's Guide to Data Standards—the guidance document that describes data elements in EHRI—to identify the specific codes used to designate employees who had received these authorities. We also analyzed OPM calendar year 2015 reports—the most recently available at the time of our review—on the student loan repayment authority and the critical position pay authority to describe agencies' use of these two authorities by occupation. For the student loan repayment authority, we calculated the top occupation series that received this authority government-wide. We aggregated the 18 engineering-related occupations into one engineering series. In addition, for the agencies that most frequently used the authority, we calculated the approximate percentage of occupations that received the authority that were identified as mission critical by these agencies as part of OPM's and the CHCO Council's initiative to close skills gaps. To assess the reliability of the CHCO agency reported data and OPM data, we compared frequencies from the various data sources by agency for fiscal year 2014 (the one year of available overlapping data); reviewed OPM documentation; and interviewed OPM officials. We determined that the data were sufficiently reliable to present agency use of special payment authorities over this time period.
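For readers who want to see the mechanics, the sketch below illustrates, under stated assumptions, the two calculations described above: counting unique recipients per authority from personnel action codes, and dividing CHCO-reported recipient counts by an EHRI headcount. It is not GAO's actual analysis; the column names, nature-of-action code values, and totals are hypothetical placeholders rather than real EHRI content.

```python
import pandas as pd

# Hypothetical EHRI-style personnel action records. Real EHRI files use
# codes from OPM's Guide to Data Standards; everything below is a
# placeholder, not actual OPM data.
actions = pd.DataFrame({
    "employee_id":      [1, 1, 2, 3, 3, 4],
    "fiscal_year":      [2016] * 6,
    "nature_of_action": ["815", "827", "815", "816", "815", "702"],
})

# Placeholder set standing in for the nature-of-action codes that
# designate the 3R incentives.
incentive_codes = {"815", "816", "827"}

# Count each unique employee once per authority, mirroring the counting
# rule described above (one person can appear under several authorities
# in the same fiscal year).
recipients = actions[actions["nature_of_action"].isin(incentive_codes)]
per_authority = recipients.groupby("nature_of_action")["employee_id"].nunique()
print(per_authority)

# Approximate fiscal year 2016 percentage: CHCO-reported recipients as
# the numerator and EHRI on-board headcount as the denominator. Because
# the numerator can count a person once per authority received, the
# result overstates the share of unique employees.
chco_reported_recipients = 120_000  # placeholder total across authorities
ehri_headcount = 2_050_000          # placeholder on-board count, 9/30/2016
print(f"{100 * chco_reported_recipients / ehri_headcount:.1f} percent")
```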
To assess the extent to which CHCO agencies evaluate the effectiveness of special payment authorities and to identify challenges they reported facing in using the authorities, in our questionnaire to agencies we asked about their views on the impacts of each authority used in fiscal years 2014-2016. We analyzed and summarized closed-ended question response data agencies reported on authorities' impacts on agency operations, including the extent of positive or negative effects in areas such as employee retention, applicant quality, and ability to achieve the agency mission. We also asked whether and how they assessed each authority's effectiveness. We summarized closed-ended question response data on whether agencies had done documented, informal, or no effectiveness assessments of authorities in impact areas such as agency mission, meeting staffing needs, or addressing mission-critical skills gap areas. We contacted the 10 agencies that reported having documented assessments for one or more authorities to request copies of them. Nine agencies provided the requested documents. We analyzed the documents to determine the type of information they provided, including whether they had information on how use of the authority had been effective in the impact areas the questionnaire asked about. To learn more about agencies' views on authorities' effectiveness, we also asked an open-ended question for agencies to provide examples of how authorities helped address mission-critical skills gaps. We reviewed the narratives agencies provided to identify and report examples appropriate to illustrate the various effects agencies reported. We also asked agencies about their views on any challenges they experienced in using special payment authorities and potential changes to operations or procedures to help improve effective use of the authorities. We analyzed and summarized the closed-ended question response data agencies reported on how often they experienced certain challenges, including insufficient resources, management resistance, burdensome documentation, and complex approval processes. For the two most common challenges agencies reported other than insufficient resources—burdensome documentation and the complex approval process—we followed up with the three agencies that reported regularly or always experiencing both challenges. Two agencies responded. We also asked an open-ended question for agencies to provide narrative examples of how they identified and responded to challenges in using these authorities. We analyzed the content of the narrative responses to identify and report examples appropriate to illustrate the various challenges and responses to challenges agencies reported. To learn more about agencies' experiences with OPM's approval processes for special payment authorities, we analyzed OPM's data on agency requests to use the special payment authorities that OPM approves. In addition to our CHCO agency questionnaire response follow-up, we contacted selected agencies that OPM data identified as having requested approval to use a special payment authority since 2009. We asked seven agencies to provide narratives of their views on such topics as what worked well, challenges experienced, and any suggestions for improving the process. To provide an opportunity to learn how approval processes could affect agency decisions to not seek such approvals, we included two agencies—EPA and HHS—that had not made such requests to ask for narrative explanations of why they had not sought such approvals.
All agencies provided their views. We analyzed agencies' narrative responses to illustrate examples of the experiences agencies reported. We also analyzed and summarized the closed-ended question response data agencies reported on how likely potential changes would be to improve use of special payment authorities. We followed up with five agencies that identified the three most common potential changes that would very likely or certainly improve their ability to effectively use special pay authorities—changes to training for agency managers, training for agency human resources employees, and OPM regulations. We asked them to provide narrative descriptions of the changes they had in mind and how the changes would improve their agency's effective use of special payment authorities. Four agencies responded. We reviewed the narratives agencies provided to identify examples appropriate to illustrate the various views on potential changes agencies reported. To evaluate how OPM has helped agencies address federal recruitment and retention needs, we interviewed OPM officials and reviewed OPM's procedures to collect and analyze data on agency use of special payment authorities, including through automated systems (EHRI) and information requests and reporting. We reviewed the procedures to assess whether OPM tracks data to assess the level and effective use of the payment authorities to improve recruitment and retention. We also reviewed and summarized the various ways OPM provides agencies information on special payment authorities, including through OPM's memorandums, opm.gov website tools and guidance on each special payment authority, and guidance for using special payment authorities to address cybersecurity skills gaps. We compared the types and consistency of information OPM provides to promote the strategic use of special payment authorities, including information to support agency effectiveness assessments and to increase awareness and strategic decision making on the use of special payment authorities. We reviewed procedures to collect information on the use of the critical position pay authority and a July 2017 Treasury Inspector General for Tax Administration report on the topic. We also interviewed OPM officials and reviewed available documents on OPM's processes to review and approve agencies' requests to use certain special payment authorities, and analyzed OPM data to determine the average number of months it took OPM to make approval decisions on CHCO agency requests received from January 2009 through January 2017. We compared OPM's procedures for collecting, analyzing, and providing information on the effective, strategic use of special payment authorities, and its procedures for approving use of special payment authorities, to criteria identified in our related reports on federal human capital management and in Standards for Internal Control in the Federal Government, including standards that agency management design and implement controls and document procedures. We conducted this performance audit from September 2016 to December 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Special Payment Authorities Questionnaire Sent to Chief Human Capital Officer Agencies

Section I: Definitions

1. Mission critical skills gaps are one or more of the following and may impede the federal government from cost-effectively serving the public and achieving results: a staffing gap, in which an agency has an insufficient number of individuals to complete its work; and/or a competency gap, in which an agency has individuals without the appropriate skills, abilities, or behaviors to successfully perform the work. Mission critical skills gaps may be identified broadly as affecting several agencies or may be specific to a given agency (such as mission-critical occupations agencies have identified to the Office of Personnel Management (OPM) for skills gap closure).

Section 2: Use of Special Pay Authorities at Your Agency

1. For the special pay authorities below, does your agency have agency-specific guidance (including documented policies or plans) on their use? (Check all that apply) [response options by agency level(s)]

2. In the last three fiscal years (2014-2016), how many federal employees in your agency received compensation under the following special pay authorities? (If none, enter zero.) [responses for FY 2014, FY 2015, and FY 2016]

3. In fiscal years 2014-2016, what was your agency's total spending (in dollars) for the following special pay authorities?

4. In the last three fiscal years (2014-2016), how often has your agency experienced the following challenges related to using special pay authorities?

5. In your opinion, how likely would changes in the following areas improve your agency's ability to effectively utilize special pay authorities at your agency?

[Sections 3 through 9 repeated the following questions for each special pay authority; the bracketed text stands in for the authority named in each section.] Instructions: If your agency utilized [the authority] during FY2014-2016, please complete this section; otherwise continue on to the next section.

1. In fiscal years 2014-2016, did your agency use [the authority] to support the following mission critical skills gap areas? [response options included areas such as Acquisitions (e.g., Contract Specialist) and Healthcare Professionals (non-Title 38), as well as "not a skills gap area at my agency (not mission critical)"]

2. Does your agency assess the following to determine the effectiveness of using [the authority]?

3. In your opinion, how has the use of [the authority] impacted the following?

Section 10: Agency-specific Examples of Special Pay Authorities Use

1. In what ways have special pay authorities helped your agency to successfully address mission critical skills gaps? (Please provide at least one specific example)

2. In what ways has your agency identified and responded to challenges related to the use of special pay authorities? (Please provide at least one specific example)

Appendix III: CHCO Agencies Reported Use of Special Payment Authorities Affecting Selected Areas of Operation

The following six tables present data on the responses reported by CHCO agencies on the impacts on selected areas of operation from using the following special payment authorities—superior qualifications, critical position pay, recruitment incentives, retention incentives, relocation incentives, and student loan repayment.

Appendix IV: Office of Personnel Management Data on Use of Special Payment Authorities by Occupational Family

Our analysis of OPM data found that, overall, agencies used five special payment authorities—special rates; superior qualifications and special needs pay setting; and the recruitment, relocation, and retention (3R) incentives—to varying extents for different occupational families.
When we analyzed OPM data to identify the top five occupational families for each of these five special payment authorities, we found certain occupational families appeared among the top groups for multiple authorities (see those highlighted in table 14). Specifically, we found that two occupational families—(1) Medical, Hospital, Dental, and Public Health; and (2) Engineering and Architecture—were among the top five families for four and five of these special payment authorities, respectively. The Medical family was the top occupational family for four of the five authorities—superior qualifications and special needs pay setting, and the 3R incentives. Further, we found that certain occupational families were among the top five for one or two authorities but not for the others. For example, the Information Technology and the Copyright, Trademark, and Patent occupational families were among the top five families for special rates and superior qualifications and special needs pay setting, but not for the other three authorities.

Appendix V: Comments from the Office of Personnel Management

Appendix VI: GAO Contact and Staff Acknowledgments

Contact

Yvonne D. Jones, Director, (202) 512-2717 or jonesy@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Signora May, Assistant Director; Ronald W. Jones, Analyst-in-Charge; Melinda Cordero, Ann Czapiewski, Sara Daleski, Christopher Falcone, Karin Fangman, Kerstin Hudon, John Hussey, Steven Putansu, Alan Rozzi, and Albert Sim contributed to this report.
Why GAO Did This Study

Federal agencies can provide additional compensation by using seven broadly available special payment authorities to recruit and retain employees to address needed skills. Though special payments can help fill mission-critical skills gaps, agencies also face constrained budgets, which underscores the importance of cost-effective use of the authorities. OPM and the CHCO Council play important roles in assuring effective federal human capital management. GAO was asked to examine agency use, challenges, and improvements needed, if any. This report (1) describes CHCO agencies' use of special payment authorities in fiscal years 2014-2016; (2) assesses to what extent CHCO agencies examined effectiveness; and (3) evaluates how OPM has helped agencies address recruitment and retention needs. GAO obtained information from CHCO agencies on use of authorities through a questionnaire. GAO also analyzed OPM personnel data and agency documents, and interviewed agency officials.

What GAO Found

Generally, federal agencies have seven broadly available government-wide special payment authorities to help address recruitment and retention challenges. Chief Human Capital Officer (CHCO) Council agencies reported using these authorities to varying degrees but overall for few employees in fiscal years 2014-2016. For example, in fiscal year 2016, less than 6 percent of the over 2 million CHCO agencies' employees received compensation from at least one of the authorities (see figure). The two most frequently used—special rates and retention incentives—were used for over 74,000 employees and over 13,000 employees, respectively, each year. The least used—critical position pay—was used for as few as seven employees a year. CHCO agencies also reported using the range of authorities to help address skills gaps, particularly for science, technology, engineering, and mathematics occupations.

[Figure: CHCO Agency Employees Receiving Special Payments, Fiscal Year 2016]

CHCO agencies reported that these authorities had positive impacts—such as on staff retention and applicant quality—but had few documented effectiveness assessments. Nine of 10 agencies that reported having documented assessments provided them, but GAO found that only 3 had information on effectiveness, such as its impact on meeting staffing needs. The Office of Personnel Management (OPM) collects agency data on use but has not tracked data to analyze how much the authorities help agencies improve recruitment and retention government-wide. OPM may be missing opportunities to promote strategic use by providing guidance and tools on assessing effectiveness. For example, OPM has not explored reasons for trends in use of critical position pay or consistently shared best practices and innovative ways to use authorities. Without tracking data and providing guidance to help agencies assess effectiveness, OPM will be unable to determine whether use of special payment authorities helps agencies to improve recruitment and retention.

What GAO Recommends

GAO is making three recommendations, including that OPM should work with the CHCO Council on tracking data and providing guidance and tools to assess effectiveness of authorities, among others. OPM concurred or partially concurred with all recommendations, and described planned steps to implement them.
Background

U.S. Army's Joint Trauma System Defense Center of Excellence

Since the mid-2000s, DOD and the military health system have worked to decrease trauma-related morbidity and mortality by improving trauma care in DOD's military treatment facilities and by conducting research on providing trauma care. As part of these efforts, the Army established the JTS DCOE, which serves to provide advice on trauma care across the military. The JTS DCOE performs several functions to improve trauma care, including:

overseeing the DOD Trauma Registry (DODTR)—a database that captures trauma data from the time servicemembers are injured on the battlefield to when they are treated by providers in the United States. The JTS DCOE uses DODTR data to conduct performance improvement activities and to identify gaps in medical capabilities to direct ongoing and future combat casualty care research, trauma skills training, and combat casualty care. The JTS DCOE also provides data from the registry to collaborating military and civilian personnel conducting medical research.

managing the development, monitoring, and review of Clinical Practice Guidelines (CPGs). These guidelines, developed by subject matter experts using data from DOD's trauma registry, are created to inform medical professionals of best practices based on medical evidence, with a goal of minimizing inappropriate variation in medical practice and improving care for trauma injuries, specifically when military servicemembers are deployed. The development of CPGs is an ongoing process that takes place during times of war and peace, according to DOD officials.

developing and providing training curriculum for first responders to trauma-related injuries. The JTS DCOE seeks to identify lessons learned from trauma care that can be used as part of this training, to help improve the medical readiness of trauma care providers.

NDAA Requirement for a New DOD Joint Trauma System

To create a formalized, consistent trauma system across DOD, the NDAA required that a new JTS be operated under the direction of DHA. DHA officials expect to begin initial operation of the new JTS in July 2018. Additionally, DOD plans to realign the existing JTS DCOE and its current functions under DHA. Section 707(a)(2) of the NDAA required DOD to submit an implementation plan to Congress for the new JTS in June 2017, 180 days after the NDAA was enacted. The NDAA also includes a provision for us to review DOD's plan within 180 days after DOD submitted it to Congress, and for DOD to implement the new JTS 90 days after we submit our review. The NDAA required that the new JTS and DOD's implementation plan include the following four elements:

1. serve as the reference body for all trauma care provided across the military health system,

2. establish standards of care for trauma services provided at military treatment facilities,

3. coordinate the translation of research from DOD's centers of excellence into standards of clinical trauma care, and

4. coordinate the incorporation of lessons learned from trauma education and training partnerships pursuant to section 708 of the NDAA into clinical practice.

DOD's Joint Trauma System Implementation Plan Includes the Four Elements Required by the NDAA, but Does Not Yet Fully Incorporate Leading Practices for Planning

The implementation plan submitted by DOD to Congress on August 7, 2017, includes a description of the four elements required by the NDAA. It also provides an overview of the implementation activities, including realigning the U.S.
Army’s current Joint Trauma System Defense Center of Excellence to become part of the new system within DHA. Although the implementation plan includes the four required elements, neither it nor DOD’s supplemental planning documents prepared to date fully incorporate leading practices, which we have previously identified. These leading practices, such as the establishment of goals and the identification of strategies to achieve those goals, play an important role in enabling an organization to achieve its objectives. We found that DOD’s planning documents, prepared to date, incorporate only some of the leading practices. (See table 2). DOD officials acknowledged that the agency’s plans are presently incomplete because this process is ongoing. They stated that DOD is continuing to plan for implementing all four elements of the JTS— including efforts to incorporate leading practices. DOD’s planning documents that have been prepared to date and our assessment of each of the four elements are described below. Element One—Serve as a Reference Body for Trauma Care DOD’s planning documents incorporate goals associated with this element, but only include partial information about the strategies, associated risks, and plans to assess progress. Without including more complete information about plans to serve as a reference body for trauma care, it is unclear how well prepared DOD is to implement this element. Goals: According to a planning document, DOD has two goals for JTS to serve as a reference body: 1) consolidating disparate trauma registries into the DODTR. According to DOD officials, there are currently about 70 disparate registries, some of which collect trauma-related information for various entities across DOD. 2) developing a common trauma lexicon—a dictionary of common trauma care terminology to assist in the assessment of trauma-related injury data. Strategies: In addition to defining goals, the documents also include some strategies to achieve those goals, such as specific actions that DOD plans to take and target dates for accomplishing these actions. For example, the documents outline plans to take action to define key terms such as “preventable death,” “non- survivable injury,” “potentially survivable injury,” and others by a target date of July 2018. The documents also identify DHA as the lead office within DOD that is responsible for executing and achieving this action. The planning documents do not yet fully reflect the strategies needed to accomplish these goals. For example, although the documents discuss actions and milestones associated with goals for this element, they do not yet provide complete information on the resources and costs needed for implementation. The documents state that DHA will conduct an organizational analysis to determine what organizational structure, staffing needs, and other resources are needed for implementation at a later date. They also state that funding levels for DHA’s operation of the DODTR will be based on the existing JTS DCOE funding levels. However, another planning document indicates that the infrastructure for the DODTR’s existing host network—operated by the United States Army Institute of Surgical Research—would be insufficient to support the planned JTS and DODTR expansion, and that integrating even a single additional registry or component of a registry into the DODTR would require an adjustment to the funding for the system. 
Given that the planned activities for the new JTS would require an expansion beyond the scope of the current JTS DCOE responsibilities and activities, additional planning for equipment and network support costs may be necessary to ensure that the new JTS has sufficient resources to meet its goals.

Risks: The planning documents identify risks that could affect the JTS's ability to serve as a trauma reference body, but the documents do not yet specify how DOD plans to assess or respond to these risks. For example, although one of the planning documents identifies potential shortfalls in the DODTR host network's ability to support an increased number of users—which are expected as the various disparate registries are consolidated—none of the documents yet address the estimated impact of this risk on DOD's goals or how DOD plans to respond to the risk. Not planning for assessing and responding to risks could increase the likelihood that they become problematic and negatively affect DOD's goal for the JTS.

Plans to Assess Progress: The planning documents do not yet fully indicate how DOD plans to assess progress made toward the goals for consolidating registries or developing a lexicon of common trauma terms, as would be consistent with leading practices. The documents include a description of a baseline for performance related to DOD's goal to develop a lexicon of common trauma terms, but they do not yet include plans to monitor the progress made toward this goal or to assess the results of monitoring. Additionally, the documents do not yet establish a performance baseline, a system to monitor progress, or a plan to assess the results of monitoring for DOD's other goal for this element—to consolidate registries into the DODTR. Without a fully developed system for assessing the implementation's progress—practices which are consistent with federal internal control standards for risk assessment—DOD may be unable to determine progress toward the goals it has identified for this element.

Element Two—Establish Standards of Trauma Care for Military Services

DOD's planning documents incorporate goals and plans to assess progress, but do not yet fully incorporate leading practices related to strategies and risks.

Goals: According to the documents, DOD's goal for this element is twofold: (1) to develop, publish, and assess standards of care in DOD's CPGs, and (2) to determine if the CPG development process can be improved. DOD publishes CPGs to provide trauma care providers with recommended practices for the provision of care, based on available evidence. According to DOD documents, the CPGs minimize variations from evidence-based best practices, which help to save lives.

Strategies: DOD's planning documents describe how the new JTS will continue to produce, update, and monitor adherence to CPGs and designate the JTS as the office that is primarily responsible for leading these efforts. Although DOD's planning documents include information needed for the JTS to establish standards of care through CPGs, they do not yet fully reflect the strategies necessary to achieve DOD's goal. DOD officials indicated that the new JTS will develop, publish, and assess CPGs using the same process used by the existing JTS DCOE. DOD officials told us that CPGs are currently reviewed on an annual basis and updated once every two years, on average. According to DOD officials, this frequency exceeds standards established by leading civilian organizations.
Once updated, officials disseminate CPGs by posting them on a website, sharing them with DOD officials responsible for training trauma care providers, and discussing them at weekly conference calls on combat casualty care. Officials also told us that the existing JTS lacks authority to require that trauma care providers adhere to recommendations made in CPGs. In addition, DOD's planning documents acknowledge that the existing process lacks sufficient mechanisms to ensure timely updates and effective dissemination, but do not yet indicate what plans are needed to make improvements in these areas. Without additional planning to improve mechanisms for CPG development and dissemination, DOD faces uncertainty regarding the new JTS's ability to ensure that the CPGs it produces are up to date and effectively disseminated to military trauma care providers, which may ultimately affect the trauma care it provides.

Risks: The planning documents identify risks associated with the development and dissemination of trauma care CPGs, such as an inconsistent process for dissemination. However, they do not yet include information on determining the potential effects of these risks, nor do they include how DOD expects to respond, both of which are leading practices for risk assessment consistent with federal internal control standards. Without additional planning, DOD may not be fully prepared to address risks related to updating and disseminating CPGs.

Plans to Assess Progress: The planning documents include detailed information about how DOD uses performance measures for each CPG to assess progress in provider adherence to trauma care standards. The documents also establish a baseline for provider performance, a system for ongoing performance monitoring, and a process for evaluating the results of monitoring—performance measurement activities that can help the department track progress toward the goal it has established for this element.

Element Three—Coordinate the Translation of Research into Trauma Care Standards

One of the planning documents provides a general overview of how DOD plans to coordinate the translation of research from its centers of excellence—including the JTS DCOE and other trauma care centers of excellence—into trauma care standards, but the planning documents have yet to incorporate any of the four leading practices: goals, strategies, risks, or plans to assess progress. According to DOD officials, the current JTS DCOE routinely translates research into trauma care standards by creating and updating these standards to incorporate the findings and results of relevant research. DOD officials also told us the current JTS DCOE routinely interacts with the various DOD organizations responsible for trauma-related research, such as by holding weekly discussions on trauma care issues. Officials stated that they do not expect these interactions to change as the JTS DCOE transitions to the new JTS. However, the planning documents do not yet provide any detail about how these interactions will inform clinical standards. Without detailed information in the planning documents on how DOD expects to coordinate the translation of research into trauma care standards, it is unclear whether the JTS will be fully prepared to ensure that clinical standards are up to date and based on the most relevant evidence from research. This is critical to ensuring the effectiveness of the trauma care provided.
Element Four—Incorporate Lessons Learned from Trauma Education and Training Partnerships

The planning documents for this element do not yet incorporate any of the four leading practices: goals, strategies to achieve goals, risks, or plans to assess progress. Officials indicated that planning for the implementation of this element will be incomplete until DOD establishes the new Joint Trauma Education and Training Directorate responsible for establishing these partnerships. Section 708 of the NDAA states that DOD may enter into partnerships with civilian trauma centers to provide trauma care providers with maximum and continuous exposure to a high volume of critically injured patients. According to DOD officials, planning for incorporating lessons learned will begin after the directorate reaches initial operating capacity, which they anticipate in 2018. DOD officials also told us that the JTS will collaborate with the directorate for trauma education and training partnerships, once it is established, to plan the translation of relevant lessons learned into clinical practice. Because planning for this element is still incomplete, it is unclear whether DOD will be prepared to use information from these clinical partnerships to improve the effectiveness of the trauma care it provides to injured servicemembers.

Conclusions

In an effort to reduce preventable deaths and disabilities due to trauma, and as required by the NDAA, DOD is planning for the implementation of its new JTS. Specifically, the department has submitted its implementation plan to Congress as required and has developed other supplemental planning documents that describe how it plans to address the four required elements of the new system. Incorporating these elements is a critical step for DOD as it works to improve trauma care consistently across the military health system. Although the NDAA requires that DOD begin implementation in 2018, DOD's planning is ongoing, and its planning documents do not fully incorporate leading practices that can help ensure the success of its efforts. As it moves forward, DOD has the opportunity to update its efforts and planning documents to fully incorporate these leading practices. By not doing so, DOD may be missing an opportunity to ensure that its efforts to implement a new JTS are effective and to help reduce trauma-related deaths and injuries across the military.

Recommendation

To fully implement the four required elements of the new Joint Trauma System, the Director of the Defense Health Agency should fully incorporate leading practices—including establishing goals, planning strategies to achieve goals, identifying and addressing risks, and assessing progress—in its planning to guide implementation efforts. (Recommendation 1)

Agency Comments

We provided a draft of this report to DOD for comment. DOD provided written comments, which are reprinted in appendix I, and technical comments, which we incorporated as appropriate. In its written comments, DOD concurred with our recommendation to fully incorporate leading practices in its planning to guide JTS implementation efforts. DOD's written comments also referred to technical concerns regarding the timeliness of its updates to clinical practice guidelines. Specifically, the comments indicate that DOD updates these guidelines more frequently than standards established by leading civilian organizations.
Our report includes a description of DOD's processes for developing and updating these guidelines, including the frequency of the updates, and we added a statement regarding DOD officials' comparison of this frequency to civilian standards. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Defense

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, Will Simerl (Assistant Director), Carolyn Garvey (Analyst-in-Charge), Sarah Sheehan, Jennie Apter, and Jacquelyn Hamilton made key contributions to this report.
Why GAO Did This Study

Traumatic injury is a major cause of death and disability in the military, but improved trauma care has the potential to improve these outcomes. DOD has worked to improve trauma care over time, such as by establishing a Joint Trauma System Defense Center of Excellence to examine trauma care and share best practices. To improve trauma care across DOD, the NDAA for Fiscal Year 2017 directed DOD to establish a new JTS within DOD's Defense Health Agency. The NDAA requires that the new JTS include four specified elements, and also required DOD to submit to Congress an implementation plan that included the four elements. The NDAA also included a provision for GAO to review DOD's planning for the new JTS. GAO assessed whether the implementation plan includes the four required elements and the extent to which DOD's planning efforts to date reflect leading practices from prior GAO work, such as identifying goals and strategies to achieve those goals. To conduct its work, GAO assessed DOD's implementation plan and other supplemental planning documents identified by DOD, and interviewed DOD officials.

What GAO Found

The Joint Trauma System (JTS) implementation plan submitted to Congress by the Department of Defense (DOD) in August 2017 includes a description of the four elements required by the National Defense Authorization Act (NDAA) and an overview of implementation activities. For example, it indicates how the Army's current JTS Defense Center of Excellence will become part of DOD's new JTS. However, the plan and other supplemental planning documents prepared to date do not fully incorporate leading practices for planning as identified by prior GAO work. GAO has previously found that implementation plans incorporating these leading practices—goals, strategies to achieve goals, risks that can affect goals, and plans to assess progress toward goals—help ensure organizations achieve their objectives. For each of the four required elements, GAO found that these leading practices either were partially incorporated or had not been incorporated:

Element 1—Serve as the reference body for all trauma care provided across the military health system. DOD documents include specific goals, such as consolidating data from multiple trauma registries. They also include some strategies to achieve the goals, such as identifying lead offices and time frames to complete specific actions. However, the documents provide limited details on actions DOD plans to take, and do not indicate how DOD plans to address risks or assess its progress.

Element 2—Establish standards of care for trauma care services. DOD documents include a goal to develop, publish, and assess clinical practice guidelines that serve as standards of trauma care. These documents also describe how the new JTS will continue to produce, update, and monitor adherence to the guidelines. However, they do not fully indicate plans to address risks, such as ensuring effective dissemination.

Element 3—Coordinate the translation of research from DOD centers of excellence into standards of clinical trauma care. DOD planning documents do not incorporate any leading practices for this element. DOD officials told GAO that clinical standards incorporate relevant research and that officials responsible for trauma care standards routinely interact with officials responsible for research. Officials expect this practice to continue under the new JTS.
Element 4—Coordinate the incorporation of lessons learned from trauma education and training partnerships into clinical practice. DOD planning documents do not incorporate any leading practices for this element. According to officials, DOD must first establish a separate directorate responsible for partnerships with civilian trauma centers before determining how to incorporate lessons from partnerships into the new JTS.

According to DOD, the JTS implementation plan is a general overview of implementation activities, and planning efforts are ongoing. By not fully incorporating leading practices in its planning documents, DOD may be missing opportunities to ensure that the JTS is effectively implemented, to provide more effective trauma care across the military, and to help reduce trauma-related deaths and disabilities.

What GAO Recommends

GAO recommends that DOD incorporate leading practices in its planning to guide implementation efforts. DOD agreed with the recommendation.
Background

FAR Part 15 describes the use of several competitive source selection processes to meet agency needs, which include the LPTA process and tradeoff process on a best value continuum (see fig. 1). The FAR states that when using the LPTA process, tradeoffs are not permitted. DOD may elect to use the LPTA process where the requirement is clearly defined and the risk of unsuccessful contract performance is minimal. In such cases, DOD can determine that cost or price should play a dominant role in the source selection. When using the LPTA process, DOD specifies its minimum requirements in the solicitation. Firms submit their proposals and DOD determines which of the proposals meet those requirements. No tradeoffs between cost or price and non-cost factors (for example, technical capabilities or past performance) are permitted. Non-cost factors are rated on an acceptable or unacceptable basis. The award is made based on the lowest priced, technically acceptable proposal submitted to the government. With either the LPTA or the tradeoff process, contracting officials may establish a competitive range and conduct discussions with offerors before selecting an offer for award.

By contrast, DOD may elect to use the tradeoff process in acquisitions where the requirement is less definitive, more development work is required, or the acquisition has a greater performance risk. In these instances, non-cost factors may play a dominant role in the source selection process. Tradeoffs between price and non-cost factors allow DOD to accept other than the lowest priced proposal. The FAR requires DOD to state in the solicitation whether all evaluation factors other than cost or price, when combined, are significantly more important than, approximately equal to, or significantly less important than cost or price. Contracting officials have broad discretion in the selection of the evaluation criteria that will be used in an acquisition. A written acquisition plan generally should include a description of the acquisition's source selection process and the relationship of the evaluation factors to the acquisition objectives, but the FAR does not explicitly require contracting officials to document the reasons why the specific source selection procedures or evaluation factors were chosen.

DOD's March 2016 Source Selection Procedures offer additional guidance regarding the use of the LPTA source selection process. The procedures are mandatory for acquisitions conducted as part of a major system acquisition program and all competitively negotiated FAR part 15 acquisitions with an estimated value over $10 million. The March 2016 guide states that the LPTA source selection process may be used in situations where no value would be placed on a product or service exceeding the required technical or performance requirements. The guide also states that such situations may include acquisitions for well-defined, commercial, or non-complex products or services; where risk of unsuccessful contract performance is minimal; and where DOD has determined there would be no need or value to pay more for higher performance.

Section 813, as amended, requires that DOD revise the DFARS to require that the LPTA process only be used in situations when the following eight criteria are met.

1. DOD can clearly describe the minimum requirements in terms of performance objectives, measures, and standards that will be used to determine acceptability of offers.
2. DOD would realize no, or little, value from a proposal exceeding the solicitation's minimum technical requirements.

3. The proposed technical approaches can be evaluated with little or no subjectivity as to the desirability of one versus the other.

4. There is a high degree of certainty that a review of technical proposals other than that of the lowest-price offeror would not identify factors that could provide other benefits to the government.

5. The contracting officer has included a justification for the use of the LPTA process in the contract file.

6. The lowest price reflects full life-cycle costs, including for operations and support.

7. DOD would realize little or no additional innovation or future technological advantage by using a different methodology.

8. For the acquisition of goods, the goods being purchased are predominantly expendable in nature, nontechnical, or have a short life expectancy or shelf life.

Section 813 required DOD to revise the DFARS within 120 days of enactment of the National Defense Authorization Act for Fiscal Year 2017. The NDAA was enacted December 23, 2016, but, as of November 2018, the DFARS had not been revised. A Defense Pricing and Contracting (DPC) official stated the revisions are in process but were delayed for a number of reasons, including the need for the revisions to reflect two additional criteria that were added to Section 813 (shown as criteria (7) and (8) in the list above) through subsequent provisions in Section 822 of the National Defense Authorization Act for Fiscal Year 2018, and compliance with Executive Order 13771, which calls for the reduction and control of regulatory costs. The DPC official stated that until the DFARS is updated, DOD contracting officials are not required to consider the Section 813 criteria.

Use of the LPTA Process for Task and Delivery Orders

The FAR describes a wide selection of contract types that may be used in acquisitions. One of those types is an IDIQ contract, which provides for an indefinite quantity, within stated limits, of supplies or services during a fixed period of time. The FAR implements a statutory preference for multiple-award IDIQ contracts, which are awarded to two or more contractors under a single solicitation. These contracts allow agencies to establish a group of prequalified contractors to compete for future orders under a streamlined ordering process once agencies determine their specific needs. These contracts can be awarded using a source selection process that is on the best value continuum, such as LPTA or tradeoff. When a concrete need arises, a contracting officer will issue a task order for services or delivery order for products. DOD frequently issues orders under IDIQ contracts to address its needs. DOD obligated approximately $133 billion—40 percent of its total fiscal year 2017 contract obligations—through such orders. With certain exceptions, the FAR requires that when a contracting officer places an order under a multiple-award IDIQ contract, the contracting officer must provide all of the IDIQ contract holders a "fair opportunity" to be considered for the order. Generally, a contracting officer placing an order exceeding the simplified acquisition threshold must provide a "fair notice" that includes the basis upon which the selection will be made to all contractors offering the required products or services under the multiple-award contract.
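Because the LPTA process reduces source selection to a pass/fail technical screen followed by a price comparison, its mechanics can be expressed compactly. The following minimal Python sketch (illustrative only, with hypothetical offeror names and prices of our own choosing) shows the selection logic described above: non-cost factors are rated acceptable or unacceptable, and the award goes to the lowest priced, technically acceptable proposal, with no tradeoff permitted.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    offeror: str
    price: float                   # total evaluated price
    technically_acceptable: bool   # pass/fail rating on non-cost factors

def lpta_award(offers):
    """Return the lowest priced, technically acceptable offer.

    Non-cost factors are rated only acceptable/unacceptable; no tradeoff
    between price and technical merit is permitted under LPTA.
    """
    acceptable = [o for o in offers if o.technically_acceptable]
    if not acceptable:
        return None  # no award can be made from these offers
    return min(acceptable, key=lambda o: o.price)

# Hypothetical proposals for illustration only.
offers = [
    Offer("Firm A", 14.8e6, True),
    Offer("Firm B", 13.9e6, False),  # lowest price, but rated unacceptable
    Offer("Firm C", 15.2e6, True),
]
print(lpta_award(offers).offeror)  # Firm A
```

By contrast, a tradeoff selection would score the non-cost factors on a graded scale and permit paying Firm C's higher price if its technical merit justified the premium.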
We have previously found that DOD has awarded IDIQ contracts using the tradeoff process but then issued orders off of those IDIQ contracts using either the LPTA process or a tradeoff process. In other words, DOD employs both the LPTA and tradeoff processes for competitive orders issued against the same IDIQ contract, depending upon the requirement.

Past GAO Reports on DOD Source Selection Process

Since 2010, we have issued three reports on DOD's use of source selection processes. In October 2010, we found that, for 60 of the 88 contracts we reviewed, DOD used a tradeoff process and weighted non-cost factors as more important than price. In these cases, DOD was willing to pay more when a firm demonstrated it understood complex technical issues more thoroughly, could provide a needed product or service to meet deadlines, or had a proven track record in successfully delivering products or services of a similar nature. In addition, we determined that when making tradeoff decisions, DOD selected a lower priced proposal nearly as often as it selected a higher technically rated, but more costly, proposal. In so doing, DOD chose not to pay more than $800 million in proposed costs by selecting a lower priced offer over a higher technically rated offer in 18 of the contracts we reviewed. The majority of solicitations where non-cost factors were equal to or less important than cost were for less complex requirements. We also found that DOD faced several challenges when using the best value tradeoff process, including difficulties in developing meaningful evaluation factors, the additional time investment needed to conduct best value tradeoff procurements, and a greater level of business judgment required of acquisition staff when compared to other acquisition approaches. To help DOD effectively employ the best value tradeoff process, we recommended that DOD develop training elements such as case studies that focus on reaching tradeoff decisions. DOD concurred and implemented the recommendation in August 2012.

In 2014, we found that DOD had increased its use of the LPTA process for new contracts with obligations over $25 million, using the LPTA source selection process to award an estimated 36 percent of new fiscal year 2013 contracts compared to 26 percent in fiscal year 2009. We found that contracting officials' decisions on which source selection process would be used were generally rooted in knowledge about the requirements and contractors. For contracts with obligations over $25 million, DOD used the LPTA source selection process primarily to acquire commercial products such as fuel, and we identified relatively few uses of the LPTA process to acquire higher dollar services. For contracts with obligations over $1 million and under $25 million, DOD used the LPTA process an estimated 45 percent of the time for a mix of products and services, including fuel, aircraft parts, computer equipment, construction-related services, engineering support services, and ship maintenance and repairs. We did not make recommendations to DOD in this report.

In 2017, we reviewed contracts that DOD awarded using the LPTA process for service categories for which Section 813 established that the LPTA process is to be avoided to the maximum extent practicable, such as those for information technology, knowledge based services, cybersecurity, and other professional support services.
We found that the Army, Navy, and Air Force rarely used the LPTA source selection process for information technology and selected support services contracts valued at $10 million or more that were awarded in the first half of fiscal year 2017. Our analysis found that the three military departments awarded 781 new contracts valued at $10 million or more during this time frame. Of these 781 contracts, 133 contracts were awarded for information technology and support services. However, only 9 of the 133 contracts used the LPTA source selection process. In addition, we found that contracting officials' reasons for using the LPTA process were generally consistent with the criteria listed in Section 813. We did not make recommendations to DOD in this report.

About One-Quarter of Fiscal Year 2017 DOD Contracts and Orders Valued at $5 Million and Above Used the LPTA Process

Based upon the results of our generalizable sample, we estimate that about 26 percent of contracts and orders competitively awarded by the Army, Navy, Air Force, and DLA valued at $5 million and above in fiscal year 2017 used the LPTA process. Table 1 shows the number and percentage of contracts and orders in our sample that we estimate to have used the LPTA process. We reviewed the 46 contracts and orders for which the Army, Navy, Air Force, and DLA used the LPTA process and found that 20 were for products and 26 for services. Within this sample, the Army, Navy, Air Force, and DLA bought a variety of products and services (see figure 2).

Contracting Officials Used the LPTA Process for Reasons Consistent with Current Requirements

Contracting officials associated with the 14 contracts and orders we selected used the LPTA process, in part, because they determined there was no tradeoff available or that DOD would not derive any benefit from paying a premium for offers that exceeded the minimum capabilities. As previously mentioned, DOD's March 2016 Source Selection Procedures currently state that the LPTA process may be used when there would not be additional value to a product or service exceeding the required technical or performance requirements. Therefore, these determinations are consistent with DOD's current guidance. The following examples illustrate contracting officials' rationale for using the LPTA process.

A DLA contracting official awarded a contract for natural gas with a ceiling value of approximately $14.8 million over a 2-year ordering period. The contracting official stated that no tradeoffs were available because the requirement was specifically for natural gas that would be used in government owned facilities across multiple states and an alternative fuel source was not required. Therefore, offerors were evaluated, from a technical acceptability perspective, on whether they were able to deliver the amount of natural gas required by the specified time frames.

Similarly, the Marine Corps purchased over 15,400 general-purpose laptops with an estimated value of approximately $14.1 million. To meet a DOD initiative of upgrading general use laptops to Windows 10, Marine Corps officials determined that a commercially available laptop would meet their requirements. Marine Corps contracting officials stated that their market research showed laptops with additional capabilities were available; however, they determined it was not beneficial to pay for higher capabilities.
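For readers interested in how such a sample-based estimate is formed, a minimal sketch follows. It uses the sample counts reported in this review (46 verified LPTA awards among 172 sampled contracts and orders) together with a simple normal-approximation confidence interval. The actual estimate reflects the design of the generalizable sample, so the interval computed here is an illustrative assumption rather than our published margin of error.

```python
import math

sample_size = 172   # competitively awarded contracts and orders sampled
lpta_count = 46     # verified as using the LPTA process

p_hat = lpta_count / sample_size                    # point estimate, ~26.7%
se = math.sqrt(p_hat * (1 - p_hat) / sample_size)   # simple standard error
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"Estimated LPTA share: {p_hat:.1%} "
      f"(95% CI, simple approximation: {ci_low:.1%} to {ci_high:.1%})")
```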
Overall, for the 14 contracts and orders we reviewed, contracting officials identified several reasons for using the LPTA process (see table 2). In many cases, contracting officials cited more than one reason. The following examples illustrate reasons contracting officials identified for the use of the LPTA process.

The Air Force awarded a foreign military sales IDIQ contract, with a maximum ordering value of $65 million, to provide planned maintenance and supply support services for F-16 aircraft owned by Taiwan. The contract had a one-month mobilization period, a 5-year base ordering period, and two 1-year option ordering periods. According to Air Force officials, the contract's requirements were well-defined because the standard tasks and processes, such as engine maintenance, corrosion prevention, and aircraft washing, are strictly defined by an Air Force instruction. Contracting officials determined there was a low risk of contractor failure because (1) the pool of qualified firms interested in performing this type of contract is limited, and (2) the incumbent workforce had to be offered the chance to continue working under any new contract, regardless of the management company that won the award.

The Navy issued an order under a multiple-award IDIQ contract, with a value of $6.1 million, to renovate office space in two buildings at a naval air station. The Navy determined that the risk of contractor failure on this order was low because the contractor was pre-qualified as part of the initial contract award. Additionally, contracting officials stated the requirement was well-defined, as the contractor was required to renovate the space according to the plans provided by the Navy.

The Navy awarded a multiple-award IDIQ contract with an estimated maximum value of $502.6 million, over a one-year base period and four 1-year options, for repair and maintenance of non-nuclear surface ships harbored in San Diego. Navy officials considered the requirements non-complex due to the nature of the work to be performed. In this case, the tasks included welding, marine pipefitting, sheet metal forming, and electrical/electronic repairs, among others, which were to adhere to established standards that would be specified in the orders. The contracting officials stated that for more complex repairs they would use a different contract.

DLA awarded a contract with an estimated value of $5.7 million, over a 2-year ordering period, for a commercial jet fuel system icing inhibitor to be delivered to Middle Eastern destinations, such as Qatar. Given that the additive was a commercial product, DLA determined that awarding the contract to the offeror that could deliver the required quantity within specific time frames at the lowest price was in the government's best interests.

Of the 14 contracts and orders we reviewed, 4 orders were for services for which Section 813 directs that DOD should, to the maximum extent practicable, avoid using the LPTA process. These four orders were for cybersecurity services, information technology services, and knowledge-based professional services. DOD contracting officials' rationale for using the LPTA process for these four orders was also consistent with guidance in DOD's March 2016 Source Selection Procedures, as illustrated below:

The Air Force issued an order with an estimated value of $11.6 million, with a 1-year base period and four 1-year options, for healthcare information technology system support services at several European military installations.
These services included help desk support and network administration services, such as maintenance, administration, and troubleshooting services for the local computer servers. Air Force contracting officials stated the requirements were well-defined, as the services have been provided by a contractor for a long time and were well understood. Further, the officials stated they confirmed that the requiring office was not willing to pay for additional services beyond the minimum requirements. Contracting officials also determined there was a low risk of contractor failure because they were placing an order under a multiple-award IDIQ contract and all contract holders were pre-qualified to perform the work.

The Air Force issued an order with a reported value of $21.6 million, with a 1-year base period and four 1-year options, for information technology services, which included cybersecurity services, network management administration, requirements analysis, and communications planning at a European military installation. Air Force contracting officials stated the requirements for this contract were well understood, as the Air Force had been contracting for these services for more than 15 years. Further, contracting officials stated the contractor was required to use an existing government software program to identify any information technology threats. Finally, contracting officials determined there was a low risk of contractor failure because they were issuing an order under a multiple-award IDIQ contract for which all contract holders were pre-qualified to perform the work.

The Army issued an order with an estimated value of $10.7 million, with a 1-year base period and two 1-year options, for professional support services at the United States Army Sergeants Major Academy at Biggs Army Airfield, El Paso, Texas. Under this order, the contractor was to provide instructors to teach a pre-existing curriculum to Sergeants Major and Master Sergeants in strategic operations, preparing them to take positions throughout DOD. The order provided that the instructors should be former Army sergeants and hold a Master's degree, with a preference for a Master's degree in adult education. In addition, the instructors had to have or had to obtain specific Army contractor instruction certifications. Therefore, the contracting official stated there was no benefit in having instructors that exceeded these recommended qualifications.

The Navy issued an order with a value of approximately $10 million and a period of performance of approximately four years and five months for installation of furniture/equipment onboard the USS George Washington aircraft carrier. Tasks included removing furniture, installing new furniture in the same place, and painting, among others, to maintain ship habitability. Contracting officials determined there was no value in performing a tradeoff because the tasks were routine work and all of the IDIQ contract holders had previously been found to have the technical capability to perform the work.

DOD Contracting Officials Considered Most of the Section 813 Criteria before Using the LPTA Process, but Were Confused by Some Aspects

Contracting officials stated that they generally considered five of the eight criteria in Section 813 when awarding the 14 contracts and orders we reviewed. This was the case, in part, because, according to these officials, those five criteria are inherently considered when determining which source selection process should be used.
Further, based on our analysis, these five criteria are generally reflected in DOD's March 2016 Source Selection Procedures. Table 3 illustrates whether contracting officials considered the Section 813 criteria when they decided to use the LPTA process for the 14 contracts and orders we reviewed. As previously discussed, DOD has not yet updated its regulations to put the Section 813 criteria into effect. A DPC official stated that until DOD regulations are updated, DOD contracting officials are not required to consider the Section 813 criteria. Most of the contract files we reviewed did not include a written justification for the use of the LPTA process. A DPC official stated that when the DFARS is updated to implement Section 813, DOD intends to include a requirement for contracting officials to prepare a written justification for the use of the LPTA process.

Some contracting officials were uncertain how to address the other two criteria, which were generally not considered. For example, 4 of the 14 contracts and orders that we reviewed were for products. As stated above, one of the Section 813 criteria will require contracting officers who are purchasing goods to determine that the goods are predominantly expendable in nature, nontechnical, or have a short life expectancy or shelf life. Two of the four contracting officials for the products we reviewed stated they made this determination for these purchases. However, the other two stated that they would not have known how to consider this criterion for their procurements. Specifically, a Marine Corps contracting official who purchased general use computers stated it was unclear if a computer that will be replaced every 5 years would be considered to have a short shelf life. Additionally, an Air Force contracting official who purchased Blackberry licenses stated that it was unclear if this criterion would apply to such licenses, and if it did, whether a 1-year license would be considered a short shelf life. As a result, this contracting official stated he would not know how to consider this criterion in similar acquisitions.

Additionally, 12 of the 14 contracting officials we interviewed raised a number of questions about how to consider full life-cycle costs, including operations and support, which is another criterion under Section 813. In this regard:

Eight contracting officials did not think life-cycle costs applied to their acquisitions and therefore did not understand what costs they would have considered. For example, an Army contracting official who purchased construction quality assurance and oversight services stated the concept of life-cycle costs generally applies to products, not services. Similarly, a DLA official who contracted for a de-icing agent stated that this particular product does not have life-cycle costs associated with it.

Three contracting officials raised questions regarding who would be in the best position to determine life-cycle costs. For instance, an Air Force contracting official stated life-cycle costs are determined by the requiring office, not by the contracting office, so it was not clear what role the contracting office would have in evaluating life-cycle costs.

One contracting official who awarded an IDIQ contract stated this criterion would not apply to such an award because specific requirements would be determined when issuing orders under the IDIQ contract. Therefore, the contracting officer believed that any life-cycle costs should be considered when issuing subsequent orders.
In the two remaining cases, one contracting official stated he was not confused by this criterion but did not consider life-cycle costs when awarding the contract to provide instructors at the Army Sergeants Major Academy. In the other case, the contracting official stated life-cycle costs for a $14.8 million contract for natural gas had been considered, but the official determined there were no life-cycle costs associated with the use of natural gas in this instance.

As previously discussed, DOD has not yet revised the DFARS to include the criteria specified in Section 813, nor have DOD's March 2016 Source Selection Procedures been updated to address consideration of the new criteria. A DPC official stated that the DFARS is in the process of being updated and will reflect Section 813. For example, the official stated that the updated regulation will require written justifications for using the LPTA process. This official, however, could not comment on whether the revisions will provide clarification, beyond what was written in Section 813, on how to apply the two criteria that DOD contracting officials generally found confusing. Without further clarification, such confusion is likely to continue. As a result, contracting officials will be at risk of not consistently applying the criteria in Section 813.

Our work also found differing opinions on whether the criteria in Section 813 would apply to the issuance of competitive orders under multiple-award IDIQ contracts. Our prior work has found that such orders represent a significant portion of DOD's annual contract obligations. For example, 7 of the 14 contracting officials generally stated the criteria in Section 813 could apply at the order level depending on the nature of the requirement. They stated that requirements are determined when issuing orders and, as a result, it is possible that methods including the LPTA process or a tradeoff process could be used when issuing orders. Conversely, the remaining 7 contracting officials stated the criteria should not apply to the issuance of orders, in part, because these criteria would generally have been considered at the time the IDIQ contract was awarded. Military department policy officials we interviewed generally believed that the criteria in Section 813 should not be applicable to orders. When we raised this issue, a DPC official stated that DOD plans to address whether the Section 813 criteria are applicable to orders when DOD revises the DFARS.

Conclusions

As DOD prepares to revise the DFARS to implement the eight criteria in Section 813, as amended, it has an opportunity to address the issues we identified. DOD stated its intent to require a written justification for using LPTA and to address whether the Section 813 criteria are applicable to the issuance of task and delivery orders. It is equally important that, in revising the regulation, DOD also clarify how contracting officers are to determine whether a good is expendable in nature, nontechnical, or has a short life expectancy or shelf life, and how they are to consider whether the lowest price reflects full life-cycle costs, including for operations and support, for services as well as products. Absent additional direction, contracting officials across DOD may not understand how to consistently apply these criteria when using the LPTA process.
Recommendations for Executive Action

We are making the following two recommendations to DOD:

The Secretary of Defense should ensure that the Director, Defense Pricing and Contracting, addresses how contracting officials using the LPTA process should apply the Section 813 criterion regarding procurement of goods that are predominantly expendable in nature, nontechnical, or have a short life expectancy or shelf life as revisions to the DFARS are considered. (Recommendation 1)

The Secretary of Defense should ensure that the Director, Defense Pricing and Contracting, addresses how contracting officials using the LPTA process should apply the Section 813 criterion regarding full life-cycle costs, including for operations and support, as revisions to the DFARS are considered. (Recommendation 2)

Agency Comments

We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix I, DOD concurred with both of our recommendations. DOD stated that, in addition to its ongoing efforts to update its regulations, a new DFARS Procedures, Guidance and Information case was opened on October 25, 2018, to provide contracting officers with supplemental internal guidance on applying the new criteria for using LPTA. DOD anticipates that the revised regulations and the internal guidance will be published in the fourth quarter of fiscal year 2019. DOD also provided technical comments, which were incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Director, Defense Pricing and Contracting. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Defense

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Justin Jaynes (Assistant Director), Victoria Klepacz (Analyst in Charge), Jennifer Baker, Matthew Crosby, Lorraine Ettaro, Stephanie Gustafson, Julia Kennon, Roxanna Sun, Jay Still, Alyssa Weir, and Khristi Wilkins made key contributions to this report.
Why GAO Did This Study

When awarding a contract competitively, DOD may use the LPTA process, under which the lowest price is the determining factor when selecting an offer. Section 813, as amended, contained a provision for GAO to submit four annual reports on DOD's use of the LPTA process for contracts exceeding $5 million, as well as on how contracting officials considered eight specific criteria. GAO issued its first report in response to this provision in November 2017. This second report, among other things, assesses the extent to which (1) DOD used the LPTA process in fiscal year 2017 and (2) contracting officials considered Section 813 criteria when using the LPTA process. GAO selected a generalizable sample of 172 DOD contracts and orders valued at $5 million and above that were competitively awarded in fiscal year 2017. GAO verified that 46 of these contracts and orders used the LPTA process by reviewing solicitations. GAO selected 14 contracts and orders from the 46 based on the most frequently purchased products and services, reviewed documents, and interviewed officials to determine if the Section 813 criteria were considered.

What GAO Found

GAO estimates that about 26 percent of the Department of Defense's (DOD) contracts and orders valued at $5 million and above in fiscal year 2017 were competitively awarded using the lowest price technically acceptable (LPTA) process. DOD used the LPTA process to buy such things as equipment, fuel, information technology services, and construction services. Section 813 of the National Defense Authorization Act for Fiscal Year 2017, as amended, mandated that DOD revise its regulations to require that eight criteria be considered when using the LPTA process. As of September 2018, DOD had not yet done so. Accordingly, a DOD acquisition policy official stated that contracting officers are not yet required to consider these criteria. Nevertheless, GAO found that contracting officials generally considered five of the eight criteria for the 14 contracts and orders GAO reviewed (see table).

[Table omitted: consideration of the Section 813 criteria for the 14 contracts and orders reviewed. Source: GAO analysis of Section 813, DOD source selection guidance, contract file documents, and interviews with contracting officials. | GAO-19-54]

A DOD official stated that the updated regulations will reflect these eight criteria, including that justifications be documented. However, the official could not comment on whether the revisions will clarify how DOD contracting officials should implement the two other criteria that were generally not considered. Some contracting officials GAO interviewed were confused about how to apply these two criteria. Four of the 14 contracting officials stated that they did not understand how to apply the criterion regarding whether purchased goods are predominantly expendable in nature, nontechnical, or have a short life expectancy or shelf life. Additionally, 8 of the 14 contracting officials stated the criterion regarding an assessment of life-cycle costs was not applicable to their acquisitions. Absent clarification on how to consider these two criteria, DOD increases the risk that its contracting officials will not consistently implement the requirements in Section 813, as amended.

What GAO Recommends

GAO recommends that DOD address, as regulations are updated, how contracting officials should apply two Section 813 criteria that were generally not considered. DOD concurred with the recommendations and plans to revise its regulations and issue additional guidance by the end of fiscal year 2019.
Background

Strategic Petroleum Reserve

The Energy Policy and Conservation Act (EPCA) of 1975 authorized the SPR, partly in response to the Arab oil embargo of 1973 to 1974 that caused a shortfall in the international oil market. The SPR is owned by the federal government, managed by DOE's Office of Petroleum Reserves, and maintained by Fluor Federal Petroleum Operations LLC. The SPR stores oil in underground salt caverns along the Gulf Coast in Louisiana and Texas. DOE established an initial target capacity for the SPR of 500 million barrels based on U.S. import levels and implemented a phased approach to create large underground oil storage sites in salt formations, to reach a physical storage capacity of 750 million barrels. The SPR currently maintains four storage sites with a physical capacity of 713.5 million barrels.

Three recent laws required sales of oil from the SPR to fund its modernization and other national priorities. The Bipartisan Budget Act of 2015 provided for the drawdown and sale of 58 million barrels of oil from fiscal years 2018 through 2025 and authorized the sale of up to $2 billion worth of oil through fiscal year 2020 to be used for an SPR modernization program. The Fixing America's Surface Transportation Act provided for the drawdown and sale of 66 million barrels of oil from fiscal years 2023 through 2025. The 21st Century Cures Act provided for the drawdown and sale of 25 million barrels from fiscal years 2017 through 2019. DOE estimates that, as a result of these sales, the SPR will hold between 506 and 513 million barrels of oil by 2025.

In assessing whether member countries meet their net petroleum import obligations, the IEA counts both public and private oil reserves, although the United States meets its IEA obligation solely through the SPR. As of July 2017, according to IEA data, the SPR held the equivalent of 141 days of import protection and U.S. private oil holdings the equivalent of an additional 216 days, for a total of about 356 days. Based on EIA projections of net imports, between 506 and 513 million barrels of oil would be equivalent to about 242 to 245 days of net imports in 2025.

Regional Refined Product Reserves

The United States has two regional refined product reserves—the Northeast Home Heating Oil Reserve and the Northeast Gasoline Supply Reserve. The Northeast Home Heating Oil Reserve, which is not part of the SPR, holds 1 million barrels of ultra-low sulfur distillate, used for heating oil, for homes and businesses in the northeastern United States, a region heavily dependent upon the use of heating oil, according to DOE's website. The distillate is stored in leased commercial storage in terminals located in three states: Connecticut, Massachusetts, and New Jersey. In 2000, President Clinton directed the creation of the reserve to hold approximately 10 days of inventory, the time required for ships to carry additional heating oil from the Gulf of Mexico to New York Harbor.

The Northeast Gasoline Supply Reserve, a part of the SPR, holds a 1 million barrel supply of gasoline for consumers in the northeastern United States. According to DOE's website, this region is particularly vulnerable to gasoline disruptions as a result of hurricanes and other natural events. In response to Superstorm Sandy, which caused widespread gasoline shortages in the region in 2012, DOE conducted a test sale of the SPR and used a portion of the proceeds from the sale to create the reserve in 2014. The gasoline is stored in leased commercial storage in terminals located in three states: Maine, Massachusetts, and New Jersey.
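The days-of-import-protection figures cited above follow from a simple ratio of reserve volume to daily net petroleum imports. The sketch below reproduces that arithmetic; the implied 2025 net-import rate is backed out from the report's own barrel and day figures, so it is an illustrative assumption rather than a verified EIA projection.

```python
# Days of import protection = reserve volume / daily net imports.
# The implied 2025 net-import rate is derived from the figures above
# (506-513 million barrels ~= 242-245 days) and is for illustration only.

def days_of_protection(reserve_barrels, net_imports_bpd):
    return reserve_barrels / net_imports_bpd

implied_net_imports_bpd = 506e6 / 242   # ~2.09 million barrels per day

for reserve in (506e6, 513e6):
    print(f"{reserve / 1e6:.0f} million barrels -> "
          f"{days_of_protection(reserve, implied_net_imports_bpd):.0f} days")
```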
Statutory Release Authority for the SPR

Under conditions prescribed by EPCA, as amended, the President has discretion to authorize the release of petroleum products from the SPR to minimize significant supply disruptions. In the event of an oil supply disruption, the SPR can supply the market by selling stored oil. Should the President order an emergency sale of SPR oil, DOE conducts a public sale, evaluates and selects offers, and awards contracts to the highest qualified bidders. Purchasers are responsible for making their own arrangements for the transport of the SPR oil to its final destination. The Secretary of Energy also is authorized to release petroleum products from the SPR through an exchange for the purpose of acquiring oil for the SPR. According to DOE officials, this authority is sometimes utilized in oil supply disruptions when a specific volume of SPR oil is provided to a private sector company in an emergency exchange for an equal quantity of oil plus an additional amount as a premium to be returned to the SPR in the future. According to DOE's website, emergency exchanges are generally requested by a company after an event outside the control of the company, such as a hurricane, disrupts commercial oil supplies. The Secretary of Energy is also authorized to carry out test drawdowns through a sale or exchange of petroleum products to evaluate SPR's drawdown and sales procedures. When oil is released from the SPR, it flows through commercial pipelines or on waterborne vessels to refineries, where it is converted into gasoline and other petroleum products, and then transported to distribution centers for sale to the public.

Changing Petroleum Markets

Petroleum markets have changed substantially in the 40 years since the establishment of the SPR, including increases in global markets, increases in domestic oil production, and declines in net petroleum imports.

Increases in global markets. At the time of the Arab oil embargo, price controls in the United States prevented the prices of oil and petroleum products from increasing as much as they otherwise might have, contributing to a physical oil shortage that caused long lines at gasoline stations throughout the United States. Now that the oil market is global, the price of oil is determined in the world market, primarily on the basis of supply and demand. In the absence of price controls, scarcity is generally expressed in the form of higher prices, as purchasers are free to bid as high as they want to secure oil supply. In a global market, an oil supply disruption anywhere in the world raises prices everywhere. Releasing oil reserves during a disruption provides a global benefit by reducing oil prices in the world market.

Increases in domestic oil production. Reversing a decades-long decline, U.S. oil production has generally increased in recent years. According to EIA data, U.S. production of oil reached its highest level in 1970 and generally declined through 2008, reaching a level of almost one-half of its peak. During this time, the United States increasingly relied on imported oil to meet growing domestic energy needs. However, recent improvements in technologies have allowed producers to extract oil from shale formations that were previously considered to be inaccessible because traditional techniques did not yield sufficient amounts for economically viable production.
In particular, the application of horizontal drilling techniques and hydraulic fracturing—a process that injects a combination of water, sand, and chemical additives under high pressure to create and maintain fractures in underground rock formations that allow oil and natural gas to flow—have increased U.S. oil and natural gas production.

Declines in net petroleum imports. One measure of the economy's vulnerability to oil supply disruptions is to assess net petroleum imports—that is, imports minus exports. Net petroleum imports have declined by over 60 percent from a peak of about 12.5 million barrels per day in 2005 to about 4.8 million barrels per day in 2016. In 2006, net imports were expected to increase in the future, increasing the country's reliance on foreign oil. However, imports have declined since then and, according to EIA's most recent forecast, are expected to remain well below 2005 import levels into the future. Canada and Mexico are the nation's major foreign sources for imported oil. Furthermore, the United States has increased its exports of oil and refined petroleum products.

DOE Has Primarily Used Exchanges from the SPR to Private Companies to Address Domestic Petroleum Disruptions

To quantify how DOE has used the SPR to address domestic petroleum supply disruptions, we reviewed DOE and EIA documents. We also reviewed our past work from August 2006 to January 2014. Our preliminary analysis indicates that DOE has primarily used exchanges to private companies in response to domestic supply disruptions such as hurricanes and other events. DOE released oil 24 times from 1985 through September 2017, including 11 releases in response to domestic supply disruptions. Of these 11 releases, 10 were exchanges, including 6 exchanges in response to hurricanes. One of the 11 releases was an SPR sale in response to Hurricane Katrina, which was part of a coordinated IEA release. Historic releases from the SPR are shown in figure 1.

Our preliminary analysis also indicates that the six exchanges from DOE to U.S. refineries in response to hurricanes totaled about 28 million barrels. Based on our preliminary analysis of DOE documents, most recently, in response to Hurricane Harvey in 2017, DOE exchanged 5 million barrels of oil to Gulf Coast refineries that requested supplies. Refinery operations largely depend on a supply of oil and feedstocks. Hurricane Harvey closed or restricted ports through which 2 million barrels of oil per day were imported, and several refineries had no supply options except for SPR oil. According to DOE officials, exchanges from the SPR have allowed refineries to continue to operate until alternative supply sources became available, ensuring continued production of refined petroleum products for use by consumers.

Based on our preliminary analysis of DOE documents, DOE's most significant response to a hurricane was in 2005 following Hurricane Katrina. As we reported in January 2014, oil platforms were evacuated and damaged, virtually shutting down all oil production in the Gulf region as a result of the hurricane. Based on our preliminary analysis of DOE documents, exchanges from the SPR, totaling 9.8 million barrels of oil, helped refineries offset this short-term physical supply disruption at the beginning of the supply chain, thereby helping to moderate the impact of the production shutdown on U.S. oil supplies.
In addition to these exchanges, DOE also participated in an IEA collective action that was called in response to Hurricane Katrina by selling 11 million barrels of oil from the SPR, and IEA member countries delivered and sold much needed gasoline and other products to the United States. In total, DOE sold or exchanged 20.8 million barrels of oil from the SPR.

Our preliminary analysis of DOE documents and reports also showed that although almost all of DOE's releases in response to domestic supply disruptions have been from the SPR, DOE also used the Northeast Home Heating Oil Reserve in response to Superstorm Sandy in 2012. According to DOE's website, the agency transferred approximately 120,000 barrels of fuel to the Department of Defense to help provide fuel for first responders.

The SPR Is Limited in Its Ability to Respond to Domestic Disruptions

Based on our past work and preliminary observations, the SPR is limited in its ability to respond to domestic petroleum supply disruptions for three main reasons. First, as we reported in September 2014, reserves are almost entirely composed of oil and not refined products, which may not be effective in responding to all disruptions that affect the refining sector. Second, as we reported in September 2014, reserves are nearly entirely located in one region, the Gulf Coast, which may limit responsiveness to disruptions in other regions. Third, during the course of our ongoing work, we reviewed DOE and energy task force reports that found that the statutory authorities governing SPR releases may inhibit their use for regional disruptions.

Composition: As we reported in September 2014, the SPR is almost entirely composed of oil, which may not be effective in responding to all disruptions that affect the refining sector. In September 2014, we reported that many recent economic risks associated with supply disruptions have originated from the refining and distribution sectors, which provide refined products, such as gasoline, rather than from shortages of oil. Oil reserves are of limited use in such instances. We reported in May 2009 that following Hurricanes Katrina and Rita, nearly 30 percent of U.S. refining capacity was shut down for weeks, disrupting supplies of gasoline and other products. The SPR could not mitigate the effects of disrupted supplies because it holds oil. As of September 2017, over 99 percent of the SPR and its Northeast Gasoline Supply Reserve component (about 674 of 675 million barrels) is held as oil rather than as refined products, such as gasoline and diesel. Moreover, Gulf Coast hurricanes have severely impacted refinery operations, such as Hurricane Katrina in 2005, Hurricane Ike and Hurricane Gustav in 2008, and Hurricane Harvey this year. According to DOE officials, oil reserves are not able to mitigate the potential effects of large-scale Gulf Coast refinery outages that may impact refined product deliveries.

Location: As we reported in September 2014, the SPR is nearly entirely located in one region, the Gulf Coast, which may limit its ability to respond to disruptions in other regions. In the Gulf Coast, the SPR is located close to a major refining center as well as to distribution points for tankers, barges, and pipelines that can carry oil from it to refineries in other regions of the country. Most of the system of oil pipelines in the United States was constructed in the 1950s, 1960s, and 1970s to accommodate the needs of the refining sector and demand centers at the time.
Given the SPR's current location in the Gulf Coast, transporting oil from the reserve may impact commercial distribution of oil. Based on our ongoing work, DOE reported in its 2016 long-term strategic review of the SPR that expanding North American oil production and the resulting shifts in how oil is transported around the country have reduced the SPR's ability to add incremental barrels of oil to the market under certain scenarios in the event of an oil supply crisis. This means that while the SPR remains connected to physical assets that could bring oil to the market, in many cases, forcing SPR oil into the distribution system would result in an offsetting reduction in domestic commercial oil flows. As we reported in September 2014, it may be more difficult to move oil from the SPR to refineries in certain regions of the United States. For example, since no pipelines connect the SPR to the West Coast, supplies of petroleum products and oil must be shipped by pipeline, truck, or barge from other domestic regions or by tanker from foreign countries. Such modes of transport are slower and more costly than via pipelines. For example, it can take about 2 weeks for a vessel to travel from the Gulf Coast to Los Angeles—including transit time through the Panama Canal.

Statutory release authorities: In the course of our ongoing work, we reviewed DOE and energy task force reports that found that the statutory authorities governing SPR releases may inhibit their use for regional disruptions. DOE is authorized to release petroleum distillate (fuel) from the Northeast Home Heating Oil Reserve upon a finding by the President of a severe energy supply interruption that includes a dislocation in the heating oil market or other regional supply shortage. On the other hand, because the Northeast Gasoline Supply Reserve is a part of the SPR, DOE can release gasoline from that reserve only after the President makes the statutorily required findings for release from the SPR, which do not explicitly include the existence of a regional supply shortage. According to DOE's 2016 long-term strategic review of the SPR, a regional product reserve is meant to address regional supply shortages, whereas the SPR, of which the Northeast Gasoline Supply Reserve is a part, is meant to address severe energy supply interruptions that have a national impact. As a result, according to the review, in practice this means that a release from the gasoline reserve would have to have a national impact. The Quadrennial Energy Review of 2015 recommended that Congress integrate the authorities of the President to release products from the regional product reserves—the Northeast Home Heating Oil Reserve and Northeast Gasoline Supply Reserve—into a single, unified authority by amending the trigger for the release of fuel from the two refined product reserves so that they are aligned and properly suited to the purpose of a product reserve, as opposed to an oil reserve.

As discussed, based on our preliminary observations, DOE has used the SPR in response to domestic supply disruptions, but the effectiveness of these releases is unclear because DOE has not formally assessed all of them. DOE has exchanged about 28 million barrels of oil in response to hurricanes, but we found only two reports assessing DOE's responses to Hurricanes Gustav, Ike, Katrina, and Rita, and it is unclear whether DOE has examined other responses.
According to a 2006 DOE Inspector General report, DOE used the SPR and its assets with great effectiveness to address emergency energy needs in the crises surrounding Hurricanes Katrina and Rita, but the concentration of SPR sites along the Gulf Coast meant the United States also had to rely on refined petroleum products from Europe. The report noted that despite being in the path of the hurricanes' destruction, the SPR promptly fulfilled requests for oil from refineries suffering from storm-related supply shortages. However, the damage caused by Hurricane Katrina demonstrated that the concentration of refineries on the Gulf Coast and resulting damage to pipelines left the United States to rely on imports of refined petroleum products from Europe, as part of an IEA collective response. Consequently, regions experienced a shortage of gasoline, and prices rose. DOE testified in 2009 that despite a response from the SPR and IEA, some markets south of Virginia and north of Florida could not be immediately supplied with refined products due to a lack of infrastructure to receive and distribute imports from the Atlantic coast to inland population centers.

Exchanges with multiple refiners totaling 5.4 million barrels of SPR oil were authorized in response to Hurricanes Gustav and Ike in 2008. DOE assessed this response and submitted a report to Congress in 2009. According to DOE's 2009 report, the exchanges conducted in September and October 2008 were successful in providing emergency petroleum supplies to refiners experiencing shortages caused by Hurricanes Gustav and Ike. As we reported in May 2009, as originally enacted, EPCA envisioned the possibility that the SPR would include a variety of petroleum products stored at locations across the country. In a 2009 hearing, the then Deputy Assistant Secretary for Petroleum Reserves testified that DOE still considers a large SPR focused on oil storage to be the best way to protect the nation from the negative impacts of a short-term international interruption to U.S. oil imports; however, the hurricanes of 2005 and 2008 showed that the SPR may be limited in its ability to address some short-term interruptions to domestic refined products supply and distribution infrastructure.

Other IEA Members Structure Their Reserves Differently, with Some Holding Industry Reserves and Refined Products, and DOE Has Taken Steps to Explore These Structures

Based on information reviewed during the course of our ongoing work, 27 of the 29 IEA member countries use one of five reserve structures, also known as stockholding structures, to respond to disruptions; under these structures, countries hold public reserves, industry reserves, or a combination of the two. The five structures are shown in figure 2. Also, most members hold refined petroleum products, with many holding at least a third of their reserves in refined petroleum products. Some members hold their refined petroleum products in different regions across their country to respond to disruptions.

Based on our preliminary analysis of information on the 29 IEA member countries, 18 place a stockholding obligation on industry either exclusively or in part to meet their total emergency reserve needs. Most of these countries distribute the stockholding obligation in proportion to companies' share of oil imports or of sales in the domestic market. However, several member countries instead impose a higher obligation on refineries because of their high amount of operating oil.
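As a concrete illustration of the proportional approach described above, the following sketch (with hypothetical company shares and an arbitrary national obligation) allocates a stockholding obligation across companies in proportion to their share of imports or domestic sales. Actual national schemes differ in their bases and adjustments, such as the higher obligations some countries place on refineries.

```python
def allocate_obligation(total_obligation_barrels, market_shares):
    """Split a national stockholding obligation across companies in
    proportion to each company's share of imports or domestic sales.

    market_shares: mapping of company name -> market share (shares sum to 1).
    """
    return {company: total_obligation_barrels * share
            for company, share in market_shares.items()}

# Hypothetical market shares and obligation, for illustration only.
shares = {"Refiner A": 0.40, "Importer B": 0.35, "Importer C": 0.25}
print(allocate_obligation(90e6, shares))
# {'Refiner A': 36000000.0, 'Importer B': 31500000.0, 'Importer C': 22500000.0}
```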
According to a 2014 IEA report, most IEA members hold some amount of refined petroleum products, and a European Union (EU) directive generally requires EU members to ensure that at least one-third of their stockholding obligation is held in the form of refined petroleum products. For example, according to the IEA's website, Germany's stockholding agency, Erdolbevorratungsverband (EBV), holds 55 percent of its reserve in refined petroleum products such as gasoline, diesel fuel, and light heating oil. In contrast, the United States holds almost all of its reserves in oil rather than refined petroleum products.

Some IEA member countries geographically disperse their reserves of refined petroleum products to be able to respond to domestic disruptions. For example, according to the IEA's website, to maintain a wide geographical distribution of emergency reserves, the French stockholding agency stores refined petroleum products in each of its seven geographic zones, and the reserves in each zone are to represent specified amounts based on consumption so that the agency can respond to emergencies. During a labor strike in December 2013, France used its emergency reserves to supply local gas stations when delivery of fuel was impeded for a prolonged period of time, according to a French document. In another example, the IEA reported that Germany holds petroleum product reserves in several regions of the country and that the reserves are to be distributed throughout Germany so that a minimum reserve equivalent to a 15-day supply is maintained in each of five designated supply areas. The rationale is to prevent logistical bottlenecks that could occur if all emergency reserves were stored centrally, according to a 2014 IEA report.

Based on our preliminary observations, DOE has taken some steps to evaluate different structures for holding reserves. However, the agency has not formally evaluated other countries' structures in over 35 years and has not finalized its 2015 studies on regional petroleum product reserves. According to DOE officials, the agency explored the feasibility of adopting the industry structure shortly after creating the SPR and concluded that this and other structures were not feasible in the United States. In 1980, DOE studied the feasibility of adopting the agency structure, which is the most similar to the SPR since the only major difference is how the reserve is funded, according to DOE officials. According to IEA documents, in the agency structure the reserve is generally funded by a tax or levy on products or industry, which is passed down to the consumer. In contrast, the SPR is funded through congressional appropriations. However, DOE officials we interviewed cautioned that the agency has not reassessed its findings from 35 years ago. As mentioned above, in 2016 DOE reassessed the SPR in light of the changing global oil market, but this assessment did not include a review of other IEA countries' structures.

Our preliminary review indicates that DOE examined the feasibility of additional regional petroleum product reserves in two 2015 studies, but it did not finalize these studies or expand the SPR to include additional reserves.
In September 2014, we reported that DOE officials told us they were conducting a regional fuel resiliency study that would provide insights into whether there is a need for additional regional product reserves and, if so, where these reserves should be located. The Quadrennial Energy Review of 2015 recommended that the agency analyze the need for additional or expanded regional product reserves by undertaking updated cost-benefit analyses for all of the regions of the United States that have been identified as vulnerable to fuel supply disruptions. Figure 3 illustrates vulnerabilities that DOE identified in 2014. In response to the 2015 recommendation, DOE contractors studied the feasibility of additional regional petroleum product reserves, as part of the SPR, in the U.S. Southeast and West Coast regions to address supply vulnerabilities from hurricanes and earthquakes, respectively. According to DOE officials, weather events in the Southeast are of higher probability but lower consequence, and events on the West Coast are of lower probability but higher consequence. DOE did not finalize its 2015 studies on regional petroleum product reserves or make them publicly available. According to DOE officials, because consensus could not be reached within the Administration on several issues associated with the refined product reserve studies, these studies were not included as part of DOE’s 2016 long-term strategic review of the SPR. Our ongoing work indicates that DOE’s 2016 long-term strategic review of the SPR did not account for the risks of domestic supply disruptions as a factor in determining the appropriate size, location, and composition of the SPR. Prior to the two 2015 studies, in 2011, DOE carried out a cost-benefit study of the establishment of a refined product reserve in the Southeast and estimated that such a reserve would reduce the average gasoline price rise by 50 percent to 70 percent in the weeks immediately after a hurricane landfall, resulting in consumer cost savings, according to the Quadrennial Energy Review of 2015. In closing, I note that we are continuing our ongoing work examining issues that may help inform future considerations for the SPR. Given the constrained budget environment and the evolving nature of energy markets and their vulnerabilities, it is important that DOE ensure the SPR is an efficient and effective use of federal resources. We look forward to continuing our work to determine whether additional DOE actions may be warranted to promote this objective. Chairman Upton, Ranking Member Rush, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Frank Rusco, Director, Natural Resources and Environment, at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Quindi Franco, Assistant Director; Philip Farah, Ellen Fried, Nkenge Gibson, Cindy Gilbert, Gregory Marchand, Patricia Moye, Camille Pease, Oliver Richard, Danny Royer, Rachel Stoiko, Marie Suding, and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Over 4 decades ago, Congress authorized the SPR—the world's largest government-owned stockpile of emergency oil—to release oil to the market during supply disruptions and protect the U.S. economy from damage. The SPR is managed by DOE. According to DOE's strategic plan, the SPR benefits the nation by providing an insurance policy against actual and potential interruptions in U.S. petroleum supplies caused by international turmoil and hurricanes, among other things. The SPR also helps the United States meet its obligations as one of 29 members of the IEA—an international energy forum established to help members respond to major oil supply disruptions—including the obligation to hold reserves of oil or refined petroleum products equal to 90 days of net petroleum imports. The SPR held almost 674 million barrels of oil at the end of September 2017. This testimony primarily focuses on preliminary observations from ongoing work on (1) DOE's use of the SPR in response to domestic petroleum supply disruptions, (2) the extent to which the SPR is able to respond to domestic petroleum supply disruptions, and (3) how other IEA members structure their strategic reserves and the extent to which DOE has examined these structures. GAO reviewed past work from August 2006 through September 2014 and DOE and IEA documentation. GAO also interviewed DOE and IEA officials, as part of GAO's ongoing work. What GAO Found GAO's preliminary analysis of Department of Energy (DOE) documents indicates that DOE has primarily used the Strategic Petroleum Reserve (SPR) to exchange oil with companies in response to domestic supply disruptions, such as hurricanes. In the event of a supply disruption, the SPR can supply the market by either exchanging oil for an equal quantity of oil plus an additional amount as a premium to be returned to the SPR in the future or selling stored oil. Since the SPR was authorized in 1975, DOE has released oil 11 times in response to domestic supply disruptions. All but one were in the form of an exchange, including six exchanges in response to hurricanes. For example, Hurricane Harvey in 2017 closed or restricted ports through which 2 million barrels of oil per day were imported. In response, DOE exchanged 5 million barrels of oil with Gulf Coast refineries. According to DOE officials, exchanges from the SPR allowed refineries to operate, ensuring continued production of refined petroleum products for use by consumers. Based on past GAO work and preliminary observations, the SPR is limited in its ability to respond to domestic supply disruptions, including severe weather events, for three main reasons. First, as GAO reported in September 2014 (GAO-14-807), the SPR is almost entirely composed of oil and not refined products like gasoline, which may not be effective in responding to all disruptions. For example, following Hurricanes Katrina and Rita, nearly 30 percent of U.S. refining capacity was shut down for weeks, disrupting supplies of gasoline and other petroleum products. The SPR could not mitigate the effects of disrupted supplies. Second, as GAO also reported in September 2014, the SPR is nearly entirely located on the Gulf Coast, so it may not be responsive to disruptions in other regions, such as the West Coast. Third, in its ongoing work, GAO reviewed DOE and energy task force reports finding that statutory authorities governing SPR releases may inhibit their use for regional disruptions.
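The following is a minimal sketch of the IEA import-cover arithmetic noted above: reserves measured in days of net petroleum imports against the 90-day obligation. The reserve figure is the roughly 674 million barrels reported above; the daily net-import rate is a hypothetical placeholder, not a figure from this statement.

```python
# Days of net-import cover: emergency reserves divided by average daily
# net petroleum imports, compared against the IEA's 90-day obligation.
# In practice the IEA counts both public and industry stocks; this
# sketch uses a single reserve figure for simplicity.
reserve_barrels = 674_000_000     # approximate SPR holdings cited above
net_imports_bpd = 3_500_000       # hypothetical daily net imports (placeholder)

days_of_cover = reserve_barrels / net_imports_bpd
print(f"Days of net-import cover: {days_of_cover:.0f}")
print("Meets the 90-day obligation" if days_of_cover >= 90
      else "Falls short of the 90-day obligation")
```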
GAO's preliminary observations show that other International Energy Agency (IEA) member countries generally have used one of five reserve structures configured in various ways. The structures are defined by whether countries hold either public reserves (e.g., the SPR), industry reserves (e.g., placing reserve holding requirements on industry), or a combination. Most IEA members hold refined petroleum products in reserve, with many members holding at least a third of their reserves in these products. For example, in Germany, 55 percent of reserves are in petroleum products. In addition, some IEA members' reserves are geographically dispersed in their countries to respond to disruptions. For example, France has reserves in each of its seven geographic zones and has used these to address fuel supply disruptions as a result of recent domestic strikes. DOE has taken some steps to evaluate other structures but has not formally evaluated the structures of other countries in over 35 years. In addition, DOE contractors studied the feasibility of regional product reserves in the Southeast and West Coast regions to address supply vulnerabilities from hurricanes and earthquakes, respectively, but DOE did not finalize the two 2015 studies. In 2016, DOE released a long-term strategic review of the SPR that Congress had required and GAO recommended. However, DOE did not include the results of the two studies in its 2016 review. What GAO Recommends GAO is not making recommendations but will consider making them, as appropriate, as it finalizes its work.
Background EPA regulates drinking water contaminants by issuing legally enforceable standards under the Safe Drinking Water Act that generally limit the levels of these contaminants in public water systems. EPA has issued such regulations for approximately 90 drinking water contaminants. Public water systems, including the DOD public water systems that provide drinking water to about 3 million people living and working on military installations, are required to comply with EPA and state drinking water regulations. While EPA has not issued legally enforceable standards for PFAS in drinking water, the agency has monitored water systems in the United States for six types of PFAS chemicals—including PFOS and PFOA—in order to understand the nationwide occurrence of these chemicals. This monitoring effort was part of a larger framework established by the Safe Drinking Water Act to assess unregulated contaminants. Under this framework, EPA is to select for consideration from a list (called the contaminant candidate list) those unregulated contaminants that present the greatest public health concern, establish a program to monitor drinking water for unregulated contaminants, and decide whether or not to regulate at least 5 such contaminants every 5 years (called a regulatory determination). EPA’s regulatory determinations are to be based on the following three broad statutory criteria, all of which must be met for EPA to decide that a drinking water regulation is needed: the contaminant may have an adverse effect on the health of persons; the contaminant is known to occur or there is a substantial likelihood that the contaminant will occur in public water systems with a frequency and at levels of public health concern; and in the sole judgment of the EPA Administrator, regulation of such contaminant presents a meaningful opportunity for health risk reduction for persons served by public water systems. To date, PFOS and PFOA are unregulated because EPA has not made a positive regulatory determination for these chemicals. Even when EPA has not issued a regulation, EPA may publish drinking water health advisories. In contrast to drinking water regulations, health advisories are nonenforceable. Health advisories recommend the amount of contaminants that can be present in drinking water—“health advisory levels”—at which adverse health effects are not anticipated to occur over specific exposure durations. Most recently, in May 2016 EPA issued lifetime health advisories for PFOS and PFOA. These advisories set the recommended health advisory level for each contaminant—or both contaminants combined—at 70 parts per trillion in drinking water. According to DOD, the department also considers information in these health advisories when determining the need for cleanup action at installations with PFOS and PFOA contamination. 
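As a concrete illustration of how the advisory levels above would be applied, the following is a minimal screening sketch in Python: it compares sample results against the 70-parts-per-trillion level, which EPA's 2016 advisories set for PFOS and PFOA individually or combined. The well names and measured values are hypothetical.

```python
# Screen drinking water sample results against EPA's 2016 lifetime
# health advisory level: 70 parts per trillion (ppt) for PFOS and
# PFOA, individually or combined. Sample data are hypothetical.
HEALTH_ADVISORY_PPT = 70.0

samples = [
    {"well": "Well 1", "pfos_ppt": 22.0, "pfoa_ppt": 31.0},
    {"well": "Well 2", "pfos_ppt": 55.0, "pfoa_ppt": 40.0},
]

for s in samples:
    combined = s["pfos_ppt"] + s["pfoa_ppt"]
    status = ("exceeds advisory level" if combined > HEALTH_ADVISORY_PPT
              else "below advisory level")
    print(f'{s["well"]}: combined {combined:.0f} ppt -> {status}')
```

Because the advisory applies to each contaminant and to the two combined, checking the combined total also covers the individual thresholds.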
DOD Has Initiated Actions to Address Elevated Levels of PFOS and PFOA in Drinking Water and Concerns with Firefighting Foam DOD Has Initiated Actions to Identify, Test, Address, and Respond to Orders from Regulators Regarding PFOS and PFOA in Drinking Water We reported in October 2017 that, following the release of EPA’s lifetime health advisory for PFOS and PFOA in May 2016, each of the military departments directed their installations to identify locations with any known or suspected prior release of PFOS and PFOA and to address any releases that pose a risk to human health—which can include people living outside DOD installations; and test for PFOS and PFOA in their drinking water and address any contamination above EPA’s lifetime health advisory level. We further reported that, as of December 2016, DOD had identified 393 active or closed military installations with any known or suspected releases of PFOS or PFOA. Since we issued our report, DOD has updated that number to 401 active or closed installations, according to August 2017 data provided in a March 2018 report to Congress on the department’s response to PFOS and PFOA contamination. We stated in our October 2017 report that the military departments had reported spending approximately $200 million at or near 263 installations for environmental investigations and response actions, such as installing treatment systems or supplying bottled water, as of December 2016. The Air Force had identified 203 installations with known or suspected releases of PFOS and PFOA and had spent about $153 million on environmental investigations and response actions (accounting for about 77 percent of what the military departments had spent on PFOS and PFOA activities as of December 2016). For example, the Air Force reported spending over $5 million at Peterson Air Force Base in Colorado. During our visit to that installation in November 2016, officials showed us the current and former fire training areas that they were investigating to determine the extent to which prior use of firefighting foam may have contributed to PFOS and PFOA found in the drinking water of three nearby communities. Additionally, the Air Force had awarded a contract for, among other things, installing treatment systems in those communities. The Navy had identified 127 installations with known or suspected releases of PFOS and PFOA and had spent about $44.5 million on environmental investigations and response actions (accounting for about 22 percent of what the military departments had spent on PFOS and PFOA activities as of December 2016). For example, the Navy reported spending about $15 million at the former Naval Air Station Joint Reserve Base Willow Grove in Pennsylvania. During our visit to that installation in August 2016, officials told us that the Navy was investigating the extent to which PFOS and PFOA on the installation may have contaminated a nearby town’s drinking water. At the time, the Navy had agreed to pay for installing treatment systems and connecting private well owners to the town’s drinking water system, among other things. The Army had identified 61 installations with known or suspected releases of PFOS and PFOA and had spent about $1.6 million on environmental investigations (accounting for less than 1 percent of what the military departments had spent on PFOS and PFOA activities as of December 2016), but had not yet begun any response actions. 
At the time of our October 2017 report, the Army had not yet completed testing its drinking water for PFOS and PFOA. DOD's March 2018 report to Congress provided updated information on actions taken (such as providing alternative drinking water or installing treatment systems) to address PFOS and PFOA in drinking water at or near military installations in the United States, as shown in figure 1 below. Specifically, DOD reported taking action as of August 2017 to address PFOS and PFOA levels exceeding those recommended in EPA's health advisories for drinking water for people (1) on 13 military installations and (2) outside 22 military installations. We reported in October 2017 that, in addition to actions initiated by DOD, the department also took action in response to state and federal regulators. DOD responded to four administrative orders requiring that DOD address PFOS and PFOA levels that exceeded EPA's health advisory levels for drinking water. One order was issued by the Ohio Environmental Protection Agency at Wright-Patterson Air Force Base in Ohio, and three orders were issued by EPA at the former Pease Air Force Base in New Hampshire; Horsham Air Guard Station in Pennsylvania; and the former Naval Air Warfare Center Warminster in Pennsylvania. For example, at Wright-Patterson Air Force Base, levels of PFOS and PFOA that exceeded EPA's lifetime health advisory levels were found at two wells on the installation in 2016. In response to the order from the Ohio Environmental Protection Agency, the Air Force closed drinking water wells, installed new monitoring wells, and provided bottled water to vulnerable populations on the installation. Additional details on each order and examples of actions by DOD to address the orders are provided in our October 2017 report. According to DOD, it may take several years for the department to determine how much it will cost to clean up PFOS and PFOA contamination at or near its military installations. Additionally, DOD officials told us in September 2018 that they believe a legally enforceable EPA drinking water cleanup standard would ensure greater consistency and confidence in their cost estimates because such a standard would give them a consistent target to clean up to. In a January 2017 report on environmental cleanup at closed installations, we recommended that DOD include in future annual reports to Congress best estimates of the environmental cleanup costs for contaminants such as PFOS and PFOA as additional information becomes available. DOD implemented this recommendation by including in its fiscal year 2016 environmental report to Congress (issued in June 2018) an estimate of the costs to respond to PFOS and PFOA. DOD Has Taken Steps to Address Health and Environmental Concerns with Its Firefighting Foam In our October 2017 report, we found that DOD was taking steps to address health and environmental concerns with its use of firefighting foam that contains PFAS. These steps included restricting the use of existing foams that contain PFAS, testing DOD's current foams to identify the amount of PFAS they contain, and funding research into the future development of PFAS-free foam that can meet DOD's performance and compatibility requirements (see table 1). Some of these steps, such as limiting the use of firefighting foam containing PFAS, were in place. Others, such as researching potential PFAS-free firefighting foams, were in progress at the time of our review.
DOD’s military specification for firefighting foam, which outlines performance and compatibility requirements, also requires that firefighting foam purchased by the department contain PFAS. We reported in October 2017 that, according to DOD, there was no PFAS-free firefighting foam that could meet DOD’s performance and compatibility requirements. As a result, the Navy—which is the author of the military specification— had no plans to remove the requirement for firefighting foam to contain PFAS. However, Navy officials told us during our review that if a PFAS- free foam were to be developed that could meet DOD performance and compatibility requirements the Navy would make any necessary revisions to the military specification at that time. Navy officials also said during our review that they were planning to revise the military specification to set limits for the amount of PFAS that are allowed in firefighting foam, following their testing on the amounts of PFOS, PFOA, and other PFAS found in foam used by DOD. In June 2018, DOD reported to Congress that its military specification for firefighting foam was amended to set a maximum level of PFOS and PFOA (800 parts per billion). DOD officials told us in September 2018 this maximum level applies to the amount of those chemicals in firefighting foam concentrate before it is mixed and diluted with water to create firefighting foam. The DOD officials also said that 800 parts per billion is the lowest level of PFOS and PFOA that can be detected in firefighting foam concentrate by current testing methods and technologies, but DOD is working with foam manufacturers and laboratories to achieve lower detection limits. According to the June 2018 report, DOD plans to establish lower limits for PFOS and PFOA in firefighting foam in late 2018. The June 2018 report reiterated that, according to DOD, no commercially available PFAS-free foam has met the performance requirements of the military specification, and the report also stated that DOD-funded research efforts to develop a PFAS-free foam that can meet performance requirements are still ongoing. Chairman Paul, Ranking Member Peters, and Members of the Subcommittee, this completes our prepared statement. We would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this report, please contact us at Brian J. Lepore, (202) 512-4523 or leporeb@gao.gov or J. Alfredo Gómez, (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this statement include Maria Storts (Assistant Director), Diane B. Raynes (Assistant Director), Michele Fejfar, Karen Howard, Richard P. Johnson, Mae Jones, Amie Lesser, Summer Lingard-Smith, Felicia Lopez, and Geoffrey Peck. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study According to health experts, exposure to elevated levels of PFOS and PFOA could cause increased cancer risk and other health issues in humans. DOD has used firefighting foam containing PFOS, PFOA, and other PFAS since the 1970s to quickly extinguish fires and ensure they do not reignite. EPA has found elevated levels of PFOS and PFOA in drinking water across the United States, including in drinking water at or near DOD installations. This statement provides information on actions DOD has taken to address elevated levels of PFOS and PFOA in drinking water at or near military installations and to address concerns with firefighting foam. This statement is largely based on a GAO report issued in October 2017 (GAO-18-78). To perform the review for that report, GAO reviewed DOD policies and guidance related to PFOS and PFOA and firefighting foam, analyzed DOD data on testing and response activities for PFOS and PFOA, reviewed the four administrative orders issued by EPA and state regulators to DOD on addressing PFOS and PFOA in drinking water, visited seven installations, and interviewed DOD and EPA officials. This statement also includes updated information based on two 2018 DOD reports to Congress—one on PFOS and PFOA response and one on firefighting foam—as well as discussions with DOD officials. What GAO Found GAO reported in October 2017 that the Department of Defense (DOD) had initiated actions to address elevated levels of perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA) in drinking water at or near military installations. PFOS and PFOA are part of a larger class of chemicals called per- and polyfluoroalkyl substances (PFAS), which can be found in firefighting foam used by DOD. In May 2016, the Environmental Protection Agency (EPA) issued nonenforceable drinking water health advisories for those two chemicals. Health advisories include recommended levels of contaminants that can be present in drinking water at which adverse health effects are not anticipated to occur over specific exposure durations. In response to those health advisories, DOD's military departments directed their military installations to (1) identify locations with a known or suspected release of PFOS and PFOA and address any releases that pose a risk to human health, which can include people living outside DOD installations, and (2) test for PFOS and PFOA in installation drinking water and address any contamination above the levels in EPA's health advisories. For example: As of August 2017, DOD had identified 401 active or closed military installations with known or suspected releases of PFOS or PFOA. The military departments had reported spending approximately $200 million at or near 263 installations for environmental investigations and responses related to PFOS and PFOA, as of December 2016. According to DOD, it may take several years for the department to determine how much it will cost to clean up PFOS and PFOA contamination at or near its military installations. DOD reported taking actions (such as providing alternative drinking water and installing treatment systems) as of August 2017 to address PFOS and PFOA levels exceeding those recommended in EPA's health advisories for drinking water for people (1) on 13 military installations in the United States and (2) outside 22 military installations in the United States.
In addition to actions initiated by DOD, GAO reported in October 2017 that the department also had received and responded to four orders from EPA and state regulators that required DOD to address PFOS and PFOA levels that exceeded EPA's health advisory levels for drinking water at or near four installations. GAO also reported in October 2017 that DOD was taking steps to address health and environmental concerns with its use of firefighting foam that contains PFAS. These steps included restricting the use of existing foams that contain PFAS; testing foams to identify the amount of PFAS they contain; and funding research on developing PFAS-free foam that can meet DOD's performance requirements, which specify how long it should take for foam to extinguish a fire and keep it from reigniting. In a June 2018 report to Congress, DOD stated that no commercially available PFAS-free foam has met DOD's performance requirements and that research to develop such a PFAS-free foam is ongoing.
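As context for the amended military specification discussed in the statement above—a maximum of 800 parts per billion of PFOS and PFOA in firefighting foam concentrate—the following is a minimal screening sketch. It assumes the limit is applied to each chemical in the concentrate as tested; the lot names and results are hypothetical.

```python
# Screen foam-concentrate test results against the amended military
# specification's 800 parts-per-billion (ppb) maximum for PFOS and
# PFOA. Per the report, 800 ppb is also the current detection limit,
# so results below it may simply be non-detects. Data are hypothetical.
SPEC_LIMIT_PPB = 800.0

lots = {
    "Lot A": {"pfos_ppb": 650.0, "pfoa_ppb": 700.0},
    "Lot B": {"pfos_ppb": 900.0, "pfoa_ppb": 400.0},
}

for lot, result in lots.items():
    worst = max(result.values())
    verdict = "within spec" if worst <= SPEC_LIMIT_PPB else "out of spec"
    print(f"{lot}: highest analyte {worst:.0f} ppb -> {verdict}")
```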
Background The U.S. pipeline network includes both interstate and intrastate pipelines, the vast majority of which fall into the latter category: Interstate pipelines: Interstate pipelines are primarily large-volume transmission pipelines that carry gas or hazardous liquid—sometimes over hundreds of miles—to communities and large-volume users (e.g., factories). At the start of 2017, there were about 340,000 miles of interstate transmission pipelines nationwide. Newly tapped domestic gas and oil deposits have led to an expansion of the pipeline infrastructure that transports natural gas and oil. Intrastate pipelines: Intrastate pipelines are primarily composed of gas distribution and some transmission pipelines that transport natural gas to residential, commercial, and industrial customers. As of 2015, there were about 2.2 million miles of distribution pipelines nationwide. In addition, an estimated 18,000 miles of federally regulated gathering pipelines carry natural gas or hazardous liquids from production areas to processing facilities where the product is refined before continuing in transmission pipelines. At the federal level, PHMSA is responsible for developing regulations for domestic interstate and intrastate natural gas and hazardous liquid pipelines. Its regulatory programs are focused on ensuring safety in the design, construction, operation, and maintenance of pipelines. Inspectors from PHMSA's five regional offices and states are responsible for inspecting nearly 3,000 companies that operate 2.7 million miles of pipelines. Each year, PHMSA uses its Risk Ranking Index Model (RRIM) as one input to determine its annual inspection priorities. RRIM categorizes each of the nation's pipeline systems regulated by PHMSA into high, medium, and low-risk tiers. Pipeline risk tiers are based on a combination of categories, such as the type of pipeline material and time since last inspection. PHMSA's guidance specifies that high-risk pipelines should be inspected at least once every 3 years, medium-risk pipelines every 5 years, and low-risk pipelines every 7 years. PHMSA's goal each year is to inspect, at a minimum, pipeline systems where the time since last inspection meets or exceeds the PHMSA guidance for the tier (this due-date logic is illustrated in a sketch below). Under federal pipeline safety laws, states may assume inspection and enforcement responsibilities for intrastate gas and hazardous liquid pipelines, which are primarily natural gas distribution pipelines. States assume that responsibility by annually certifying their state pipeline safety program to PHMSA, which PHMSA must validate. As part of a state's certification, states must establish pipeline laws similar to federal pipeline safety regulations for intrastate pipelines, but may also impose more stringent pipeline safety regulations. PHMSA reimburses certified state agencies up to 80 percent of the total cost of operating their pipeline safety program through an annual grant. PHMSA may permit certified states to participate in interstate inspections through three types of agreements (see fig. 1): Interstate agent agreement: At PHMSA's discretion, certified states may enter into an interstate agent agreement for either their natural gas program, hazardous liquid program, or both on an annual basis. As of April 2018, nine state pipeline agencies hold these agreements.
On PHMSA’s behalf, these agencies assume inspection responsibilities for the range of interstate inspection activities, as agreed upon by PHMSA and prioritized by PHMSA during the agency’s annual inspection planning process. States may also propose and conduct additional inspections as they believe necessary. While state inspectors can identify violations, PHMSA is ultimately responsible for enforcement of interstate pipeline regulations and uses a range of enforcement tools from Warning Letters to more stringent Notices of Probable Violation with either proposed compliance orders or proposed civil penalties. Temporary interstate agreement: These agreements allow PHMSA to request a state that has had its certification validated by PHMSA to perform interstate pipeline inspections on a temporary basis. According to PHMSA guidelines, these agreements are used typically for new construction inspections, but may include assistance such as inspection of specific operators, witness to repairs or testing, or investigation of incidents. Since 2010, PHMSA has entered into temporary interstate agreements with six states. Joint inspection: The Pipes Act of 2016 included a requirement for PHMSA to allow certified states to participate in the inspection of an interstate pipeline safety facility, if requested by the state pipeline safety agency. As of April, 2018, no states have requested to participate in joint inspections. State Involvement in Interstate Pipeline Inspections, While Not Extensive, Can Enhance Oversight Activities Interstate Agent Agreements Can Bolster Oversight in Participating States According to PHMSA regional officials we met with, interstate agents conduct high-quality inspections of interstate pipelines and provide an important supplement to the federal inspection workforce. PHMSA regional officials generally agreed that interstate agents have well-trained staff and leverage their local knowledge to enhance interstate pipeline inspections within their state. Additionally, interstate agents, if authorized by PHMSA, may conduct inspections of interstate pipelines within their state more frequently than PHMSA. For instance, officials in one PHMSA region noted that an interstate agent in their jurisdiction ensured each interstate operator was inspected once every 2 years, regardless of PHMSA’s risk ranking. Similarly, in two of 5 regions that have interstate agents, PHMSA regional officials stated that they needed interstate agents to supplement their current allocation of federal inspectors. For instance, in one region, PHMSA officials said that if interstate agent agreements were discontinued, the region would need to hire 3 to 4 additional inspectors. In another region, officials said that interstate agents provided the equivalent of 5 to 10 additional inspectors. Officials in one PHMSA region said that, although the region could absorb the interstate agent workload if needed, doing so would lead to less extensive inspections because there would more pipelines to inspect with fewer federal inspectors. Interstate agents may also enhance pipeline safety oversight within their state by going above and beyond the annual interstate inspection activities required under their agreement with PHMSA. Specifically, as part of the annual inspection planning process, PHMSA’s regional offices work with interstate agents to develop an annual inspection plan. 
While interstate agents must give first priority to PHMSA's inspection priorities, such as participation in new construction inspections and PHMSA-led systems inspections, they can also propose additional inspections of interstate pipelines within their state. Officials in half of the nine states with interstate agent agreements stated that they proposed and obtained PHMSA's approval for additional interstate pipeline inspections that would not otherwise have been included in PHMSA's annual inspection plan. For instance, PHMSA's Western Region reported that between January 1, 2015, and December 31, 2016, Washington State's pipeline safety agency—which holds an interstate agent agreement—proposed and conducted 13 inspections beyond those identified in PHMSA's inspection plans. During these additional inspections conducted by interstate agents, state officials have identified violations of pipeline safety regulations. Some violations, including the four illustrative examples below, were deemed serious enough that PHMSA imposed civil penalties. In 2015, the Connecticut Department of Energy and Environmental Protection inspected an interstate pipeline that traverses the state. During the inspection, Connecticut inspectors found the pipeline operator had failed to employ properly qualified welders in constructing a section of the pipeline. As a result, PHMSA issued a civil penalty of $26,200 to the pipeline operator. In response to the findings, the operator ensured its welders were properly qualified and replaced the 14 welds completed by improperly qualified welders. In 2014, the New York Department of Public Service's Pipeline Division inspected an interstate pipeline that traverses the state. During that inspection, New York inspectors identified violations related to the operator's corrosion-control practices. Inspectors also found that the operator failed to prepare, and follow, a manual for conducting operations and maintenance activities, as well as for emergency response. As a result, PHMSA issued a civil penalty of $61,900. In response to the findings, the operator took action to address the corrosion control-related violations and revised its operations and maintenance manual. In 2011, the New York Department of Public Service's Pipeline Division inspected an interstate pipeline that traverses the state. During that inspection, a New York inspector identified violations related to corrosion-control practices. As a result, PHMSA issued a civil penalty of $78,900. PHMSA also issued a Compliance Order, requiring the operator to remediate the identified violations, or face an additional civil penalty. In 2014, Arizona's Corporation Commission's Pipeline Safety Section inspected two interstate gas transmission lines that traverse the state. During the inspection, PHMSA and Arizona inspectors found that the operator had committed probable violations by not properly odorizing its pipeline, and providing insufficient information to the public about its pipeline odorization methods. As a result, PHMSA issued a Notice of Probable Violation, proposed civil penalties totaling $162,700, and issued a Proposed Compliance Order. Although state involvement in interstate inspections can enhance oversight, officials from almost all of our selected states that do not currently have an interstate agent agreement expressed little interest in pursuing such an agreement. Specifically, some officials we spoke with plan to focus their limited resources on intrastate pipeline safety oversight activities.
For example, although Texas has over 50,000 miles of interstate pipeline, officials in that state have focused exclusively on intrastate inspection activity, citing the heavy workload of their inspection staff, as well as challenges in recruiting and retaining additional inspectors. In another instance, California's state pipeline safety agency responsible for hazardous liquid oversight voluntarily withdrew from the interstate agent program in 2013, citing staffing shortages stemming from a difficult economic climate. Although PHMSA's current policy stance does not prohibit the agency from entering into a formal interstate agent agreement if the circumstances warrant, the agency prefers that state agencies enter into temporary interstate agreements. PHMSA officials explained that, historically, PHMSA has used interstate agents to supplement federal inspection resources and that the current nine interstate agents supplement the federal workforce by approximately 10–15 inspectors. PHMSA officials stated that they do not intend to discontinue current interstate agent agreements, but due in part to a recent staff increase the agency has sufficient staff to meet its inspection needs without adding additional interstate agents. PHMSA officials also told us that intrastate pipelines pose the highest safety risk to states and, consequently, state pipeline safety agencies should focus their efforts on intrastate pipeline oversight rather than participating in interstate pipeline inspections. During the last 7 years, four states that applied for an agent agreement—New Hampshire, Virginia, Maryland, and Nevada—were not accepted by PHMSA for these reasons. (See app. I.) In 2013, PHMSA decided not to renew another state pipeline safety agency's interstate agent agreement, citing the state agency's inability to staff its program properly, among other things. PHMSA's Other Means of State Participation in Interstate Inspections Have Not Been Used Extensively Temporary Interstate Agreements While temporary interstate agreements provide an opportunity to participate in interstate pipeline oversight, officials from some state agencies told us that the agreement's limited scope and ad hoc nature can create obstacles to state participation. For instance, in states without an interstate agent agreement, state inspectors' day-to-day work focuses exclusively on intrastate pipeline oversight activities. In the event PHMSA requested assistance with certain interstate inspections, state inspectors may be unfamiliar with the interstate pipeline systems and operators. As a result, some state officials said that their inspectors may have a steep learning curve when conducting inspections under a temporary interstate agreement. However, PHMSA officials disagreed that most states would have such a steep learning curve, because state inspectors currently inspect intrastate transmission pipelines and the regulations for interstate and intrastate pipelines are for the most part identical. Another obstacle some state officials identified relates to the fact that state pipeline safety agencies may not have sufficient inspection staff available, when needed, to participate in ad hoc interstate inspections. Due to the limited state role and competing priorities, state pipeline safety agencies rarely enter into temporary interstate agreements.
According to officials in five of the six states that have entered into temporary interstate agreements, the agreements were used for limited, ad hoc inspections that were initiated by PHMSA. The sixth temporary interstate agreement was initiated by PHMSA in lieu of the Virginia pipeline safety agency's 2017 application for an interstate agent agreement for natural gas. PHMSA offered to enter into a longer-term, temporary interstate agreement, which would permit the state agency to inspect the installation of two large interstate pipeline systems. The state agency accepted the temporary interstate agreement, which may be extended annually until the completion of the pipeline construction. To meet its new interstate inspection obligations, the state agency told us it hired two additional inspectors. According to state officials, those two inspectors will be dedicated to intrastate pipeline inspection, which will allow two of the state agency's more experienced inspectors to conduct interstate pipeline inspections. Current interstate agents do not consider temporary interstate agreements to be an adequate substitute for an interstate agent agreement. According to officials we spoke with that are currently interstate agents, an interstate agent agreement allows state agencies and their inspectors to develop a strong understanding of operators and pipelines within their state. A few state officials stressed that the greatest benefit of interstate agent status was the ability to leverage their local knowledge—such as the proximity and familiarity with interstate pipelines within their states—to allow for quick responses to public concerns and pipeline incidents. PHMSA officials emphasized that temporary interstate agreements are not intended to replicate an interstate agent agreement; instead, these agreements are designed to provide PHMSA the flexibility to request targeted, short-term assistance from state pipeline safety agencies with interstate pipeline inspections. Joint Inspections Joint inspections offer states the most limited role in interstate pipeline inspections and may be entered into only if the state meets certain conditions. In response to the requirement in the PIPES Act, PHMSA created joint inspections and established certain criteria for state participation. For instance, to ensure that participation in joint inspections does not compromise intrastate pipeline safety, PHMSA only allows state inspectors to participate if the state agency has accomplished the required minimum number of inspection days during the preceding calendar year. PHMSA also requires state agencies to bear the cost of participating in joint inspections—including travel and inspection time for the state inspectors—rather than allowing states to include this activity in their annual pipeline safety program grant reimbursement. According to PHMSA officials, this requirement is designed to focus limited federal funds intended to support states' intrastate pipeline safety programs. While it is too early to know whether states will participate in joint inspections over the long term, no states have participated to date. Despite general agreement among some state pipeline safety officials that collaborating with PHMSA on interstate pipeline inspections could be beneficial, they noted that PHMSA's criteria reduce the incentive to participate. For instance, a few of the state officials we spoke to generally expressed concern over the requirement that states bear the entire cost of their participation.
Additionally, state officials perceive the current joint inspection policy as restricting state inspectors to an observer role. However, PHMSA officials we spoke with noted that the role of state inspectors can vary based on the levels of training and knowledge among state inspectors. PHMSA officials told us they intend to clarify this role for states. PHMSA Used a Regional Workload Analysis to Allocate Inspection Resources, but Has Not Assessed Future Resource Needs PHMSA Has Allocated Increased Inspection Resources Based on Regional Workload From fiscal year 2012 to 2017, PHMSA's funding increased by nearly 40 percent, allowing the agency to hire additional pipeline inspectors. Specifically, PHMSA's funding increased from $110 million in fiscal year 2012 to $154 million in fiscal year 2017. PHMSA's inspection and enforcement division received the majority of the increased funding, allowing that division to hire additional staff. From fiscal year 2012 through 2017, the number of inspectors increased by over 25 percent, from 107 to 147 across the five PHMSA regions. (See fig. 2.) In recent years, PHMSA has improved its analysis of the number of pipeline inspectors needed to address the inspection workload in each region. Before 2014, PHMSA allocated inspectors evenly across the agency's five regions. Since 2014, PHMSA has used a regional workload analysis to allocate its interstate inspectors. Unlike the previous analysis, the regional workload analysis takes into account federal inspector workload, pipeline construction, and the amount of pipeline mileage in areas where the consequences of an accident are greater (such as populated and environmentally sensitive areas) to help ensure that PHMSA has appropriate resources in each region. For example, PHMSA's central region received a greater percentage of inspectors than most other regions to help oversee a number of new pipeline construction projects. (See table 2.) According to PHMSA officials, the regional workload analysis has resulted in a better match between workforce staffing and needs. PHMSA Lacks an Inspection Workforce Plan That Assesses Future Resource Needs for Interstate Pipeline Inspections While PHMSA has improved how it allocates its current inspection staff among the regions, the agency lacks a forward-looking workforce plan for interstate pipeline inspections. Workforce planning helps agencies take a strategic, forward-looking approach to put the right people with the right skills in the right places at the right time. We have previously identified leading practices for effective strategic workforce planning. These approaches may vary with each agency's particular needs and mission, but share certain principles. These may include: identifying skills and competencies to fill critical workforce gaps and the strategies needed to recruit them; developing specific strategies that are tailored to address gaps in number, deployment, and alignment of human capital; and monitoring and evaluating the agency's progress toward its human capital goals. However, PHMSA has not developed a plan that systematically identifies the anticipated interstate pipeline inspection workload or the number of inspection staff needed to meet that workload. In light of the diminishing role that interstate agents currently provide in bolstering PHMSA's inspection workforce, a plan for conducting future interstate pipeline inspections should also account for the reduction in resources and expertise state inspectors can potentially provide.
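One way to picture the workload-based allocation described above is as a proportional split of the inspector workforce across composite regional scores. The sketch below is only illustrative: the factor weights and regional scores are hypothetical and do not reproduce PHMSA's actual model; only the inspector total (147) comes from the figures above.

```python
# Illustrative proportional allocation of inspectors across regions
# based on a weighted composite of workload factors. Weights and
# regional scores are hypothetical, not PHMSA's actual model.
total_inspectors = 147

weights = {"workload": 0.5, "construction": 0.3, "high_consequence_miles": 0.2}

regions = {
    "Eastern":   {"workload": 60, "construction": 20, "high_consequence_miles": 40},
    "Central":   {"workload": 70, "construction": 50, "high_consequence_miles": 30},
    "Southern":  {"workload": 65, "construction": 25, "high_consequence_miles": 35},
    "Southwest": {"workload": 55, "construction": 30, "high_consequence_miles": 25},
    "Western":   {"workload": 50, "construction": 15, "high_consequence_miles": 45},
}

scores = {name: sum(weights[f] * v for f, v in factors.items())
          for name, factors in regions.items()}
total_score = sum(scores.values())

for name, score in scores.items():
    share = score / total_score
    print(f"{name}: score {score:.1f} -> ~{round(total_inspectors * share)} inspectors")
# Rounding means the printed allocations may not sum exactly to 147.
```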
According to PHMSA officials, they have not developed a workforce plan for interstate pipeline inspections because the agency's focus has been on allocating and training the recently hired inspectors and ensuring that pipeline inspections are completed. Further, the lack of an inspector workforce plan may be symptomatic of a wider-ranging workforce planning issue. A November 2017 DOT Inspector General (IG) report found that PHMSA had not developed a comprehensive workforce plan since 2005 and recommended that PHMSA develop such a plan. PHMSA agreed with the recommendation and anticipates completing the plan by the end of December 2018. Of note, PHMSA's 2005 workforce plan did not include an analysis of federal and state inspectors needed for interstate pipeline inspections. In the absence of a workforce plan for interstate inspections, PHMSA cannot proactively plan for future inspection needs to ensure that federal and state resources are in place to provide effective pipeline oversight. Conclusions PHMSA has an important role in overseeing interstate pipelines and operators to ensure pipeline safety, and the agency's partnership with interstate agents has proven beneficial in fulfilling that role. Recent increases in funding have allowed PHMSA to increase its own inspection workforce and reduce its reliance on state agents. However, the agency does not have an inspection workforce plan to ensure that it is making the correct decisions regarding its mix of federal inspectors versus state resources. Therefore, it does not have reasonable assurance that it will be able to provide adequate oversight of interstate pipelines going forward. Recommendation PHMSA should develop a workforce plan for interstate pipeline inspections that is consistent with leading practices in workforce planning, which should include a consideration of the additional resources and safety oversight that state pipeline officials can provide. (Recommendation 1) Agency Comments We provided DOT with a draft of this report for review and comment. In its comments, reproduced in appendix II, the Department of Transportation concurred with our recommendation. The Department of Transportation also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Transportation, and other interested parties. In addition, this report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: States That Have Applied and Have Not Been Accepted for Interstate Agent Status In the past 7 years, four additional state pipeline safety agencies have applied for interstate agent agreements: New Hampshire: In 2014, the state legislature passed a law requiring the state's pipeline safety agency to apply for interstate agent status on an annual basis. State pipeline safety officials cited New Hampshire inspectors' local knowledge of interstate pipelines, as well as concerns over the frequency of PHMSA's interstate pipeline inspection activity, as reasons for seeking an agreement.
To date, PHMSA has not accepted the state agency's annual applications for interstate agent status, citing an increase in the federal inspection workforce, a preference for states to focus on intrastate pipeline oversight, and the ability for state agencies to participate in interstate inspections through other means, such as temporary interstate agreements. Virginia: In 2016, the Virginia General Assembly passed legislation requiring the state pipeline safety agency to apply for interstate agent status for natural gas. The state agency applied the following year, citing the need to conduct construction inspections of the Virginia section of two large interstate natural gas transmission pipelines. PHMSA did not accept the state agency's application, citing increasing federal inspection resources as well as the agency's lack of full authority over its intrastate gas operators. Instead, PHMSA provided the state agency a temporary interstate agreement, renewable on an annual basis, to conduct the desired inspections. Maryland: Maryland's pipeline safety agency applied for interstate agent status in 2014 in response to public concern over proposed construction of a new interstate pipeline. PHMSA did not accept the agency's application for interstate agent status, citing an increase in federal resources and PHMSA's preference that the state agency focus its inspection efforts on intrastate pipelines. According to state agency officials, public interest has waned and the state has no plans to reapply. Nevada: Nevada's pipeline safety agency applied for interstate agent status in 2011. According to state pipeline safety officials, they did so to help retain staff, rather than as a result of pipeline safety concerns. PHMSA did not accept the agency's request, citing a preference only to enter into new interstate agreements when additional state support was needed, as well as the preference for states to focus on intrastate pipeline facilities. According to state officials, they do not plan to reapply. Appendix II: Comments from the Department of Transportation Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Sara Vermillion (Assistant Director), Nick Nadarski (Analyst-in-Charge), Mike Duane, David Hooper, Delwen Jones, Malika Rice, and Kelly Rubin made key contributions to this report.
Why GAO Did This Study PHMSA oversees the safety of interstate and intrastate natural gas and hazardous liquid pipelines. PHMSA certifies states to oversee intrastate pipelines, and some states also act as PHMSA's “agents” to supplement the federal inspection workforce for interstate pipelines. In recent years PHMSA has signaled a move away from using interstate agent agreements. Recent funding increases have enabled PHMSA to hire additional federal inspectors. States may receive annual grants to reimburse up to 80 percent of the cost of their pipeline safety activities. Congress included a provision in statute for GAO to review the federal and state responsibilities and resources used to inspect interstate pipelines. This report addresses (1) how state participation has affected interstate pipeline oversight and (2) PHMSA's assessment of the resources needed to conduct interstate pipeline inspections. GAO reviewed relevant laws and PHMSA guidance on state participation in these inspections; analyzed the most recent 6 years of PHMSA funding and inspector staffing data; and interviewed pipeline safety officials from PHMSA and 22 states selected based on level of participation in interstate inspections. What GAO Found State involvement in interstate pipeline inspections can enhance oversight, although the three types of agreements that the Pipeline and Hazardous Materials Safety Administration (PHMSA) uses to allow state participation are not used extensively. Annual interstate agent agreements—held by 9 states—allow states to participate in all inspection activities and can bolster interstate pipeline oversight. For instance, an inspection conducted in 2014 by New York state officials led to $61,900 in federal civil penalties. Temporary interstate agreements—used in 6 states to date—allow PHMSA to request states to participate in specific interstate pipeline inspections. PHMSA officials said these agreements provide the agency greater flexibility. Some current interstate agents GAO interviewed said that temporary interstate agreements are useful, but are not substitutes for interstate agent status because states do not participate in the full range of inspections. Finally, PHMSA, as authorized by federal law, recently established joint inspections, allowing states to request to participate in interstate inspections. However, state officials were concerned that their role is limited and that they must bear the full cost to participate. PHMSA officials said they intend to clarify the state inspector role in joint inspections and acknowledged that federal grants cannot be used by states to support joint inspection activities. PHMSA allocated recently hired inspectors based on regional workload, but has not assessed future resource needs. From fiscal years 2012 to 2017, PHMSA's appropriations increased over 40 percent, allowing the agency to expand its inspector workforce by about 25 percent. PHMSA allocated the additional inspectors across the agency's five regions based on workload. For example, PHMSA's central region received a greater percentage of inspectors than other regions to help oversee a number of new pipeline construction projects. However, PHMSA has not planned for future workforce needs for interstate pipeline inspections. In particular, it has not assessed the resources and benefits that states can provide through the three types of agreements. Leading practices for workforce planning indicate that such forward-looking analyses are essential for effective workforce planning.
Without such analyses, PHMSA cannot proactively plan for future inspection needs to ensure that federal and state resources are in place to provide effective oversight of interstate pipelines. What GAO Recommends PHMSA should develop a workforce plan for interstate pipeline inspections, considering, among other things, the additional resources and safety oversight that state pipeline officials can provide. DOT concurred with GAO's recommendation and provided technical comments, which were incorporated as appropriate.
Background

Export credit agencies such as the Bank are usually government agencies, although some private institutions operate export credit programs on their respective governments' behalf, according to a Bank report on global export credit competition. These agencies offer financing for domestic companies to make sales to foreign buyers, in the form of products such as loans, guarantees, and insurance for exporters, according to the Organisation for Economic Co-operation and Development, which monitors international export credit activity. The Bank is one of several federal agencies promoting U.S. exports. According to the Bank, as of December 31, 2016, it had identified 96 export credit agencies worldwide.

There have been significant changes in the role of export credit agencies since 2007 and the global financial crisis and the European debt crisis, according to the Bank. This is because the ready access to credit that prevailed before the global financial crisis has given way to caution in lending among private-sector banks, and because other nations have adopted export credit agencies as a tool for national growth. For fiscal year 2014—which the Bank says is the most recent year in which it operated with full authority—the Bank reported authorizing nearly $20.5 billion in financing in support of an estimated $27.5 billion worth of U.S. exports and nearly 165,000 American jobs. For fiscal year 2017, operating under reduced authority, the Bank reported authorizing more than $3.4 billion in financing to support $7.4 billion of exports and an estimated 40,000 jobs.

The Bank, which has about 430 employees, was established under the Export-Import Bank Act of 1945. Under the act, the Bank must have a "reasonable assurance" of repayment when providing financing; it must supplement, and not compete with, private capital; and it must provide terms that are competitive with those of foreign export credit agencies. Also relevant to whether the Bank provides assistance is whether the U.S. exporter's foreign competitors are receiving export credit assistance from their home nations, such that the American exporter would need assistance to stay competitive. Over time, Congress has directed the Bank to support certain specific types of exports. Such requirements include using at least 25 percent of its authority to finance small-business exports, promoting exports related to renewable energy sources, and promoting financing for sub-Saharan Africa.

Bank Product Types

As described in figure 1, to support U.S. exports, the Bank offers four major types of financing: direct loans, loan guarantees, export-credit insurance, and working capital guarantees. Bank products generally have three maturity periods: short-term transactions are for less than 1 year; medium-term transactions are from 1 to 7 years; and long-term transactions are for more than 7 years.

For fiscal year 2017, the Bank reported it had exposure in 166 countries. Figure 2 shows Bank exposure by product type, geographic region, and economic sector for fiscal year 2017. Its greatest exposure by product type was in loan guarantees. By geographic region, the largest exposure was the Asian market. By economic sector, exposure was largest in aircraft products. Because the Bank's mission is to support U.S. jobs through exports, there are foreign-content eligibility criteria and limitations on the level of foreign content that may be included in a Bank financing package.
For medium- and long-term transactions, for example, the Bank limits its support to 85 percent of the value of goods and services in a U.S. supply contract, or 100 percent of the U.S. content of an export contract, whichever is less. There are also requirements that certain products supported by the Bank be shipped only on U.S.-flagged vessels.

Defaults occur when transaction participants fail to meet their financial obligations. The Bank must report default rates to Congress quarterly. It calculates the default rate as overdue payments divided by financing provided. If the rate is 2 percent or more for a quarter, the Bank may not exceed the amount of loans, guarantees, and insurance outstanding on the last day of that quarter until the rate falls below 2 percent. As of March 31, 2018, the Bank reported its default rate at 0.438 percent.

Bank Board of Directors and Vacancies

The Bank is overseen by a Board of Directors (the Board), which has a key role in approving Bank transactions, because directors must approve medium- and long-term transactions of greater than $10 million. Since July 2015, however, the Board has lacked a quorum (at least three members), which has precluded approval of these large transactions. Also due to the lack of a quorum, new transaction activity has shifted away from larger transactions, according to Bank managers. The Bank's total exposure has recently declined by about a third, from $113.8 billion at the end of fiscal year 2013 to $72.5 billion at the close of fiscal year 2017, according to the Bank.

In part during the period when the Board has lacked a quorum and been unable to approve large transactions, the amount of earnings the Bank has transferred to the Department of the Treasury has declined steadily, according to Bank figures. Since 2012, that amount peaked at $1.1 billion in fiscal year 2013. In successive years, the transfer fell to $674.7 million in fiscal year 2014, $431.6 million in fiscal year 2015, and $283.9 million in fiscal year 2016, before reaching zero in fiscal year 2017. As the Board vacancies have continued, a backlog of Board-level transactions has grown, reaching an estimated $42.2 billion as of December 2017.

The Board also has a key role in risk management, with members serving on the Bank's Risk Management Committee, which oversees portfolio stress testing and risk exposure, according to the Bank. Board members also approve the appointment of the chief risk officer (CRO), the chief ethics officer, and members of advisory committees.

During the course of our review, in addition to the Board quorum issue, Bank senior leadership changed. According to the Bank, the following took place: the acting chairman of the Board and president of the Bank resigned; the vice chairman, first vice president, and acting agency head also later resigned; subsequently, a new executive vice president, chief operating officer, and acting agency head was named; and, following that, an acting president and Board chairman was named.

Fraud Risk Management Standards and Guidance

Fraud and "fraud risk" are distinct concepts. Fraud—obtaining something of value through willful misrepresentation—is challenging to detect because of its deceptive nature. Fraud risk exists when individuals have an opportunity to engage in fraudulent activity, have an incentive or are under pressure to commit fraud, or are able to rationalize committing fraud. When fraud risks can be identified and mitigated, fraud may be less likely to occur.
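The foreign-content limit and the quarterly default-rate test described earlier in this Background section reduce to simple arithmetic. The following minimal sketch illustrates both rules; it is not Bank code, and the function names and dollar figures are hypothetical, chosen only to mirror the 85 percent, 2 percent, and 0.438 percent figures above.

```python
# Minimal illustrative sketch -- not Bank code. All figures are hypothetical.

def max_support(contract_value: float, us_content: float) -> float:
    """Lesser of 85 percent of the U.S. supply contract value or
    100 percent of the U.S. content of the export contract."""
    return min(0.85 * contract_value, us_content)

def default_rate(overdue_payments: float, financing_provided: float) -> float:
    """Default rate as reported to Congress: overdue payments
    divided by financing provided."""
    return overdue_payments / financing_provided

# A hypothetical $10 million contract with $9 million of U.S. content:
# the 85 percent cap binds, so support is limited to $8.5 million.
print(max_support(10_000_000, 9_000_000))

# A rate below the 2 percent statutory threshold imposes no cap on the
# amount of loans, guarantees, and insurance outstanding.
rate = default_rate(438_000, 100_000_000)  # 0.438 percent, like the reported figure
print(f"{rate:.3%}", "capped" if rate >= 0.02 else "not capped")
```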
Although the occurrence of fraud indicates there is a fraud risk, a fraud risk can exist even if actual fraud has not yet been identified or occurred. According to federal standards and guidance, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks. Federal internal control standards call for agency management officials to assess the internal and external risks their entities face as they seek to achieve their objectives. The standards state that as part of this overall assessment, management should consider the potential for fraud when identifying, analyzing, and responding to risks. Risk management is a formal and disciplined practice for addressing risk and reducing it to an acceptable level.

We issued our Fraud Risk Framework in July 2015. The Fraud Risk Framework provides a comprehensive set of leading practices, arranged in four components, which serve as a guide for agency managers developing efforts to combat fraud in a strategic, risk-based manner. The Fraud Risk Framework is also aligned with Principle 8 ("Assess Fraud Risk") of the Green Book. The Fraud Risk Framework describes leading practices in four components: commit, assess, design and implement, and evaluate and adapt, as depicted in figure 3.

The Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, requires the Office of Management and Budget (OMB) to establish guidelines for federal agencies to create controls to identify and assess fraud risks, and to design and implement antifraud control activities. The act also requires OMB to incorporate the leading practices of the Fraud Risk Framework in those guidelines. In July 2016, OMB published guidance on enterprise risk management and internal controls in federal executive departments and agencies. Among other things, this guidance affirms that managers should adhere to the leading practices identified in the Fraud Risk Framework. The act also requires federal agencies to submit to Congress a progress report each year, for 3 consecutive years, on implementation of the controls established under the OMB guidelines.

The Bank Has Identified a Dedicated Entity to Lead Fraud Risk Management, but Management and Staff Disagree on Aspects of an Antifraud Culture

The Bank has identified a dedicated entity to lead fraud risk management activities, as called for in the first component of GAO's Fraud Risk Framework. In addition, employees generally have a positive view of antifraud efforts across the Bank, according to our employee survey. However, we also found that management and staff have differing views on key aspects of the Bank's antifraud culture. In particular, we identified issues inconsistent with the notion of "an antifraud tone that permeates the organizational culture," as the Fraud Risk Framework calls for, in which there is agreement across the organization on key fraud issues and practices. These areas of disagreement on aspects of the Bank's antifraud culture include how active the Bank should be in preventing, detecting, and addressing fraud, and the adequacy of time for underwriting, which the Bank says is its primary safeguard against fraud. Bank managers said that our findings provide an opportunity for additional staff training on fraud issues.

The Bank Has Identified a Dedicated Entity to Lead Fraud Risk Management Activities

The Bank has identified two managers who serve as a dedicated entity for leading fraud risk management activities, managers told us.
These are a vice president of the Credit Review and Compliance division (CRC) and an assistant general counsel in the Bank's Office of the General Counsel (OGC). According to Bank managers, they work together under the direction of the CRO, who was permanently named to the position on a part-time basis in September 2016. GAO's Fraud Risk Framework provides that the dedicated entity can be an individual or a team, depending on the needs of the agency. Hence, the Bank's arrangement is consistent with the framework.

Before recently identifying the two managers as the dedicated entity, Bank managers told us, there was no centralized entity responsible for fraud risk management. Likewise, Bank written procedures, dated February 2015, for preventing, detecting, and prosecuting fraud provided there is no "central figure in charge" of such efforts. The CRO told us that he oversees the two managers in their work as the dedicated entity.

We also found that the two managers named to form the dedicated entity are involved in one of the key activities contemplated by the Fraud Risk Framework. Overall, these activities include serving as a repository of knowledge on fraud risks and controls; leading or assisting with trainings and other fraud-awareness activities; and coordinating antifraud initiatives. The two managers have helped develop and provide training, some of which is mandatory and targeted directly at fraud issues, managers told us. The Bank provides semiannual fraud training through OGC for claims-processing staff, Bank managers also said. Other training, while nominally not directed at fraud, can nevertheless involve fraud issues, Bank managers told us. For instance, managers told us recent training on shipping matters included a review of fraudulent shipping documentation, which is one way fraud can be perpetrated.

Bank Managers and Staff Express Positive Views of Antifraud Culture, but They Hold Different Views on Key Aspects of That Culture

GAO's Fraud Risk Framework calls for creating an organizational culture to combat fraud, such as by demonstrating senior-level commitment to fighting fraud and involving all levels of the agency in setting an antifraud tone. Bank managers, in interviews, and staff, in our employee survey, generally expressed positive views of the Bank's antifraud culture. For example, according to Bank managers, the Bank has maintained an antifraud culture, which they attribute to factors including: fraud and ethics training; internal controls; the tone set at the top by management; a realization after fraud cases in the 2000s that the Bank cannot be solely reactive to fraud; and the pursuit of fraud cases by the Bank and its OIG.

Our survey results indicate that Bank employees also generally have a positive view of the antifraud tone across the Bank and the attention paid to combating fraud. For example:

• Eighty percent said Bank management in general has established a clear antifraud tone, to the extent of "a great deal" or "a lot."
• Employees said that based on senior management's actions, preventing, detecting, and addressing fraud is "extremely" or "very" important to the Bank (86 percent).
• Staff expressed "a great deal" or "a lot" of confidence in senior management (76 percent), managers in their division (85 percent), and their peers (82 percent), to respond to fraud on a timely and appropriate basis.
Illustrative Comments from GAO's Survey of Bank Employees

"The Bank has become much more sensitized to the risks of fraud over the last 10 years."

"The progress made on combating fraud is tremendous. When I started, no one really cared, and fraud was common…. Now, blatant attempts at fraud are a rarity."

"There is a high degree of concern at all levels of the Bank regarding potential fraud, which has resulted in good oversight."

We also found indications of disagreement among managers and staff about how active the Bank should be in preventing, detecting, and addressing fraud. Overall, Bank managers told us, the Bank's current approach has been appropriate for dealing with fraud. In particular, an OGC manager told us that with its underwriting and due diligence standards—the process for assessing and evaluating an application before approval—and established fraud procedures, the Bank has an appropriate strategy to mitigate fraud risks it knows about or envisions occurring. However, about one-third of survey respondents (35 percent) said the Bank should be "much more active" or "somewhat more active" in preventing, detecting, and addressing fraud. Less than half (44 percent) said the current level of activity should remain the same. Asked whether what they see as the Bank's current approach for overseeing fraud and fraud risk, based on the level of responsibilities of the various parties involved, is the most effective way to do so, about 6 in 10 (62 percent) said yes. While Bank managers characterized our survey results as positive, these divergent views indicate room for strengthening the antifraud culture, in light of the Fraud Risk Framework's goal of achieving shared views across the organization.

Illustrative Comments from GAO's Survey of Bank Employees

"The Bank should be much more active in preventing, detecting, and addressing fraud, because the Bank handles business transactions that involve taxpayers' money."

"The Bank needs more funding for technology to help with fraud prevention and additional Bank staff to spot/monitor fraud."

"The first- and second-level managers have not done all they could to ensure fraud prevention. The front-line credit officers are the ones in the best position to detect fraud and management does not always support it."

"A more proactive approach to fraud detection, rather than a reactive approach, would be more prudent. This means trying to sniff out fraud [at] the preapplication and underwriting stages."

Another area where we identified differing views is the adequacy of time for underwriting. Preapproval underwriting, and the due diligence done as part of that process, is the Bank's main control against fraud, according to Bank managers and procedures. However, during our review, Bank managers also acknowledged in interviews that their business involves potentially competing objectives: performing sufficient due diligence to prevent and detect fraud prior to approving transactions, while still processing transactions in a timely manner to meet customers' needs and achieve the Bank's mission. Some comments we received in our employee survey illustrated the tension between the competing objectives of thorough due diligence and timely processing of transactions.

Illustrative Comments from GAO's Survey of Bank Employees

"Detecting fraud is a very high priority, as is appropriate.
But overemphasis on managing that risk would lead to a sense of paranoia when approaching any new risk."

"Given all the other obligations we have, even more time spent on fraud detection means less time for other transaction-related work, with only marginal benefit."

"Risk is part of the business, and being overly cautious leads to never taking any risk and consequently not serving the customers."

"Fraud is important to discuss, but it should not become the main force driving the organization. There needs to be more of a risk-based analysis when determining how much to concentrate on fraud."

According to a Bank report on global export credit competition, transaction processing time is an important factor in customers' decisions to choose the Bank over foreign export-financing agencies. In recent years, the Bank has significantly reduced processing time. Bank statistics show that the percentage of transactions completed in 30 days or fewer grew from 57 percent in fiscal year 2009 to 91 percent in fiscal year 2016. For 100 days or fewer, the rate increased from 90 percent to 99 percent over the same period.

Bank managers told us they seek to strike the right balance between the competing objectives and believe they have done so. For example, according to the CRC division, the Bank chooses to perform some of its fraud-detection and mitigation activities after application approval—such as through reviews of transactions selected on both a random and risk-based basis—in order not to unduly delay processing applications. Under Bank practices, document review can be abbreviated, and, after underwriting approval, lenders may accept certain transaction documentation, such as invoices or shipping documents, at face value unless something appears suspicious, managers told us.

In the particular case of processing short- and medium-term transactions, the Bank is alert to "red flag" items—known warning signs, such as the use of nonbank financial institutions, or participants that are trading entities rather than original equipment manufacturers, managers told us. But otherwise, the Bank limits the extent of its application investigation, according to the Bank's OGC. In particular, as the Bank's OGC told us, the Bank is required by law to make medium-term offerings a "simple product." There is pressure both legally and commercially to process transactions quickly, because, otherwise, an exporter could lose its business opportunity, the Bank's OGC told us. In many of these transactions, both the exporter and buyer are small, the OGC also said, so it is more difficult to get information. As a result, according to the OGC, the Bank relies more on self-reporting by transaction parties. For these reasons, the Bank's OGC told us, for both short- and medium-term products, there are not as many "inherent checks and balances" in the process. We note that, based on previous GAO work, self-reporting can present an opportunity for fraud.

However, our survey results suggest that significant portions of Bank staff question whether the Bank is striking the right balance in providing sufficient time for preapproval review of transactions. Specifically, Bank staff raised concerns about the amount of time dedicated to the key task of preapproval review of applications. For each of the Bank's three major product maturity categories, we asked whether the application process provides enough time for Bank staff to conduct thorough due diligence on potential fraud risks.
For short-term products—which Bank managers said, as a category in general, have been the most susceptible to fraud recently—less than half (47 percent) said there is "always" or "usually" enough time, and about 20 percent said there is "sometimes," "seldom," or "never" enough time. For both medium- and long-term products, about 6 in 10 (56 percent and 61 percent, respectively) said the application process "always" or "usually" provides enough time. As noted, while Bank managers characterized our survey results as positive, these views indicate an opportunity for the Bank to further set an antifraud tone that permeates the organizational culture.

Illustrative Comments from GAO's Survey of Bank Employees

"More due diligence should be required in order to qualify for the U.S. government's support."

"The Bank is more concerned with increasing sales than preventing fraud."

Our survey also identified that while nearly half (48 percent) of respondents rated fraud as a "very significant" or "significant" risk to the Bank, there may be misunderstanding among employees on where responsibility lies for fraud risk management. We asked employees to describe the extent to which each of six offices or groups—OGC, the OIG, the Office of Risk Management, Bank senior management, all Bank staff and managers collectively, or others—is responsible for overseeing fraud risk management activities at the Bank. The OIG received the highest response, with 73 percent saying it has "a great deal of responsibility." Bank managers told us this result is to be expected, because staff associate issues of fraud with the OIG. However, these survey results suggest confusion—a lack of a shared view, from the standpoint of antifraud culture—around the OIG's role, which includes investigating suspected fraud, rather than overseeing the Bank's fraud risk management activities. The OIG acknowledged to us that its role does not include responsibility for overseeing fraud risk management activities at the Bank.

Asked about our findings overall, Bank managers told us they view our survey results as positive because the results indicate employees have a strong awareness of fraud and the risk it presents to the Bank. For example, regarding the results about the role of the OIG, they noted that staff are actively encouraged to report suspected fraud through channels—first to OGC, for subsequent referral to the OIG. Thus, employees would understand the OIG as being responsive to fraud, and Bank managers believe this likely accounts for the survey result. Nevertheless, they said, our survey results provide an opportunity for more detailed training, to better communicate with staff. In particular, the Bank managers told us such training would focus on the Bank's approach to fraud, plus the Bank's organizational structure for addressing fraud. The training will also clarify that the OIG has an investigative function as well as an auditing function, they said.

Our employee survey results underscore the potential benefit of further fraud training. Among respondents who said they have received fraud or fraud risk-related training provided by the Bank in the last 2 years, three-quarters said it was "extremely" or "very" relevant to their job duties. Nearly two-thirds (63 percent) said it was "extremely" or "very" useful to their duties.
Overall, about half (52 percent) of respondents said fraud or fraud risk-related information obtained from management, or from any Bank resources, has increased their understanding of fraud "a great deal" or "a lot."

The differences we identified in perceptions of fraud risk and fraud management responsibilities do not, by themselves, implicate the performance of any particular antifraud control, or suggest that any additional control is necessary. However, to the extent views on significant antifraud issues, such as how active the Bank should be in preventing, detecting, and addressing fraud, or the adequacy of time devoted to underwriting, differ across the organization, the Bank cannot ensure that it is best setting an antifraud tone that permeates the organizational culture, as provided in the Fraud Risk Framework. In particular, as the framework describes, antifraud tone and culture are important parts of effective fraud risk management. These elements can provide an imperative among peers within an organization to address fraud risks, rather than have the organization rely solely on top-down directives.

The Bank Has Taken Some Steps to Assess Known Fraud Risks but Has Not Conducted a Comprehensive Fraud Risk Assessment

The Bank has taken some steps to assess fraud risk. However, it has not conducted a fraud risk assessment tailored to its operations or created a fraud risk profile, both as provided in the second component of GAO's Fraud Risk Framework. Further, under the framework, recent changes in the Bank's operating environment indicate a heightened need to do so. We also found that although the Bank has been compiling a "risk register" intended to catalog risks it faces across the organization, this compilation does not include some known fraud risks, indicating that the Bank's assessment is incomplete. In addition, we found that while the Bank has adopted a general position on the degree of risk it will tolerate, its current risk tolerance is not specific and measurable, as provided by federal internal control standards. Bank managers told us they will revise their fraud risk management practices to fully adopt the Fraud Risk Framework.

The Bank Has Taken Some Steps to Assess Known Fraud Risks but Does Not Conduct Regular, Comprehensive Fraud Risk Assessments

A leading practice of the Fraud Risk Framework calls for agencies to conduct fraud risk assessments at regular intervals, as well as when there are changes to the program or operating environment, because assessing fraud risks is an iterative process. Managers should determine where fraud can occur and the types of internal and external fraud the program faces. This includes an assessment of the likelihood and impact of fraud risks inherent to the program; that is, both fraud risks known through fraud that has been experienced and other fraud risks that can be identified based on the nature of the program.

According to a Bank report, FY2016 Enterprise Risk Assessment, the Bank is more susceptible to fraud due to "the nature of the Bank's mission, the high volume of transactions it executes, and the need for various groups within the Bank to work together to successfully defend against fraud." The Bank's short- and medium-term products are more susceptible to fraud, according to Bank managers.
Other indicators of fraud, according to the managers, include domestic geography, such as transactions that involve truck shipments; international geography, since conducting adequate due diligence can be more difficult in remote locations; and situations where there are smaller, less well-known parties on both sides of the transaction.

In this environment, the Bank has taken some steps to assess known fraud risks. Generally, the Bank's practice has been to assess particular fraud risks and lessons learned following specific instances of fraud encountered, according to Bank managers. Because it has focused on fraud already encountered, the Bank's practice has not been of the comprehensive nature provided in the Fraud Risk Framework. As an example of its current approach, according to Bank managers, the Bank experienced "significant fraud" in the early 2000s. This was chiefly in the medium-term program and, to a lesser degree, the short-term program, the managers said. As a result, the Bank made changes that reduced the fraud significantly, they said. Otherwise, according to the CRO, fraud has been addressed within product lines, as appropriate. Under its current approach, the Bank's risk assessments do not include areas where fraud has not already been detected, according to Bank managers. They acknowledged that this approach could expose the Bank to fraud risks for activities not yet discovered.

A key difference between the Bank's current approach, as illustrated above, and the leading practices provided in the Fraud Risk Framework can be seen in how fraud risks are assessed. As described later, the Bank has been compiling risks it faces across the organization, with fraud risk among them. These efforts have focused on soliciting the views of Bank staff. By contrast, the framework envisions a more comprehensive approach. Effective fraud risk assessments identify specific tools, methods, and sources for gathering information about fraud risks, according to the framework. Among other things, this can include data on trends from monitoring and detection activities. Under the framework, programs might develop surveys that specifically address fraud risks and related control activities. It may be possible, the framework suggests, to conduct focus groups, or to engage relevant stakeholders, both internal and external, in one-on-one interviews or brainstorming about types of fraud risks.

Thus, we found, the Bank's current process for assessing fraud risk has been generally reactive and episodic, rather than regularly planned and comprehensive. Rather than adopt a more proactive approach, the Bank has instead relied on the normal processing and review of transactions—which build in experience with previous fraud schemes—as the truest test for identifying fraud issues or concerns, according to Bank managers.

Recent changes in the Bank's program and operating environment also heighten the need for comprehensively assessing fraud risks, according to the Fraud Risk Framework. Such changes include the Bank's inability to approve large transactions due to the absence of a quorum. This has meant transaction activity has shifted to smaller transactions, which carry a greater risk of fraud, according to Bank managers. Additionally, Congress recently mandated that the Bank increase its focus on small businesses, whose transactions present a different risk profile than those of the Bank's large customers, according to Bank managers. Further, the Bank's transaction backlog could also become an issue in the future.
If a Board quorum is restored, there could be pressure to process transactions quickly in order to clear the backlog, which could undermine the quality of the underwriting process, according to documentation from the Office of the CRO.

According to our review, the Bank's current antifraud controls further the goal of protecting Bank resources and providing "reasonable assurance" of repayment. However, without planning and conducting regular fraud risk assessments, as identified in GAO's Fraud Risk Framework, the Bank is vulnerable to not identifying material risks that can hurt performance or its ability to fulfill its mission. As Bank managers acknowledged to us, the Bank faces acute reputational risk if new instances of large or otherwise significant fraud emerge.

The Bank Has Been Working to Identify Major Organizational Risks, but Its Identification of Fraud Risks Is Incomplete

The Bank has taken some steps in an effort to identify, manage, and respond to risks, including those related to fraud. It has been developing a "risk register"—a compilation of risks across the organization. It has also recently completed an "enterprise risk assessment" through an outside consultant. However, these efforts do not reach the full extent of the relevant leading practices of the Fraud Risk Framework. Specifically, the framework calls for agencies to identify the inherent fraud risks of a program, examine the suitability of existing fraud controls, and then prioritize "residual" fraud risks—that is, risks remaining after antifraud controls are adopted.

For the risk register, individual business units contribute items, such as types of risk and their likelihood, and methods to mitigate the risk. The register, through the Bank's Office of Risk Management, notes the risk of fraudulent deals generally, characterizing the likelihood as "somewhat likely," but having the possibility of "major" financial, operational, legal, and reputational impacts. However, particular methods of fraud known to the Bank through experience—such as applicants submitting fraudulent documentation—are absent thus far. This indicates the register is incomplete, from the standpoint of identifying where fraud can occur and the types of internal and external fraud risks the program faces, as provided in GAO's Fraud Risk Framework. Other inherent fraud risks, such as those posed by the Bank's more limited understanding of transactions made when it delegates lending authority to other institutions, are also absent from its risk register. Work continues on developing the risk register, Bank managers told us. However, adoption of the risk register has been delayed, due to a reorganization of Bank management and the vacancies on the Board. Without a more comprehensive assessment of inherent fraud risks, the Bank cannot be assured of the extent to which existing controls effectively mitigate inherent risks.

According to the chief risk officer, the Bank's risk register is part of a more wide-ranging "enterprise risk management" strategy, which includes documenting a range of risks across the organization, including fraud. In March 2017, as part of this strategy, the Bank completed the enterprise risk assessment. Based on assessments by senior Bank managers, it identifies fraud risk—defined as a "significant and high-profile fraud" conducted against the Bank—as one among a range of risks facing the Bank.
Consistent with Bank managers' representations to us, the enterprise risk assessment ranks the likelihood of fraud risk as low against other risks the Bank faces—fourth out of five among "operational" risks, and 24th out of 26 total identified risks. Figure 4 depicts how the Bank evaluates these operational risks, in a schematic pairing the likelihood of each event with its expected impact if it were to occur. In this context, fraud risk is the least prominent risk among the top operational risks identified.

In addition to operational risks, the enterprise risk assessment also details six high risks facing the Bank overall. Among them are new or unfamiliar deal structures, which may present increased repayment risk, and doing business in new and unfamiliar technologies, sectors, and industries where the Bank has limited experience. Although fraud is not explicitly identified as a risk, we note these new activities could provide an opening for those seeking to commit fraud.

During our review, Bank managers maintained that the enterprise risk assessment represents a "comprehensive fraud risk assessment" undertaken by the Bank. They also, however, acknowledged that this assessment does not contain all the elements of a fraud risk assessment as described in GAO's Fraud Risk Framework. For instance, as noted, the Bank has not conducted a comprehensive assessment of inherent fraud risks, tailored to its operations. We note that because the Bank has not undertaken a fraud risk assessment as envisioned by the Fraud Risk Framework, its ranking of fraud risk compared to other risks may change after it has completed such an assessment. This is because a comprehensive assessment may identify new fraud risks or produce revised assessments of known fraud risks, both of which could affect the relative rankings of other risks.

The Bank's Fraud Risk Tolerances Are Not Specific and Measurable

A leading practice of the Fraud Risk Framework calls for agencies to determine their fraud risk tolerance. Further, federal internal control standards state that managers should consider defining risk tolerances that are specific and measurable. In addition, under the framework, tolerance cannot be determined until the agency has identified inherent fraud risks and assessed their likelihood and impact.

As part of its overall risk management activities, the Bank has adopted a general position on its fraud risk tolerance. Specifically, Bank managers told us that, by its nature, the Bank accepts more risk than the commercial sector, and that some level of fraud is to be expected because it is not reasonable to eliminate all fraud in its programs. The instances of fraud encountered by the Bank in recent years have centered on small exposures, according to Bank managers. Thus, the current level of fraud the Bank experiences is "defensible," given the Bank's mission and the number of transactions it undertakes, according to the CRO.

Bank managers said that fraud activity has steadily declined over the last decade, based on what they cited as fraud indicators that are reviewed by the Bank's OGC. Bank managers also pointed to claims as another indication of declining fraud activity. Transaction participants file claims for losses covered under Bank loan guarantee and insurance products, such as when a borrower fails to make required payments. The Bank considers fraud to be a subset of transactions that result in claims, and managers cited declining claims activity over the last decade as an indirect measure of fraud activity.
Table 1 shows a history of claims paid for fiscal years 2008 through 2017. Overall, Bank managers told us that in light of the decline in fraud they described, the task facing the Bank is to make sure that staff do not lose their focus on fraud and become too comfortable.

We asked the Bank to provide statistics supporting the claimed long-term decline in fraud activity, based on fraud indicators. In response, managers told us the indicators are actually not "precise or numerical measures." Instead, OGC noted the office is aware of fraud activity through "consultations and general sense of day-to-day business." As for claims, we note that not all fraud activity may result in claims. Consequently, an analysis of claims alone may not reveal a complete or accurate view of fraud activity. In addition, although Bank statistics we reviewed show a decline in the number of claims filed from fiscal year 2014 through nearly the end of fiscal year 2017, the decline is likely attributable to the lapse in the Bank's authority in fiscal year 2015, according to a Bank report.

While the Bank has adopted a general position on its fraud risk tolerance—that the current level of fraud is defensible, given the Bank's mission—its current risk tolerances are not specific and measurable. Without more specific and measurable risk tolerances, the Bank cannot be assured of the extent to which any fraud risks exceed the Bank's fraud risk tolerance. For example, a measurable risk tolerance could express willingness to tolerate an estimated amount of potentially fraudulent activity, given resource constraints in eliminating all fraud risks.

The Bank Will Revise Its Practices, According to Managers

After initially telling us that the Bank's fraud risk management practices are working well and do not need modification, Bank managers later told us they will revise their approach. They now plan to conduct periodic fraud risk assessments and assess risks to determine a fraud risk profile, as provided in GAO's Fraud Risk Framework, they said. Asked what prompted the changes, the CRO attributed them to our inquiries plus the Bank's own growing experience with enterprise risk management. Bank managers also noted that since 2013, there has been an evolution in Bank antifraud controls, as part of what they refer to as a continuous improvement process.

Specifically, the Bank's new effort will include a range of new fraud management activities, according to the managers, starting with a fraud risk assessment and also including determining a fraud risk profile, on a priority-risk basis. The Bank also plans to identify residual risks and mitigating factors. In addition, according to the managers, this new work in addressing fraud risk is planned to include developing specific fraud risk tolerance or tolerances, with a metric for measuring such tolerance.

As for implementation of the planned new approach, Bank managers stated they plan to complete a fraud risk assessment by December 2018 and to determine the Bank's fraud risk profile by February 2019. However, Bank managers did not provide us with documentation describing in detail how they plan to ensure their fraud risk assessments and fraud risk profile are consistent with GAO's Fraud Risk Framework. For example, we requested documentation of any specific plans to adopt any of the four components of GAO's framework. Bank managers told us they plan to work with an outside consultant, and provided an outline of planned activities.
However, the information did not describe how the Bank will ensure its risk assessments and profile include a full range of inherent fraud risks, including known fraud risks that are absent from its current risk register. Similarly, the managers did not provide documentation describing how the Bank's fraud risk assessments and profile will include risk tolerances that are specific and measurable.

Our employee survey results highlight the importance of the Bank's planned new approach. In comments, some respondents noted the changing nature of fraud, underscoring the importance of taking a wider, more proactive approach to fraud, which the Fraud Risk Framework encourages.

Illustrative Comments from GAO's Survey of Bank Employees

"There are tricks that financial fraudsters would use that many of our staff are unaware of."

"The biggest risk is that we cease to see fraud controls as an ever-evolving process."

"Types of fraud are constantly changing."

"To assume that thieves don't evolve is inane, and to assume that you have the best, most evolved mechanisms for combating fraud is presumptuous."

Given the importance, under a more proactive approach, of being able to identify and react to new forms of fraud, we also asked employees how well they believe Bank senior management understands new or changing ways of attempting or committing fraud. About two-thirds (67 percent) said senior Bank management understands "very well" or "for the most part," with the remaining respondents undecided or believing otherwise.

The Bank Has Instituted Some Antifraud Controls but Not Developed a Strategy Based on a Fraud Risk Assessment, and Has Opportunities to Improve Fraud Awareness and Data Analytics

The Bank has instituted a number of antifraud controls but has not developed an antifraud strategy based on a fraud risk profile, or implemented specific control activities to achieve such a strategy. This is because, as discussed earlier, it has not yet completed a fraud risk assessment tailored to its operations. As described in the third component of GAO's Fraud Risk Framework, agencies should design and implement a strategy with specific control activities to address risks identified in the fraud risk assessment. We also found the Bank has opportunities to improve antifraud controls through greater fraud awareness and use of data analytics. Leading practices for fraud risk management under the third component include fraud awareness and data analytics activities, which can enhance the agency's ability to prevent and detect fraud.

The Bank currently employs a number of antifraud controls, both before and after transaction approval, which Bank managers told us include:

• specific antifraud activities within individual business units, as they operate their respective programs;
• review of transactions, including checking for fraud activity, following transaction approval; and
• later-stage review, such as examinations and recommendations by the Bank's OIG.

Preapproval antifraud efforts: Underwriting is the initial step in preventing fraud, and underwriters have a heightened awareness of fraud and irregularities, Bank managers told us. Under the Bank's antifraud procedures, underwriters in the business units should be aware of fraud risks in their transactions and be alert to indications of fraud. Prior to approval, transactions and their participants go through several evaluations. These can assist underwriters in preventing fraud, according to Bank procedures. Figure 5 describes selected preapproval evaluations.
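To make the preapproval checks concrete, the sketch below shows one hypothetical form a red-flag screen could take, using warning signs Bank managers cited earlier (nonbank financial institutions, trading entities rather than original equipment manufacturers, and less well-known parties). It is purely illustrative; the field names and the escalation rule are our assumptions, not Bank practice.

```python
# Illustrative sketch only -- not Bank code. Field names are hypothetical.

def red_flags(application: dict) -> list[str]:
    """Return the known warning signs present in an application."""
    flags = []
    if application.get("lender_type") == "nonbank financial institution":
        flags.append("nonbank financial institution involved")
    if application.get("participant_role") == "trading entity":
        flags.append("participant is a trading entity, not an OEM")
    if application.get("parties_known_to_bank") is False:
        flags.append("smaller, less well-known parties on both sides")
    return flags

application = {
    "lender_type": "nonbank financial institution",
    "participant_role": "trading entity",
    "parties_known_to_bank": False,
}

flags = red_flags(application)
if flags:
    # Flagged applications would get closer scrutiny, not automatic denial.
    print("Escalate for enhanced due diligence:", "; ".join(flags))
```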
According to the Bank, additional preapproval measures include analyzing lenders, focusing on the sufficiency of due diligence or what appears to be a high level of claims; requiring collateral on most medium-term transactions; not allowing online applications to proceed unless applicants provide required information; and using a two-step approval process, in which both the underwriter and the underwriter's supervisor must approve certain transactions.

Postapproval antifraud efforts: Postapproval monitoring is generally not directed specifically at fraud, but plays a key role in fraud detection. Specifically, Bank managers told us that the Bank typically learns of fraud through the claims process—that is, after transactions are approved. Figure 6 describes postapproval monitoring. Later, third parties, such as the Bank's OIG, review transactions and operations, the chief risk officer told us.

The Bank has developed a policy and expectations for employee conduct in matters of possible fraud, imposing a duty to report any "suspicion" of fraud to OGC or the OIG. In particular, OGC is not selective about what information it passes to the OIG, a manager told us—anything about Bank transactions is referred, no matter the strength of the evidence. In our employee survey, some respondents expressed concern that there is reliance on postapproval monitoring, versus greater scrutiny at the time of application.

Illustrative Comments from GAO's Survey of Bank Employees

• The current division of responsibilities "is not the most effective way for the Bank to oversee fraud and fraud risk, as responsibility needs to be given to the teams on the front end—such as the individual relationship managers and loan officers—not on the back end."

• The current arrangement "seems to be more of an after-the-fact approach to potentially (if reluctantly) detecting fraud than any proactive encouragement to actively prevent fraud."

Although the Bank has instituted these pre- and postapproval antifraud controls, they may not provide the most effective protection available. According to GAO's Fraud Risk Framework, the leading practice is for agencies to design and implement antifraud controls based on a strategy determined after performing a fraud risk assessment and creating a fraud risk profile. However, as previously discussed, the Bank has not yet completed such an assessment to determine such a profile. Consequently, the Bank cannot develop an antifraud strategy and associated controls that meet the leading practice until it has completed a fraud risk assessment and documented the results in a fraud risk profile.

As noted earlier, Bank managers told us they now recognize the need to conduct assessments and develop a fraud risk profile for the Bank, and that they plan to complete this work by February 2019. They further told us that, after conducting a risk assessment and developing a fraud risk profile, they plan to design and implement antifraud controls as may be indicated by the assessment, in keeping with the framework's third component. Until the Bank creates an antifraud strategy based explicitly on a fraud risk assessment and corresponding fraud risk profile, and has designed and implemented specific control activities to prevent and detect fraud, it is at risk of failing to address fraud vulnerabilities that could hurt its performance, undermine its reputation, or impair its ability to fulfill its mission.
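To illustrate what such a fraud risk profile could contain, the following is a minimal, hypothetical sketch of profile entries of the kind the framework describes: inherent risks scored for likelihood and impact, existing controls noted, and residual risk prioritized against a stated tolerance. The example risks echo ones named in this report (fraudulent documentation and delegated lending authority); all scores and the tolerance value are invented for illustration.

```python
# Illustrative sketch only -- not a Bank artifact. All scores are hypothetical.
from dataclasses import dataclass

@dataclass
class FraudRisk:
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (minor) .. 5 (severe)
    existing_controls: str
    residual_score: int    # judgment after weighing existing controls

profile = [
    FraudRisk("Fabricated shipping or invoice documentation", 4, 3,
              "Underwriting review; post-approval sampling", 8),
    FraudRisk("Fraud in delegated-authority lending", 3, 4,
              "Lender due-diligence reviews", 9),
]

TOLERANCE = 6  # hypothetical residual-risk ceiling

# Prioritize residual risks and flag any that exceed the stated tolerance.
for risk in sorted(profile, key=lambda r: r.residual_score, reverse=True):
    flag = "EXCEEDS TOLERANCE" if risk.residual_score > TOLERANCE else "within tolerance"
    print(f"{risk.residual_score:2d}  {flag:17s}  {risk.description}")
```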
The Bank Has Opportunities to Improve Fraud Awareness among Its Staff

As provided in GAO's Fraud Risk Framework, increasing awareness of potential fraud schemes can serve a preventive purpose, by helping to create a culture of integrity and compliance, as well as by enabling staff to better detect potential fraud. The Bank currently takes some steps to share information on fraud risks across the institution, through a variety of mechanisms, but it has opportunities to further improve information sharing to build fraud awareness.

Training, cited earlier, is a leading practice of the Fraud Risk Framework by which an agency can build fraud awareness. In particular, the framework cites requiring that all employees, including managers, attend training when hired and then on an ongoing basis thereafter. As discussed earlier, the Bank now conducts some training, and Bank managers told us they see our survey results as an opportunity to provide additional training. By extending training requirements to all employees, the Bank can seek to build awareness as broadly as possible and, with that, further reinforce antifraud tone and culture. Currently, according to our assessment of information the Bank provided, it does not offer dedicated fraud training across the organization, for all employees and on an ongoing basis.

Another way to build fraud awareness is information sharing. For example, a manager in the Bank's OGC told us he monitors fraud activity and communicates relevant fraud-related information to other units in the Bank, based on considerations such as whether a situation could be repeated in other cases. However, there are limitations in information sharing. For example, the Bank's OGC told us it restricts how widely it shares information on parties placed on an internally generated "watch list" of parties that should be scrutinized. The Bank also cannot share information provided by the OIG on parties identified in a confidential law enforcement database as being under investigation, managers said, because those parties may not know they are under investigation. The reasons for such caution, according to managers, include the Privacy Act of 1974 and fear of creating a "de facto debarment list" absent any formal findings of fraud. In addition, CRC division managers told us that when the division discovers fraud-related information, it communicates such information to appropriate Bank staff.

Despite these concerns, we found there are opportunities for greater compilation and sharing of information, and employees said in our survey that they believe wider sharing of fraud-related information would be beneficial to building fraud awareness and performing their duties. For example, one way of boosting fraud awareness would be if Bank managers comprehensively tracked referrals of suspected fraud matters to the OIG and shared case outcomes with Bank staff, Bank managers told us. However, Bank managers told us they do not currently maintain and share such information on cases of suspected fraud referred to the OIG. Relatedly, GAO's Fraud Risk Framework notes the opportunity for an agency to collaborate with its OIG when planning or conducting training, and to promote the results of successful OIG investigations internally. Some program managers also told us maintaining a repository of known fraud cases could aid in compliance and transaction approvals, but the Bank does not maintain and share this information with staff.
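As one hedged illustration of the repository idea, the sketch below shows a minimal in-memory store of referred fraud matters and their outcomes that staff could query by party name when processing a new application, instead of relying on personal memory. The class, fields, and example record are hypothetical; an actual implementation would also need the access controls that OGC's confidentiality concerns imply.

```python
# Minimal illustrative sketch -- not Bank code. Records are hypothetical.
from dataclasses import dataclass

@dataclass
class Referral:
    party: str
    referred_to: str   # e.g., "OIG"
    summary: str
    outcome: str = "pending"

class FraudCaseRepository:
    """In-memory index of referrals, keyed by lowercased party name."""
    def __init__(self) -> None:
        self._by_party: dict[str, list[Referral]] = {}

    def add(self, referral: Referral) -> None:
        self._by_party.setdefault(referral.party.lower(), []).append(referral)

    def history(self, party: str) -> list[Referral]:
        """Prior referrals involving this party, if any."""
        return self._by_party.get(party.lower(), [])

repo = FraudCaseRepository()
repo.add(Referral("Example Trading Co.", "OIG", "suspect invoices", "conviction"))

# During underwriting of a new application naming the same party:
for prior in repo.history("Example Trading Co."):
    print(f"Prior referral: {prior.summary} -> {prior.outcome}")
```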
In addition, as Bank managers acknowledge, compiling and maintaining information collected through the Bank's database checks on transaction participants could serve as a library of useful information. However, Bank managers told us they do not currently maintain and share such information.

In our survey, we asked employees whether Bank management provides any information on outcomes of fraud cases involving the Bank or Bank staff. Nearly half of respondents (49 percent) said no. About a third (35 percent) said yes. Among a subset of employees who reported that their job duties include direct responsibility for fraud matters, the "yes" figure was higher but still less than a majority (41 percent). Some survey respondents noted the lack of information sharing about fraud practices and case outcomes, including that staff processing transactions must rely on personal memory for fraud issues that arose in previous transactions.

Illustrative Comments from GAO's Survey of Bank Employees

"In some cases, there is no way to track bad actors or suspected fraudsters unless someone working the new transaction remembers that there was an issue with the actor in a previous transaction."

"Management seems to not want to discuss any fraud with staff. Instead, they should use the opportunity to educate staff about fraud that occurs and show the consequences that result. They need to be more open."

"While the Bank has put a lot of best practices in place, more could be done to more regularly communicate to staff about changing practices in committing and detecting fraud."

"Outcomes are rarely relayed to staff."

Underscoring the value of sharing information, our survey also found that when Bank management does share fraud-related information, Bank staff tend to find it useful in carrying out their duties. For those reporting that management does share fraud information, more than half of respondents (54 percent) said they found such information "extremely" or "very" helpful in their job duties. Similarly, for those who reported they can readily access fraud-related information on their own from internal Bank resources, nearly two-thirds (63 percent) said the information was "extremely" or "very" helpful.

In response to our inquiries, Bank managers said they plan to evaluate the feasibility of maintaining and sharing case outcome and database query information. In addition, they said OGC is exploring how it might share more fraud-related information, but in a protected way. In particular, the Bank wants to be able to share information on "integrity factors," especially at the underwriting level. One way to do this might be the distribution of fraud case studies as a refresher for staff, they said. Until the Bank makes greater efforts to share information on known fraud schemes or bad actors, the Bank forgoes the opportunity, as described in the Fraud Risk Framework, to build staff awareness that could enhance antifraud efforts. For example, by not sharing the outcomes of suspected fraud matters referred to the OIG, the Bank forgoes the opportunity to build awareness through lessons learned from actual cases, which could give staff especially relevant insight into future attempts at fraud.

The Bank Has Opportunities to Improve Data Analytics to Fight Fraud

GAO's Fraud Risk Framework cites data analytics as a leading practice for preventing and detecting fraud, in particular to mitigate the likelihood and impact of fraud. We found the Bank makes limited use of data analytics for antifraud purposes.
For example, it conducts analyses of claims cases, according to Bank managers, and, as noted earlier, considers fraud to be a subset of transactions that result in claims. Documentation of such activity provided to us by the Bank includes analyses and statistical summaries, such as the number and types of claims filed and tallies of claim decisions (for example, approved or denied). However, the Bank does not perform the broader data-analytics activities that are additional leading practices described in the Fraud Risk Framework.

According to one manager, the Bank does not perform data analytics on its transaction-related data because the Bank's OIG does not provide the specific transaction number (or "deal number") necessary to link fraud cases it successfully pursues to the specific transactions from which the OIG action arises. Without that link, the Bank cannot distinguish transactions proven to be fraudulent from other, nonfraudulent transactions in its data, the Bank manager said. The link would be necessary for data-analytics purposes, the manager said. This inability to tie proven fraud cases to individual transactions, based on the inability to obtain the key identifying information from the OIG, is a significant weakness in the Bank's postapproval transaction monitoring, the manager further said.

The Bank and its OIG take different views on this linking information. The Bank has asked the OIG to provide these specific transaction numbers in an effort to link proven fraud cases to its transaction data, according to one Bank manager. OIG officials, meanwhile, told us they always notify the Bank when a conviction is made, and provide as much information as possible and appropriate under the circumstances, including company name and individual name. OIG officials also noted that, even without the specific transaction number the Bank requests, the Bank should nevertheless be able to use OIG-provided case data to search its own transaction files and successfully locate corresponding transactions.

In response to our inquiries, Bank managers said they are now considering a move into data analytics, including predictive analytics, to guard against fraud. However, until the Bank has a feasible and cost-effective means of linking OIG cases to specific transactions, its ability to use data analytics for antifraud purposes will be limited. Without the ability to make use of data analytics, the Bank forgoes the opportunity to develop a best-practices antifraud tool that could aid in identifying potential fraud retrospectively, on transactions already approved, or prospectively, in advance of approval.

The Bank Has Opportunities to Improve Monitoring and Evaluating Outcomes of Its Fraud Risk Management Activities

The fourth and final component of GAO's Fraud Risk Framework calls for ongoing monitoring and periodic evaluations of the effectiveness of antifraud controls. This monitoring and evaluation should be conducted from the specific perspective of antifraud controls established on the basis of a comprehensive fraud risk assessment. Such activities can serve as an early warning system to help identify and resolve issues in fraud risk management—whether they involve current controls or prospective changes. Ongoing monitoring and periodic evaluations provide assurances to managers that they are effectively preventing, detecting, and responding to potential fraud. Further, according to the framework, effective monitoring and evaluation focuses on measuring outcomes and progress toward achieving objectives.
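As a concrete illustration of two points above (the name-based linkage OIG officials described, and the outcome measurement the framework calls for), the following hypothetical sketch shows how OIG case data could be joined to transaction records by normalized party name, and how a simple outcome metric could be computed from the result. The file names, columns, and tolerance threshold are assumptions, not Bank artifacts, and real matching would require more robust entity resolution than this crude normalization.

```python
# Illustrative sketch only -- not Bank code. Input files are hypothetical.
import pandas as pd

def normalize(name: str) -> str:
    """Crude party-name normalization: lowercase, drop common suffixes."""
    name = name.lower().replace(",", "").replace(".", "")
    for suffix in (" inc", " llc", " ltd", " corp"):
        name = name.removesuffix(suffix)
    return " ".join(name.split())

# Hypothetical extracts: OIG case outcomes (no deal number) and the
# Bank's own transaction records.
cases = pd.read_csv("oig_case_outcomes.csv")   # company_name, outcome
txns = pd.read_csv("transactions.csv")         # deal_number, exporter_name, product_type

cases["key"] = cases["company_name"].map(normalize)
txns["key"] = txns["exporter_name"].map(normalize)

# Candidate links between convictions and specific transactions.
linked = txns.merge(cases[cases["outcome"] == "conviction"], on="key")

# Outcome-oriented metric: confirmed-fraud transactions per 1,000
# approved transactions, by product type, against a stated tolerance.
TOLERANCE_PER_1000 = 1.0  # hypothetical
rate = (linked.groupby("product_type")["deal_number"].nunique()
        / txns.groupby("product_type")["deal_number"].nunique() * 1000).fillna(0)
print(rate[rate > TOLERANCE_PER_1000])  # product types exceeding tolerance
```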
The Bank Has Opportunities to Improve Monitoring and Evaluating Outcomes of Its Fraud Risk Management Activities

The fourth and final component of GAO's Fraud Risk Framework calls for ongoing monitoring and periodic evaluations of the effectiveness of antifraud controls. This monitoring and evaluation should be conducted from the specific perspective of antifraud controls established based on a comprehensive fraud risk assessment. Such activities can serve as an early warning system to help identify and resolve issues in fraud risk management, whether they involve current controls or prospective changes. Ongoing monitoring and periodic evaluations provide assurances to managers that they are effectively preventing, detecting, and responding to potential fraud. Further, according to the framework, effective monitoring and evaluation focus on measuring outcomes and progress toward achieving objectives. Because the Bank has not completed a comprehensive fraud risk assessment, or designed antifraud controls based on such an assessment, it is not in a position to fulfill this final component. Even so, we found the Bank does not generally evaluate the effectiveness or efficiency of its current fraud risk management practices. For example, OGC and CRC managers, who form the dedicated entity for managing fraud risks (as described earlier in component one), both told us they are unaware of any procedure to periodically assess the effectiveness of the Bank's fraud risk management policies. In addition, the Bank currently has no formal method for tracking fraud activity, according to a Bank manager. Thus, the Bank is not in a position to explicitly judge the effectiveness of its antifraud controls. Further, as described earlier, Bank managers told us the fraud indicators they do track are not precise or numerical measures and that, instead, OGC is aware of fraud activity through a general sense of daily business. Following our inquiries, Bank managers told us they plan to revise their approach to monitoring, evaluating, and adapting their fraud risk management practices. They said they now plan to evaluate the effectiveness of those practices, following adoption of the second and third components of GAO's Fraud Risk Framework, and to adapt controls as necessary, in accordance with the framework's fourth component. Timing will depend on implementation of the underlying fraud risk assessment, Bank managers told us. The Bank cannot be assured that its antifraud controls are optimal until it has fulfilled component four of GAO's Fraud Risk Framework in the comprehensive fashion envisioned, following full implementation of components two and three. In particular, it cannot be assured that its current practices are adequate, given the program's inherent fraud risks.

Conclusions

Proactively and strategically managing fraud risks can aid the Bank's mission of supporting American jobs by facilitating U.S. exports, by reducing not only the risk of financial loss to the government but also the risk of serious reputational harm to the Bank. The Bank has taken some steps to address fraud that are among the leading practices identified in GAO's Fraud Risk Framework. But overall, the Bank has approached fraud risk management on a fragmented, reactive basis, and its antifraud activities have not been marshalled into the kind of comprehensive, strategic fraud risk management regime envisioned by GAO's Fraud Risk Framework and its leading practices. Chiefly, this is because the Bank has not anchored its fraud risk management policies in a comprehensive fraud risk assessment and corresponding risk profile, tailored to its operations, and then implemented controls designed to address the specific fraud risks identified in the assessment. Some fraud risk facing the Bank is already known, such as fabricated documentation. But as the Bank acknowledges, in addition to fraud risk inherent in its complex lines of business, it also faces significant risk from new or unfamiliar deal structures it may employ and from new and unfamiliar technologies and industries it may service, where it has limited experience. Regular, comprehensive fraud risk assessments will not only address known types of fraud, but also seek to identify where fraud can occur and the types of fraud the program faces, including their likelihood and impact.
Accordingly, until the Bank begins conducting thorough, systematic assessments of its fraud risks, and compiles a risk profile prioritizing those risks, it cannot be assured that it satisfactorily understands its vulnerabilities to fraud and any gaps in its capabilities for addressing them. In turn, without developing and implementing an antifraud strategy that builds on the findings of the comprehensive risk assessments and risk profile, the Bank cannot be assured that its antifraud control activities are optimally designed for, and targeted to, the actual fraud risks it faces, meaning that it could be failing to address significant risks or targeting the wrong ones. Finally, without establishing outcome-oriented metrics and then regularly reviewing progress toward meeting these goals, the Bank cannot be assured that its antifraud control activities are working as intended. As we concluded our review, the Bank, encouragingly, said it would adopt the more proactive approach described by GAO's Fraud Risk Framework. The Bank now needs to follow through on its stated intent to change its practices and to accomplish the tasks described to us by Bank managers, as intended and in a timely fashion. This is true not only for current operations, but also prospectively, for the large transaction backlog the Bank faces, which Bank managers will process if or when the Bank's quorum issue is resolved and which could stress Bank fraud controls. The Bank's identification of a dedicated entity to lead fraud risk management activities can be an important step in the right direction if that move now becomes the start of a sustained commitment. By fully adopting the elements of the framework, the Bank can strengthen its antifraud culture, better understand fraud risks facing its products and programs, and reshape how it monitors and evaluates the outcomes of its fraud risk management activities. In doing so, it will be better positioned to protect taxpayers and its multi-billion-dollar portfolio, while still meeting its mission to support American jobs and exports. Even though Bank managers have already told us they plan to implement the framework, they did not provide us documentation describing in detail how they will ensure their fraud risk assessment and fraud risk profile are consistent with leading practices of the framework, such as by ensuring the risk assessment considers all inherent fraud risks and the risk profile reflects risk tolerances that are specific and measurable. Thus, we include the following framework-specific recommendations in order to comprehensively enumerate relevant issues we identified, as well as to present clear benchmarks of accountability for assessing Bank progress. This complete listing is important in light of the Bank's recent embrace of the framework; changes in the Bank's executive leadership and vacancies on the Bank Board; and expected congressional consideration of the Bank's reauthorization in 2019.

Recommendations for Executive Action

We are making the following seven recommendations to the Bank:

The acting Bank president and Board chairman should ensure that the Bank evaluates and implements methods to further promote and sustain an antifraud tone that permeates the Bank's organizational culture, as described in GAO's Fraud Risk Framework.
This should include consideration of requiring training on fraud risks relevant to Bank programs, for new employees and for all employees on an ongoing basis, with the training to include identifying roles and responsibilities in fraud risk management activities across the Bank. (Recommendation 1)

As the agency begins efforts to plan and conduct regular fraud risk assessments and to determine a fraud risk profile, the acting Bank president and Board chairman should ensure that the Bank's risk assessments and profile address not only known methods of fraud, including those that are absent from its current risk register, but other inherent fraud risks as well. (Recommendation 2)

As the agency begins efforts to plan and conduct regular fraud risk assessments and to determine a fraud risk profile, the acting Bank president and Board chairman should ensure that the risk profile includes risk tolerances that are specific and measurable. (Recommendation 3)

The acting Bank president and Board chairman should ensure that the Bank develops and implements an antifraud strategy with specific control activities, based upon the results of fraud risk assessments and a corresponding fraud risk profile, as provided in GAO's Fraud Risk Framework. (Recommendation 4)

The acting Bank president and Board chairman should ensure that the Bank identifies, and then implements, the best options for sharing more fraud-related information, including details of fraud case referrals and outcomes, among Bank staff, to help build fraud awareness, as described in GAO's Fraud Risk Framework. (Recommendation 5)

The acting Bank president and Board chairman should lead efforts to collaborate with the Bank's OIG to identify a feasible, cost-effective means to systematically track outcomes of fraud referrals from the Bank to the OIG, including creating a means to link the OIG's proven cases of fraud to the specific Bank transactions from which the OIG actions arose. If any such means are found to be feasible and cost-effective, the acting Bank president and Board chairman should direct appropriate staff to implement them, with such information to be used for purposes consistent with GAO's Fraud Risk Framework, such as data analytics. (Recommendation 6)

The acting Bank president and Board chairman should ensure that the Bank monitors and evaluates outcomes of fraud risk management activities, using a risk-based approach and outcome-oriented metrics, and that it subsequently adapts antifraud activities or implements new ones, as determined to be appropriate and consistent with GAO's Fraud Risk Framework. (Recommendation 7)

Agency Comments and Our Evaluation

We provided a draft of this report to the Bank for review and comment. In written comments, summarized below and reproduced in appendix III, the Bank agreed with our recommendations. The Bank also provided technical comments, which we incorporated as appropriate. In its written comments, the Bank said it will take several steps to implement our recommendations to improve its fraud risk management activities. For example, the Bank stated it would continue to evaluate and implement methods to promote and sustain an antifraud tone that permeates the Bank's organizational culture. In assessing fraud risks, the Bank stated it will include not only known risks, but also other inherent risks not yet known to have led to fraud.
Following a fraud risk assessment as provided in GAO's Fraud Risk Framework, the Bank stated that it will develop antifraud controls based on that assessment, subject to cost-benefit analysis. The Bank also stated that it will monitor and evaluate outcomes of its fraud risk management activities, and adapt existing controls or implement new controls as indicated, subject to cost-benefit analysis. The Bank further stated it will identify and implement ways to share more fraud-related information. In its written comments, the Bank also raised four concerns about our work. First, the Bank stated that it keeps substantial reserves for losses, which protect against taxpayer costs. We clarified our report to indicate that Bank officials told us they maintain reserves to protect against taxpayer costs. We did not evaluate the extent to which these reserves protect against taxpayer costs because doing so was outside the scope of our review. Second, the Bank stated that our employee survey does not directly support some of the conclusions that we draw from the responses received, and that only 24 percent of respondents were in the Export Finance area, which handles underwriting of Bank transactions. We note that the leading practices of the Fraud Risk Framework call for involving all levels of the agency in setting an antifraud tone that permeates the organizational culture. We also note that the Office of Export Finance is not the only division involved in fraud control activities. For example, during our review, Bank managers told us that employees in the Credit Review and Compliance division, the Office of the General Counsel, and the Office of the Chief Financial Officer, among other offices, are also involved in fraud control activities. Thus, we believe it is appropriate that survey responses from those who work in these and other offices are included in our survey results. As noted in our report, Bank managers, in interviews, and staff, in our employee survey, generally expressed positive views of the Bank's antifraud culture, but they hold different views on key aspects of that culture. We believe that our survey results support these findings, as well as the related conclusions and recommendation (Recommendation 1), with which the Bank agreed. Third, the Bank stated that it has been very effective in preventing, detecting, and prosecuting fraud in Bank transactions. Our review evaluated the extent to which the Bank has adopted leading practices for managing fraud risks, as described in the Fraud Risk Framework. We did not evaluate the operational effectiveness of specific Bank control activities for preventing, detecting, and prosecuting fraud because doing so was beyond the scope of our review. Fourth, the Bank stated that our report and the employee survey did not clearly and consistently distinguish between fraud and fraud risk, which may lead to confusion in both the survey responses and the analysis in the report. However, we defined the terms "actual fraud" and "fraud risk" in our employee survey, which appears in appendix II. Further, as described in greater detail in appendix I, we pretested and modified the survey to ensure questions were understood by respondents and that we used correct terminology. This process allowed us to determine whether survey questions and answer choices were clear and appropriate. Thus, we believe the survey results support our findings.
Overall, as noted, these findings include positive views of the Bank's antifraud culture as well as differing views on some aspects of that culture. We are sending copies of this report to the appropriate congressional committees, the acting president and Board chairman of the Bank, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report examines management by the Export-Import Bank of the United States (the Bank) of fraud risks in its export credit activities, by evaluating the extent to which the Bank has adopted the four components described in GAO's A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework). Specifically, we evaluate the extent to which the Bank has (1) established an organizational culture and structure conducive to fraud risk management; (2) planned regular fraud risk assessments and assessed risks to determine a fraud risk profile; (3) designed and implemented a strategy with specific control activities to mitigate assessed fraud risks; and (4) evaluated outcomes using a risk-based approach and adapted activities to improve fraud risk management. To examine the extent to which the Bank has adopted the components of GAO's Fraud Risk Framework, we reviewed Bank policy and governance documentation and other documentation; reviewed GAO and Bank Office of the Inspector General reports on fraud and fraud risk management topics; reviewed relevant reports of the Congressional Research Service and the Congressional Budget Office; and reviewed other reports and background information. Documentation we reviewed included Bank operating procedures, details of database search procedures, Bank annual reports, reports to Congress, the Bank's strategic plan, risk assessments, and other materials. We also interviewed a range of Bank managers, both at the senior-management level and among those overseeing relevant Bank operating units. These included the Bank's chief financial officer, its chief risk officer, its acting chief operating officer, those with specific antifraud responsibilities, and others responsible for individual business units, including units with responsibilities for monitoring transactions following approval. We then assessed our findings on the Bank's fraud risk management practices and its antifraud controls against provisions of the Fraud Risk Framework, which also incorporates concepts from GAO's Standards for Internal Control in the Federal Government.

Survey Development and Administration

To examine the extent to which the Bank has established an organizational culture and structure conducive to fraud risk management, we conducted a web-based survey of Bank employees. In our survey, we assessed, among other things, perceptions of the Bank's organizational culture and attitudes toward fraud and fraud risk management, and whether employees viewed senior Bank management as committed to establishing and maintaining an antifraud culture.
We surveyed all non-senior-management Bank employees, regardless of their position or length of employment, who are responsible for implementing, but not determining, Bank policy (that is, those below the level of senior vice president). There were 403 employees in our survey population, and we received 296 responses, producing a response rate of 73.5 percent. We received sufficient representation across Bank offices and divisions and, overall, obtained a range of employee views. To develop our survey instrument, we drew on background research, leading practices as identified in GAO's Fraud Risk Framework, interviews with Bank senior managers, and other sources. We conducted in-person pretests of survey questions with five Bank employees, varying in position, Bank office or division, and seniority, at Bank headquarters in Washington, D.C. We pretested the survey instrument to ensure that the questions were understood by respondents, that we used correct terminology, and that the survey was not burdensome to complete. This process allowed us to determine whether the survey questions and answer choices were clear and appropriate. We modified our survey instrument as appropriate based on pretest results and suggestions made by an independent survey specialist. The final survey instrument included closed- and open-ended questions on Bank management and tone-at-the-top; fraud-related training and information; antifraud environment; and personal experiences with fraud at the Bank. Throughout the survey instrument, we defined important terms, such as "senior management," so respondents could interpret key concepts consistently through the survey. We administered the survey, via the World Wide Web, from July 31, 2017, through September 22, 2017. To do so, we obtained from Bank management a file of Bank employees with relevant identifying information. Before we opened the survey, the Bank president, at our suggestion, sent an email to employees notifying them of the forthcoming survey and encouraging them to respond. We also sent Bank employees a notification email describing the forthcoming survey, in advance of sending employees another email providing a unique username and password to access the web-based survey. To improve the response rate, we contacted by phone Bank employees who had not yet completed the survey (nonrespondents) to determine their eligibility, update their contact information, answer any questions or concerns about the survey, and seek their commitment to participate. We also sent multiple follow-up emails to nonrespondents encouraging them to respond, and provided instructions for taking the survey. These follow-up contacts reduced the possibility of nonresponse error. We sent our follow-up reminder emails to the survey population on August 10, 17, and 29, 2017, and September 1 and 14, 2017. Because we surveyed all non-senior-management employees, the survey did not involve sampling error. To minimize nonsampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the survey instrument and in the collection, processing, and analysis of the survey data. We calculated frequencies for closed-ended responses and reviewed open-ended responses for themes and illustrative examples. When we analyzed the survey data, an independent analyst checked the statistical programs used to collect and process responses.
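To make the tabulation steps concrete, here is a minimal sketch of how the basic survey statistics described above can be computed; the response data in it are invented for illustration and are not GAO's actual survey file.

```python
# Minimal sketch of the survey tabulations described above; the responses
# listed here are made up for illustration.
from collections import Counter

population = 403          # employees in the survey population
responses = [             # one answer per respondent to a closed-ended item
    "A great deal", "A lot", "A great deal", "Some", "Unsure/don't know",
    "A lot", "A great deal", "A little",
]

# Response rate: completed responses divided by the survey population.
response_rate = len(responses) / population
print(f"Response rate: {response_rate:.1%}")

# Percentage frequencies across valid responses, as reported per question.
tallies = Counter(responses)
for choice, count in tallies.most_common():
    print(f"{choice}: {count / len(responses):.1%}")
```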
We selected survey excerpts (tallies of answers to selected questions, plus individual comments received from respondents) presented in the main text of this report based on relevance to the respective subject matter. We conducted our performance audit from October 2016 to July 2018, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Results of GAO Survey of Bank Employees: "Anti-Fraud Controls at the Export-Import Bank of the United States"

As described in appendix I, GAO conducted a survey of employees of the Export-Import Bank of the United States (the Bank) to obtain their views on the Bank's organizational culture and attitudes toward fraud and fraud risk management. We surveyed 403 employees and obtained 296 responses, for a response rate of 73.5 percent. Our survey did not rely on a sample, as we distributed it to the entire employee population identified. Although originally presented through the World Wide Web, the questions and answer choices that follow use the same wording as shown to Bank employees. Results are tallied for each question. We omit, however, all individual responses to open-ended questions, in order to protect respondent anonymity. Underlined items indicate terms for which hyperlinked definitions were available in the original survey form.

Please use these definitions when thinking about your answers:

"Fraud" generally means obtaining something of value through willful misrepresentation and, in particular, misconduct involving Bank transactions. We mean it to include actual fraud, as found through the judicial system or an administrative process, as well as "fraud risk": an opportunity, situation, or vulnerability that could allow for someone to engage in fraudulent activity.

For this section and elsewhere, two additional definitions:

"Senior management" refers to Bank managers at the senior vice president level and above. "Management in general" refers to a broader management group: first-level supervisors and above.

4. In your view, to what extent has Bank management in general established a clear anti-fraud tone for the Bank?
A great deal: 50.3%; A lot: 29.4%; Some: 10.8%; A little: 2.7%; Not at all: 1.4%; Unsure/don't know: 5.4%. (Valid responses: 296)

5. Based on the actions of Bank senior management in particular, how important do you think preventing, detecting, and otherwise addressing fraud is to the Bank?
Extremely important: 61.5%; Very important: 25.0%; Somewhat important: 7.1%; Slightly important: 1.7%; Not at all important: 1.0%; Unsure/don't know: 3.7%

6. Based on the actions of the managers of your division in particular, how important do you think preventing, detecting, and otherwise addressing fraud is to the Bank?
Extremely important: 60.5%; Very important: 27.9%; Somewhat important: 5.1%; Slightly important: 1.7%; Not at all important: 1.4%; Unsure/don't know: 3.4%

7. How clearly has Bank management in general communicated a standard of conduct that applies to all employees, and which includes the Bank's expectations of behavior concerning fraud?
Extremely clearly: 44.6%; Very clearly: 33.3%; Somewhat clearly: 16.0%; Slightly clearly: 1.7%; Not at all clear: 1.7%; Unsure/don't know: 2.7%. (Valid responses: 294)

8. Based on your experience, for each entity below, which category best describes the level of responsibility the entity has for overseeing fraud risk management activities at the Bank?

9. Thinking about your response to question 8, do you believe your answer represents the most effective way for the Bank to oversee fraud and fraud risk?
Yes: 62.4%; No: 7.1%; Unsure/don't know: 30.5%. (Valid responses: 295)

9(a). Why, or why not, is this the most effective way for the Bank to oversee fraud and fraud risk?

Fraud-Related Training and Information

10. Within the past two years, have you received fraud- or fraud risk-related training provided by the Bank?

23. In your view, should the Bank be more, or less, active in preventing, detecting, and otherwise addressing fraud or fraud risk?
Much more active: 9.8%; Somewhat more active: 25.7%; Remain the same: 43.6%; Somewhat less active: 1.7%; Much less active: –; Unsure/don't know: 19.3%

23(a). Why do you feel this is the appropriate level of activity for addressing fraud or fraud risk?

Priority and Employee Feedback

24. Among all the various activities of the Bank, where do you think preventing, detecting, and otherwise …

Excluding "Not applicable to my job or experience":
Always enough time: 14.9%; Usually enough time: 32.2%; Sometimes enough time: 14.4%; Seldom enough time: 4.6%; Never enough time: 1.1%; Unsure/don't know: 32.8%

31. If you have additional comments on any of the items above, or on fraud- or fraud risk-related issues at the Bank generally, please feel free to provide them below.

32. Would you be willing to speak with GAO regarding your answers to the survey, the topics raised above, or other fraud-related matters?

32(a). Please provide your name and contact information.

Appendix III: Comments from the Export-Import Bank of the United States

Staff Acknowledgments

In addition to the contact named above, Jonathon Oldmixon (Assistant Director), Marcus Corbin, Carrie Davidson, David Dornisch, Paulissa Earl, Colin Fallon, Dennis Fauber, Kimberly Gianopoulos, Gina Hoover, Farahnaaz Khakoo-Mausel, Heather Latta, Flavio Martinez, Maria McMullen, Carl Ramirez, Christopher H. Schmitt, Sabrina Streagle, and Celia Thomas made key contributions to this report.
Why GAO Did This Study

According to the Bank, it serves as a financier of last resort for U.S. firms that seek to sell to foreign buyers but cannot obtain private financing for their deals. Its programs support tens of thousands of American jobs and enable billions of dollars in U.S. export sales annually, the Bank says. The Bank is also backed by the full faith and credit of the United States government, meaning that taxpayers could be responsible for Bank losses. The Export-Import Bank Reform Reauthorization Act of 2015 included a provision for GAO to review the Bank's antifraud controls within 4 years, and every 4 years thereafter. This report examines the extent to which the Bank has adopted the four components of GAO's Fraud Risk Framework: commit to combating fraud; regularly assess fraud risks; design a corresponding antifraud strategy with relevant controls; and evaluate outcomes and adapt. GAO reviewed Bank documentation; interviewed a range of Bank managers; and surveyed Bank employees about the extent to which the Bank has established an organizational culture and structure conducive to fraud risk management.

What GAO Found

In managing its vulnerability to fraud, the Export-Import Bank of the United States (the Bank) has adopted some aspects of GAO's A Framework for Managing Fraud Risks in Federal Programs (Fraud Risk Framework). This framework describes leading practices in four components: organizational culture, assessment of inherent program risks, design of tailored antifraud controls, and evaluation of outcomes. As provided in the framework, for example, the Bank has identified a dedicated entity within the Bank to lead fraud risk management. GAO also found that Bank managers and staff generally hold positive views of the Bank's antifraud culture. However, management and staff hold differing views on key aspects of that culture, including how active the Bank should be in addressing fraud. For example, Bank managers told GAO the Bank's current approach has been appropriate for dealing with fraud. However, about one-third of Bank staff responding to a GAO employee survey said the Bank should be "much more active" or "somewhat more active" in preventing, detecting, and addressing fraud. These and other divergent views indicate an opportunity to better ensure the Bank sets an antifraud tone that permeates the organizational culture, as provided in the Fraud Risk Framework. GAO found the Bank has taken some steps to assess fraud risk. For example, the Bank's practice has generally been to assess particular fraud risks and lessons learned following specific instances of fraud encountered, according to Bank managers. However, the Bank has not conducted a comprehensive fraud risk assessment, as provided in the framework. The Bank has also been compiling a "register" of risks identified across the organization, including fraud. This register, however, does not include some known methods of fraud, such as submission of fraudulent documentation, indicating it is incomplete. Without planning and conducting regular fraud risk assessments as called for in the framework, the Bank is vulnerable to failing to identify fraud risks that can damage its reputation or harm its ability to support U.S. jobs through greater exports.
As provided in the framework, managers should determine where fraud can occur and the types of internal and external fraud the program faces, including an assessment of the likelihood and impact of fraud risks inherent to the program. At the conclusion of GAO's review, Bank managers said they will fully adopt the GAO framework. They said they plan to complete a fraud risk assessment by December 2018, and to determine the Bank's fraud risk profile (that is, document key findings and conclusions from the assessment) by February 2019. Work to adopt other framework components will begin afterward, the managers said. However, they did not provide details of how their efforts will be in accord with leading practices of the framework. As a result, GAO makes framework-specific recommendations in order to enumerate relevant issues and to present clear benchmarks for assessing Bank progress. This complete listing of recommendations is important in light of the Bank's recent embrace of the framework; recent changes in Bank leadership; and expected congressional consideration of the Bank's reauthorization in 2019.

What GAO Recommends

GAO makes seven recommendations, centering on conducting a fraud risk assessment, tailored to the Bank's operations, to serve as the basis for the design and evaluation of appropriate antifraud controls. The Bank agreed with GAO's recommendations, saying it will take steps to improve its fraud risk management activities.
Background

SSA Programs and Functions

The scope of SSA's operations and responsibilities is vast. One of SSA's key responsibilities is to provide financial benefits to eligible individuals through three benefit programs:

Old-Age and Survivors Insurance (OASI): provides retirement benefits to older individuals and their families and to survivors of deceased workers.

Disability Insurance (DI): provides benefits to eligible individuals who have qualifying disabilities, and their eligible family members.

Supplemental Security Income (SSI): provides income for aged, blind, or disabled individuals with limited income and resources.

In support of its mission, SSA maintains workers' earnings information and in fiscal year 2017 posted over 279 million earnings items to workers' records. SSA also determines if claimants are eligible for benefits, completing 10 million claims and more than 680,000 hearings decisions in fiscal year 2017. SSA also maintains birth and death records and issues Social Security Numbers. In fiscal year 2017, SSA issued almost 17 million new and replacement Social Security cards. Beyond administering its programs and core missions, SSA provides key administrative support to the Medicare program, partners with the Department of Homeland Security in verifying employment eligibility for new hires, and assists with the administration of other programs, such as the Supplemental Nutrition Assistance Program and programs administered by the Railroad Retirement Board. SSA's workforce is large, as is its physical footprint. About 62,000 federal employees and 15,000 state employees administer SSA programs in about 1,500 facilities nationwide. These facilities include regional offices, more than 1,200 field offices, teleservice centers, processing centers, hearings offices, the Appeals Council offices, and SSA's headquarters in Baltimore, Maryland. Customers can access SSA services in person at an SSA field office; by phone with field office staff or through a national 800 number; or online. In 2018, SSA reported that, each day, about 170,000 people visit and 250,000 call one of its field offices for various reasons, such as to file claims, ask questions, or update their information. SSA also reported that its national 800 number handles over 30 million calls each year.

Challenges to Managing SSA's Disability Workloads and Ensuring Program Integrity

Complex eligibility rules and multiple handoffs and potential layers of review make SSA's disability programs complicated and costly to administer. Program complexity arguably has made it challenging for SSA to make significant advances in efficiently managing high disability workloads, ensuring timely and consistent disability decisions, preventing benefit overpayments, and mitigating fraud risks. Our recent work highlighted some of the challenges SSA faces in making disability decisions that are timely, consistent, and based on current concepts of disability, while also preventing and deterring fraud and ensuring that only beneficiaries who are entitled to benefits receive them. These findings underscore the need for SSA leadership to approach these challenges strategically and follow through with rigorous plans in order to achieve significant improvements in its disability programs.

Making Timely Disability Decisions

In recent years, SSA made noteworthy strides in reducing its backlog of initial disability claims, but delays in deciding disability appeals continue to worsen.
SSA has reduced the number of pending claims each fiscal year since 2010, from about 842,000 in fiscal year 2010 to about 523,000 in fiscal year 2017. However, the number of appealed claims pending at the end of fiscal year 2017 was approximately 1.1 million, compared to about 700,000 in fiscal year 2010, and the average time needed to complete appeals increased from 426 days to 605 days over that same period. In our 2017 High Risk Update, we reported that SSA had taken some steps to address its growing appeals backlog, such as hiring additional administrative law judges (ALJs). SSA also published a plan in 2016 to improve appeals timeliness that called for further hiring, improving business processes, sharing workloads across offices, and making better use of IT resources, such as increasing the number of video hearings. However, SSA's Office of Inspector General (OIG) found that many of the initiatives in SSA's plan duplicated past efforts that had met with limited success. SSA also noted that some efforts, such as additional hiring, will depend on resource availability. We also reported that SSA is still developing plans to implement its broad vision for service delivery, Vision 2025, which addresses SSA's capacity to provide timely initial claims and appeals decisions. To address its appeals backlog and position itself to provide timely disability decisions at all levels, SSA leadership will need to continue to operationalize Vision 2025, plan and implement systems support for initial claims, and implement and monitor the success of its appeals initiatives.

Modernizing Disability Criteria

While SSA has made significant progress in updating the outdated occupational and medical criteria it uses to make disability eligibility decisions, some of these efforts are multiyear and will require the continued focus of top leadership. Most significantly, SSA has made strides replacing the decades-old Dictionary of Occupational Titles with a new Occupational Information System (OIS), which contains occupational data used to make disability determinations. SSA expects to have the OIS in place by 2020 and currently plans to update OIS information every 5 years thereafter. Regarding the medical criteria used to make disability decisions, we reported in our 2017 High Risk Update that SSA had published final rules for nearly all of the 14 body systems for adults and was on track to update criteria for all body systems every 3 to 5 years. While SSA has addressed all our recommendations in this area, other opportunities exist for updating aspects of SSA's disability decision process. For example, SSA officials have acknowledged that the vocational rules it uses to determine eligibility may no longer accurately reflect the nature and scope of work available in the national economy, and they stated that the agency is conducting a review to determine if changes to vocational factors are necessary. Agency leadership will play a key role in ensuring SSA pursues these opportunities to further modernize its criteria and devotes appropriate resources to continuously updating its occupational and medical criteria on a timely basis.

Enhancing the Accuracy and Consistency of Disability Decisions

Our recent work analyzed variation in the rate at which different ALJs grant disability benefits when claimants appeal an earlier denial and found that SSA's efforts to monitor the consistency of appeal hearing decisions are incomplete. In 2017, after analyzing data on hearings decisions, we estimated that the allowance (approval) rate could vary by as much as 46 percentage points between different judges with respect to a typical claim.
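As a rough illustration of how such a judge-to-judge spread can be estimated (this is a simplified sketch with synthetic data and made-up covariates, not GAO's actual model or SSA data), one can fit a logistic regression with judge indicators and compare predicted allowance probabilities for the same typical claim across judges:

```python
# Simplified illustration of estimating judge-to-judge variation in
# allowance rates for a "typical" claim; all data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "judge": rng.choice(["A", "B", "C", "D"], size=n),
    "claimant_age": rng.integers(25, 65, size=n),
})
# Synthetic outcome: allowance depends on age plus a judge-specific effect.
judge_effect = df["judge"].map({"A": -0.8, "B": -0.2, "C": 0.3, "D": 0.9})
logit = -2.0 + 0.04 * df["claimant_age"] + judge_effect
df["allowed"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Logistic regression with judge indicators, controlling for claim traits.
model = smf.logit("allowed ~ C(judge) + claimant_age", data=df).fit(disp=False)

# Predicted allowance probability for the same typical claim under each judge.
typical = pd.DataFrame({"judge": ["A", "B", "C", "D"], "claimant_age": 45})
probs = model.predict(typical)
print(f"Spread for a typical claim: {probs.max() - probs.min():.0%}")
```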
SSA conducts various reviews to monitor the accuracy and consistency of ALJ decisions, but it has not systematically evaluated whether these reviews are effective. SSA has also struggled to sustain all of its quality review efforts, in part because it reassigned staff to help expedite claims decisions. We also reported on shortcomings in SSA's Compassionate Allowance initiative (CAL), which fast-tracks disability claims for severe medical conditions that are most likely to be approved; these shortcomings could prevent claims from being consistently and accurately identified for expedited processing, and they include the lack of a systematic approach and clear criteria for designating medical conditions for inclusion in CAL. With about one in three beneficiaries being granted benefits at SSA's appeals hearing level, it remains crucial that SSA leadership commit to ensuring appeal applications receive fair and consistent treatment, including by assessing persistent and unexplained variations in ALJ allowance rates. Ensuring oversight and scrutiny of SSA's CAL initiative is also essential to avoid potential equity issues with regard to SSA's most vulnerable claimants.

Preventing and Collecting Overpayments

Benefit overpayments represent avoidable losses to the DI trust fund and, for individuals who may have incurred an overpayment despite conscientiously reporting wages, a financial hardship when required to repay and a disincentive to pursue work. In fiscal year 2015, the most recent year for which we have data, SSA identified $1.2 billion in new overpayments in its DI program and had $6.3 billion in total overpayment debt outstanding. In 2015, we reported that SSA's process for beneficiaries to report earnings (which informs whether they remain eligible for DI benefits) had a number of weaknesses, including staff not following established procedures, limited oversight, and a lack of automated reporting options for beneficiaries, such as an automated telephone system or smartphone app. SSA has made progress expanding electronic work reporting, but these efforts will not eliminate vulnerabilities caused by SSA's multifaceted processes for receiving and handling work reports, and they will require additional management focus to shore up internal controls and avoid unnecessary overpayments. Once overpayments do occur, SSA endeavors to recover them. However, we recently found that the collection of overpayment debts warrants more attention than SSA has demonstrated to date. In 2016, we reported that SSA's largest source of debt recovery is withholding a portion of beneficiaries' monthly benefit payments. However, we found that amounts withheld may not consistently reflect individuals' ability to pay and that many repayment plans could take decades to complete. We recommended SSA improve oversight and pursue additional debt recovery options, recommendations that SSA has yet to implement. Absent clear policies and oversight procedures for establishing and reviewing withholding plans, SSA's main tool for recovering overpayments, SSA cannot be sure that beneficiaries are repaying debts in appropriate amounts within appropriate time frames.
Further, by not implementing additional debt collection tools that would speed up repayment, which can extend past beneficiaries' lifetimes and is diminished in value by inflation, SSA is missing opportunities to restore debts owed to the DI trust fund.

Strategic Approach to Managing Fraud Risks

Although the extent of fraud in SSA's benefit programs is unknown, high-profile cases, such as one case reported by SSA's OIG involving 70 individuals and $14 million in fraudulent benefits, underscore the importance of continued vigilance on the part of SSA leadership in managing fraud risks to prevent fraud. We reported in 2017 that SSA established a new office responsible for coordinating antifraud programs across the agency and had taken steps to gather information on some fraud risks. However, we also found that SSA had not fully assessed its fraud risks, had not developed an overall antifraud strategy to align its efforts with those risks, and did not have a complete set of metrics to determine whether its antifraud efforts are effective. SSA has already taken action on one of our recommendations by producing a fraud risk assessment, which we will evaluate, and has stated its intent to take action on our other recommendations. Nevertheless, leadership will be essential for developing and implementing an antifraud strategy aligned with the risk assessment and for ensuring that SSA's efforts to prevent and detect fraud are effective, thereby helping to safeguard the integrity of its programs and its delivery of benefits to only eligible individuals.

Challenges to Modernizing SSA's Physical Footprint and Service Delivery

With one of the largest physical footprints of any federal agency, and in light of rising facility costs, SSA may be able to achieve efficiencies by reducing the size of its footprint and pursuing additional, cost-effective service delivery options. However, as we reported in 2013, rightsizing SSA's physical infrastructure can be complex, politically charged, and costly; expanding service delivery options is also challenging due to the complexity of SSA's disability programs and the varying needs of SSA's customers. Our recent review of SSA's plans to reconfigure its physical footprint and expand how it delivers services confirmed a number of challenges SSA must navigate. It also highlighted the importance of approaching these challenges strategically and systematically, through strong leadership that guides robust planning, data collection, and assessment efforts.

Reconfiguring SSA's Physical Footprint

In our 2017 work, we identified several challenges that could hinder SSA's ability to readily reconfigure its footprint, align it with evolving needs, and potentially achieve desirable cost savings. For example, we found that despite progress reducing its square footage and the number of occupied buildings, SSA's inflation-adjusted rental costs have remained steady. SSA's ability to further reduce or enlarge its physical space is constrained by rental markets and by union and community concerns. According to SSA officials, high rents, limited building stock, and complicated federal leasing processes present difficulties, and community needs and union concerns may further complicate relocating offices. We also found that, even though SSA is expanding its remote delivery of services, online and through new technologies, overall demand for field office services has not decreased, although demand varied greatly across SSA's offices.
Expansion of online services, such as the SSI application that became available online in 2017, presents opportunities for SSA to further reduce or reconfigure its physical footprint. However, SSA may miss those opportunities because we found that SSA had not fully integrated its strategic planning and facility planning, despite leading practices indicating that facility plans should align with an agency's strategic goals and objectives. We recommended that SSA develop a long-term facility plan that explicitly links to its strategic goals for service delivery and includes a strategy for consolidating or downsizing field offices in light of increasing use of, and geographic variation in, remote service delivery. SSA agreed with our recommendation and has since formed a Space Acquisition Review Board to consider space reductions in light of operational changes. SSA executive leadership will remain an important factor in ensuring a concerted effort to align the agency's physical footprint with its vision for future service delivery.

Expanding Remote Service Delivery

Our recent work also found that while the complexity of SSA's programs can make it challenging for customers to use online services, the agency lacked data to identify and address challenges with online applications. The online disability applications in particular can be confusing and challenging for customers to complete, according to many SSA managers and staff we interviewed. Applications that are submitted online often require follow-up contacts with applicants to obtain missing information, according to SSA front-line staff. However, while SSA has taken steps to make its online services more user-friendly, such as adding a click-to-chat function for customers who run into problems, the agency does not routinely collect data on the reasons for staff follow-ups with online applicants. Such data are critical to SSA's efforts to further improve its online applications and would ultimately allow SSA to shift more of its business online and further reconfigure its physical footprint. SSA would also benefit from establishing performance goals to help it determine whether new service delivery options are succeeding. To help address access challenges, such as limited broadband internet in some rural areas, SSA has rolled out self-service personal computers in field offices, icons linking to SSA services on computers in public libraries, and video services accessed from senior centers. SSA also recently completed a trial of customer service kiosks in seven SSA offices and third-party locations. SSA staff in field offices reported some positive impacts from these initiatives in terms of extending remote access to certain populations, but they also cited challenges, such as customers' varying ability to use self-service computers. While SSA collects some data on usage, it has not developed performance targets or goals that could help it assess these initiatives' success or identify problems. We recommended that SSA develop a cost-effective approach to identifying the most common issues with online benefit claims, and that it develop performance goals and collect performance data for alternative service delivery approaches. SSA agreed with our recommendations and has since reported taking steps to implement them. As SSA continues to expand its service delivery options, the agency's leadership will need to encourage data-driven approaches to ensure high-quality and effective alternative service delivery.
Challenges to Modernizing Information Technology

In 2016, we reported that SSA faces challenges with IT planning and management, based on over a decade of prior work that identified weaknesses in system development practices, IT governance, requirements management, strategic planning, and other aspects of IT. For example, in 2012, a GAO review reported that SSA did not have an updated IT strategic plan to guide its efforts and that its enterprise architecture lacked important content that would have allowed the agency to more effectively plan its IT investments. In addition, SSA and others have reported substantial difficulty in the agency's ability to implement its Disability Case Processing System, intended to replace 54 disparate systems used by state Disability Determination Services, citing software quality and poor system performance as issues. Consequently, in June 2016, the initiative was placed on the Office of Management and Budget's (OMB) government-wide list of 10 high-priority programs requiring attention. In February 2018, the SSA OIG completed an assessment of an independent contractor's analysis of options for the system. The SSA OIG concluded that several factors limited the analysis supporting the contractor's recommendation for SSA to continue investing in a new, custom-built version of the Disability Case Processing System. Because OMB is no longer identifying high-priority programs, in November 2017 we recommended that OMB resume identifying these programs. We also recommended that OMB ensure the Federal Chief Information Officer is directly involved in overseeing these high-priority programs, as past experience has shown that this oversight could improve accountability and achieve positive results. OMB neither agreed nor disagreed with our recommendations and has not indicated whether it will take action on them. Beyond the challenges identified in these previous reports, GAO's May 2016 report on federal agencies' legacy IT systems highlighted the increasing costs that agencies, including SSA, may face as they continue to operate and maintain at-risk legacy systems. We identified SSA's investment in IT infrastructure operations and maintenance as being among the 10 largest such expenditures of federal agencies in fiscal year 2015. Further, we pointed out that legacy systems may become increasingly expensive as agencies have to deal with issues such as obsolete parts and unsupported hardware and software, and potentially have to pay a premium to hire staff or engage contractors with the knowledge to maintain outdated systems. For example, SSA reported rehiring retired employees to maintain its systems, which include many programs written in Common Business Oriented Language (COBOL). We highlighted a group of systems for determining retirement benefits eligibility and amounts that were over 30 years old, with some written in COBOL. We also noted that the agency had ongoing efforts to modernize the systems but was experiencing cost and schedule challenges due to the complexity of the legacy systems. We recommended that the agency identify and plan to modernize or replace legacy systems, in accordance with forthcoming OMB guidance. SSA agreed, and reported that it is finalizing its Information Technology Modernization Plan. To its credit, SSA has made progress in consolidating and optimizing its data centers.
Specifically, in August 2017, we reported that, as of February 2017, SSA was one of only two agencies that had met three of the five data center optimization targets established by OMB pursuant to provisions referred to as the Federal Information Technology Acquisition Reform Act. Meeting these targets increases SSA's ability to improve its operational efficiency and achieve cost savings. In conclusion, many of the challenges facing SSA today are neither new nor fleeting, because they are inherent in the complexity and massive size of SSA's programs and the scope of broad demographic and societal changes over time. Our past work has pointed to the need for rigorous solutions to these complex problems, such as strategic planning, evaluation efforts, measuring for impact, and leveraging data; these solutions invariably require leadership attention and sustained focus. Chairman Johnson, Ranking Member Larson, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have.

GAO Contact and Acknowledgements

If you or your staff have any questions about this testimony, please contact Elizabeth Curda, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this statement are Michele Grgich (Assistant Director), Daniel Concepcion (Analyst-in-Charge), Susan Aschoff, Alex Galuten, Jean McSween, Sheila McCoy, Lorin Obler, Sabine Paul, Almeta Spencer, and Erin McLaughlin Villas.

Appendix I: GAO Letter to SSA on Priority Recommendations to Implement
Why GAO Did This Study

SSA provides vital benefits and services that affect the lives of many Americans. In fiscal year 2017, it paid out nearly $1 trillion in retirement and disability benefits to 67 million beneficiaries, and an average of 420,000 people call or visit one of its 1,200 field offices each day. However, SSA has struggled to manage its disability workloads, maintain program integrity, and modernize its service delivery and information technology systems. GAO has issued a number of reports on these challenges and placed SSA's disability programs on GAO's High Risk List, in part due to challenges with workloads and claims processing. GAO was asked to testify on challenges facing SSA. This statement summarizes ongoing SSA challenges described in SSA's strategic plan and past GAO work in three areas: (1) managing disability workloads and ensuring program integrity; (2) modernizing physical infrastructure and service delivery methods; and (3) modernizing information technology. Although GAO is not making recommendations in this statement, its prior work included recommendations to help SSA address these challenges, many of which SSA has agreed with and initiated actions on. SSA provided technical comments on a draft of this statement, which GAO incorporated as appropriate.

What GAO Found

GAO's prior work and the Social Security Administration's (SSA) strategic plan for fiscal years 2018-2022 highlight significant demographic and technological challenges facing the agency. For example, SSA's workloads are increasing as 80 million baby boomers enter their disability-prone and retirement years, and institutional knowledge and leadership at SSA will be depleted as an expected 21,000 employees retire by the end of fiscal year 2022. GAO's prior work has identified related management challenges and opportunities for SSA to further modernize and improve its disability programs, service delivery, and information technology (IT) systems.

Managing disability workloads and program integrity. SSA has long struggled to process disability claims and, more recently, appeals of denied claims in a timely manner. Consistent with a 2013 GAO recommendation, SSA produced a broad vision for improving service delivery, including ensuring prompt and accurate disability decisions. However, SSA is still developing concrete plans to implement its vision. Although SSA has initiatives underway to improve appeals backlogs, GAO reported that some of SSA's appeals initiatives are either contingent on additional funding or have met with limited success when tried in the past. GAO's prior work also identified other challenges related to SSA's disability programs and actions SSA could take, for example, to modernize disability criteria, prevent and recover overpayments, and manage fraud risks.

Modernizing physical infrastructure and service delivery. Advances in technology have the potential to change how and where SSA delivers its services. For example, individuals can now apply for some disability benefits online rather than in person. However, GAO found that SSA did not have readily available data on problems customers had with online applications or why staff support was needed. Additionally, the agency had not established performance goals to determine whether new service delivery options, such as off-site kiosks, are succeeding.
In addition, GAO found that SSA has not developed a long-term plan for its building space that, among other things, includes a strategy for downsizing offices to better reflect changes in service delivery. GAO recommended that SSA improve its building plans and do more to assess and monitor service delivery; SSA agreed.

Modernizing information technology. SSA's legacy IT systems are increasingly difficult and expensive to maintain, and GAO identified SSA's spending on IT operations and maintenance as among the 10 largest such expenditures at federal agencies in fiscal year 2015. GAO recommended that SSA identify and plan to modernize or replace its legacy systems, in accordance with forthcoming Office of Management and Budget guidance. SSA agreed, and reported that it is finalizing its Information Technology Modernization Plan.

Continuing focus by SSA leadership is critical to addressing these broad and long-term challenges and effectively delivering benefits and services to the many Americans who depend on SSA programs.
Background

Naval Forces Involved in Amphibious Operations

An amphibious operation is a military operation launched from the sea by an amphibious force, embarked in ships or craft, with the primary purpose of introducing a landing force ashore to accomplish an assigned mission. An amphibious force comprises (1) an amphibious task force and (2) a landing force, together with other forces that are trained, organized, and equipped for amphibious operations. The amphibious task force is a group of Navy amphibious ships, most frequently deployed as an Amphibious Ready Group (ARG). The landing force is a Marine Air-Ground Task Force—which includes command, aviation, ground, and logistics elements—embarked aboard the Navy amphibious ships. A Marine Expeditionary Unit (MEU) is the most commonly deployed Marine Air-Ground Task Force. Together, this amphibious force is referred to as an ARG-MEU.

The Navy's amphibious ships are part of its surface force. An ARG consists of a minimum of three amphibious ships, typically an amphibious assault ship, an amphibious transport dock ship, and an amphibious dock landing ship. Figure 1 shows the current number of amphibious ships by class and a description of their capabilities. The primary function of amphibious ships is to transport Marines and their equipment and supplies. The ARG includes an amphibious squadron that is composed of a squadron staff, a tactical air control squadron detachment, and a fleet surgical team. This task organization also includes a naval support element that is composed of a helicopter squadron for search and rescue and antisurface warfare, two landing craft detachments for cargo lift, and a beachmaster unit detachment to control beach traffic.

An MEU consists of around 2,000 Marines, their aircraft, their landing craft, their combat equipment, and about 15 days' worth of supplies. The MEU includes a standing command element; a ground element consisting of a battalion landing team; an aviation element consisting of a composite aviation squadron of multiple types of aircraft; and a logistics element consisting of a combat logistics battalion. Figure 2 provides an overview of the components of a standard ARG-MEU.

An amphibious force can be scaled up to include a larger amphibious task force, such as an Expeditionary Strike Group, and a larger landing force, such as a Marine Expeditionary Brigade or Marine Expeditionary Force (MEF), for larger operations. A Marine Expeditionary Brigade consists of 3,000 to 20,000 personnel and is organized to respond to a full range of crises, such as forcible entry and humanitarian assistance. A MEF is the largest standing Marine Air-Ground Task Force and the principal Marine Corps warfighting organization. Each MEF consists of 20,000 to 90,000 Marines. MEFs are used in major theater war and other missions across the range of military operations. There are three standing MEFs—I MEF at Camp Pendleton, California; II MEF at Camp Lejeune, North Carolina; and III MEF in Okinawa, Japan.

Navy and Marine Corps Training for Amphibious Operations

Navy ships train to a list of mission-essential tasks that are assigned based on the ship's required operational capabilities and projected operational environments. Most surface combatants, including cruisers, destroyers, and all amphibious ships, have mission-essential tasks related to amphibious operations. The Navy uses a phased approach to training, known as the Fleet Response Training Plan.
The training plan for amphibious ships is divided into five phases: maintenance, basic, advanced, integrated, and sustainment. The maintenance phase is focused on the completion of ship maintenance, with a secondary focus on individual and team training. The basic phase focuses on development of core capabilities and skills through the completion of basic-level inspections, assessments, and training requirements, among other things. This phase can include certification in areas such as mobility, communications, amphibious well-deck operations, aviation operations, and warfare training. The basic phase of training requires limited Marine Corps involvement—mainly to certify amphibious ships for well-deck and flight-deck operations. The advanced phase focuses on advanced tactical training, including amphibious planning. In the integrated phase, individual units and staffs are aggregated into an ARG and train with an embarked MEU or other combat units. The sustainment phase includes training to sustain core skills and provides an additional opportunity for training with Marine Corps units, when possible.

Marine Corps units train to accomplish a set of mission-essential tasks for the designed capabilities of the unit. For example, the mission-essential tasks for a Marine Corps infantry battalion include amphibious operations, offensive operations, defensive operations, and stability operations. Many Marine Corps units within the command, aviation, ground, and logistics elements have an amphibious-related mission-essential task. The Marine Corps uses a building-block approach to accomplish training, progressing from individual through collective training. For example, an assault amphibian vehicle battalion will progress through foundational, individual, and basic amphibious training—such as waterborne movement and ship familiarization—to advanced amphibious training, such as live training involving ship-to-shore movement conducted under realistic conditions.

Marine Corps unit commanders use Training and Readiness manuals to help develop their training plans. Training and Readiness manuals describe the training events, the frequency of training required to sustain skills, and the conditions and standards that a unit must accomplish to be certified in a mission-essential task. To be certified in the mission-essential task of amphibious operations, Marine Corps units must train to a standard that may require the use of amphibious ships. For example, ground units with amphibious-related mission-essential tasks will not be certified until live training involving sea-based operations and ship-to-shore movement has been conducted under realistic conditions. Similarly, for aviation squadrons, training for amphibious operations (called sea-based aviation operations) will not be certified until live training involving sea-based operations has been conducted under realistic conditions, including aviation operations from an amphibious platform. Similar types of units, such as all infantry battalions, may train on the same mission-essential tasks. However, unit commanders are ultimately responsible for their units' training, and a variety of factors, such as the units' assigned missions or deployment locations, can lead commanders to adopt different approaches to training.
Marine Corps units that are scheduled to deploy as part of an ARG-MEU follow a standardized 6-month predeployment training program that gradually builds collective skill sets over three phases, as depicted in figure 3.

Marine Corps' Use of Virtual Training Devices

The Marine Corps' use of virtual training devices has increased over time. Virtual training devices were first incorporated into training for the aviation community, which has used simulators for more than half a century. The Marine Corps' ground units did not begin using simulators and simulations until later. Specifically, until the 1980s, training in the ground community was primarily live training. Further advances in technology resulted in the acquisition of simulators and simulations with additional capabilities designed to help individual Marines and units acquire and refine skills through more concentrated and repetitive training. For example, the Marine Corps began using devices that allowed individual Marines to conduct training in basic and advanced marksmanship and weapons employment tactics. More recently, during operations in Iraq and Afghanistan, the Marine Corps introduced a number of new virtual training devices to prepare Marines for conditions on the ground and for emerging threats. For example, to provide initial and sustainment driver training, the Marine Corps began using simulators that can be reconfigured to replicate a variety of vehicles. In addition, in response to an increase in vehicle rollovers, the Marine Corps began using egress trainers to train Marines to safely evacuate their vehicles. The Marine Corps has also developed virtual training devices that can be used to train Marines in collective skills, such as amphibious operations. For example, the Marine Air-Ground Task Force Tactical Warfare Simulation is a constructive simulation that provides training on planning and tactical decision making for the Marine Corps' command element. See figure 4 for examples of Marine Corps devices that can be used for individual through collective training.

Navy and Marine Corps Units Completed Training for Certain Amphibious Operations Priorities but Not Others, and Efforts to Mitigate Training Shortfalls Are Incomplete

Navy and Marine Corps units deploying as part of an ARG-MEU completed their required training for amphibious operations, but several factors have limited the ability of Marine Corps units to conduct training for other amphibious operations–related priorities. The Navy and Marine Corps have taken steps to identify and address amphibious training shortfalls, but their efforts to mitigate these shortfalls have not prioritized available training resources, systematically evaluated potential training resource alternatives for accomplishing the services' amphibious operations training priorities, or monitored progress toward achieving the priorities.

Navy and Marine Corps ARG-MEU Deploying Units Completed Required Training for Amphibious Operations, but Several Factors Have Limited Training for Other Marine Corps Amphibious Operations Priorities

Navy and Marine Corps units deploying as part of ARG-MEUs have completed required training for amphibious operations, but the Marine Corps has been unable to consistently accomplish training for other service amphibious operations priorities. We found that Navy amphibious ships have completed training for amphibious operations.
Specifically, based on our review of deployment certification messages from 2014 through 2016, we found that each deploying Navy ARG completed training for the amphibious operations mission in accordance with training standards. Similarly, we found that each MEU completed all of the mission-essential tasks that are required during the predeployment training program. These mission-essential tasks cover areas such as amphibious raid, amphibious assault, and noncombatant evacuation operations, among other operations.

However, while the Marine Corps has completed amphibious operations training for the MEU, based on our review of unit-level readiness data from fiscal years 2014 through 2016, we found that the service has been unable to fully accomplish training for its other amphibious operations priorities, which include home-station unit training to support contingency requirements, service-level exercises, and experimentation and concept development for amphibious operations. Specific details of these shortfalls were omitted because the information is classified. Additionally, Marine Corps officials cited shortfalls in their ability to conduct service-level exercises that train individuals and units on amphibious operations–related skills, as well as provide opportunities to conduct experimentation and concept development for amphibious operations. In particular, officials responsible for planning and executing these exercises told us that one of the biggest challenges is aligning enough training resources, such as amphibious ships, to accurately replicate a large-scale amphibious operation. For example, officials from III MEF told us that the large-scale amphibious exercise Ssang Yong is planned to be conducted every other year, but that the exercise requires the availability and alignment of two ARG-MEUs in order to have enough forces to conduct it. These officials stated that this alignment may occur only every 3 years, instead of every other year as planned. In addition, officials from I MEF and II MEF told us that their large-scale amphibious exercises are intended to be Marine Expeditionary Brigade–level training exercises; however, these exercises are typically able to include only enough amphibious ships to support a MEU, while the other forces must be simulated. Despite these limitations, Navy and Marine Corps officials have identified these service-level exercises as a critical training venue to support training for the Marine Expeditionary Brigade command element and to rebuild the capability to command and control forces participating in amphibious operations.

Based on our analysis of interviews with 23 Marine Corps units, we found that all 23 units cited the lack of available amphibious ships as the primary factor limiting training for home-station units. The Navy's fleet of amphibious ships has declined by half in the last 25 years, from 62 in 1990 to 31 today, with current shipbuilding plans calling for four additional amphibious ships to be added by fiscal year 2024, increasing the total number of amphibious ships to 35 (see fig. 5). Navy and Marine Corps officials noted a number of issues that can affect the amount of training time that is available with the current amphibious fleet. In particular, the current fleet of ships is in a continuous cycle of maintenance, ARG-MEU predeployment training, and sustainment periods, leaving little additional time for training with home-station units and participation in service-level exercises.
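The arithmetic behind these fleet figures is worth making explicit (a worked computation from the numbers above; the percentage comparison is derived, not a figure reported by the services):

\[
62 \text{ ships (1990)} \longrightarrow 31 \text{ ships (today)}, \qquad 31 + 4 = 35 \text{ ships planned by fiscal year 2024}
\]

Even after the planned additions, the fleet would be about 56 percent of its 1990 size (since 35/62 is roughly 0.56), so the planned growth recovers only a small portion of the capacity lost since 1990.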
Navy officials told us that the Optimized Fleet Response Plan may provide additional training opportunities for Marine Corps units during the amphibious ships' sustainment periods. Given the availability of the current inventory of amphibious ships, Marine Corps requests to the Navy for amphibious ships and other craft have been difficult to fulfill. For example, data from I MEF showed that the Navy was unable to fulfill 293 of 314 I MEF requests (93 percent) for Navy ship support for training in fiscal year 2016. Similarly, data from II MEF showed that in fiscal year 2016 the Navy was unable to fulfill 19 of 40 requests for ship services. We identified issues with the completeness of these request data. Specifically, we found that the data may not fully capture the Marine Corps' demand for amphibious ships. As a result, this information may overstate the ability of the Navy to fulfill these requests. We discuss these data-reliability issues further below.

Marine Corps officials from the 23 units we interviewed also cited other factors that limit opportunities for amphibious operations training, such as the following:

Access to range space: Seventeen of 23 Marine Corps units we interviewed identified access to range space as a factor that can limit their ability to conduct amphibious operations training. Unit officials told us that priority for training resources, including range access, is given to units that will be part of a MEU deployment, leaving little range time available for other units. In addition, unit officials told us that the amount of range space available can affect the scope and realism of the training that they are able to conduct. Training for amphibious operations can require a large amount of range space, because the operational area extends from the offshore waters onto the landing beach and further inland. A complete range capability requires maneuver space, tactical approaches, and air routes that allow for maneuverability and evasive actions. However, officials from II MEF told us that the size of the landing beach near Camp Lejeune, North Carolina, makes conducting beach-clearing operations infeasible. Adequate ranges have been identified as a challenge across DOD. For example, according to DOD's 2016 Report to Congress on Sustainable Ranges, some Marine Corps installations lack fully developed maneuver corridors, training areas, and airspace to adequately support ground and air maneuver inland from landing beaches.

Maintenance delays, bad weather, and transit time: Ten of 23 Marine Corps units told us that changes to an amphibious ship's schedule resulting from maintenance overruns or bad weather can also reduce the time available for a ship to be used for training. In addition, the transit time a ship needs to reach Marine Corps units can further reduce the time available for training. This is a particular challenge for II MEF units stationed in North Carolina and South Carolina that train with amphibious ships stationed in Virginia and Florida. According to II MEF officials, transit time to Marine Corps units can take up to 18 hours in good weather, using up almost a full day of available training time.

High pace of deployments: Five of 23 Marine Corps units told us that the high pace of deployments and the need to prepare for upcoming deployments limited their opportunities to conduct training for amphibious operations.
For example, II MEF officials told us that an infantry battalion that is scheduled to deploy as part of a Special Purpose Marine Air-Ground Task Force to Africa generally does not embark on an amphibious ship or have amphibious operations as part of its assigned missions. As a result, the unit will likely not conduct amphibious operations training during its predeployment training.

Efforts to Identify and Address Amphibious Training Shortfalls Lack Strategic Training and Risk-Management Practices

The Navy and Marine Corps have taken some steps to mitigate the training shortfall for their amphibious operations priorities, but these efforts are incomplete because they have not prioritized available training resources, systematically evaluated potential training resource alternatives for accomplishing the services' amphibious operations training priorities, or monitored progress toward achieving the priorities.

The Navy and Marine Corps are in the process of identifying (1) the amount of amphibious operations capability and capacity needed to achieve the services' wartime requirements, and (2) the training resources and funding required to meet the amphibious operations–related training priorities. First, in December 2016, the Navy conducted a force structure assessment that established a need for a fleet of 38 amphibious ships. Based on the assessment, the Chief of Naval Operations and the Commandant of the Marine Corps determined that increasing the Navy's amphibious fleet from 31 ships to 38 would allow the Marine Corps to meet its wartime need of having enough combined capacity to transport two Marine Expeditionary Brigades. Specifically, a 38-ship fleet would provide 17 amphibious ships for each Marine Expeditionary Brigade, plus four additional ships to account for ships that are unavailable due to maintenance. According to Navy and Marine Corps officials, an increase in the number of amphibious ships should create additional opportunities for the Navy and Marine Corps to accomplish amphibious operations training.

Second, the Marine Corps has also recognized a need to improve the capacity and experience of its forces to conduct amphibious operations and is taking steps to identify the training resources and funding required to meet its amphibious operations–related training priorities. To accomplish this task, in 2016 the Marine Corps initiated the Amphibious Operations Training Requirements review. As part of this review, the Marine Corps has comprehensively determined which units require amphibious operations training and is in the process of refining the Training and Readiness manuals for each type of Marine Corps unit to include an amphibious-related mission-essential task as appropriate, and better emphasizing the types of conditions and standards for amphibious training in the manuals. According to officials, as of May 2017, Marine Corps Forces Command had reviewed the mission-essential tasks for 60 unit types and found that 31 unit types already had a mission-essential task for amphibious operations, while another 5 unit types required that an amphibious-related mission-essential task be added. The review further found that the other 24 unit types do not require a mission-essential task for amphibious operations. In addition, the Marine Corps Training and Education Command noted in its review that certain training standards within the training manuals are being refined in order to distinguish between levels of training accomplished.
For example, for ground-based units such as infantry battalions, an additional training standard was added for all amphibious-related mission-essential tasks specifying that a unit will not be considered both trained and certified unless live training using amphibious ships has been conducted under realistic conditions.

The Amphibious Operations Training Requirements review is also intended to accomplish other actions to better define the services' amphibious operations training priorities, but these actions were incomplete at the time of our review. Specifically, the review will also establish an objective for the number of Marine Corps forces that must be trained and ready to conduct amphibious operations at a given point in time, and the amount of funding for ship steaming days that is required to provide training for the services' amphibious operations priorities. According to officials responsible for the Amphibious Operations Training Requirements review, an expected outcome of the review is a combined Navy and Marine Corps directive, signed by the Chief of Naval Operations and the Commandant of the Marine Corps, that should provide guidance to better define a naval objective for amphibious readiness and required ship steaming days. Marine Corps officials estimated that the directive will be issued in the summer of 2017.

With these two efforts, the Navy and Marine Corps have been proactive in identifying the underlying problems with training for amphibious operations, and their ongoing efforts indicate that addressing this training shortfall is a key priority for the two services. In particular, the proposed Navy and Marine Corps directive that will result from the Amphibious Operations Training Requirements review should help establish a naval objective for amphibious readiness, with the corresponding units that need to be trained and ready in amphibious operations, as well as a basis for estimating the required amount of training resources, such as ship steaming days, to meet amphibious operations training priorities. When completed, the directive will be an important first step toward clearly identifying the total resources needed for amphibious operations training. However, the Navy's and Marine Corps' current approach to amphibious operations training does not incorporate strategic training and leading risk-management practices. Specifically, we found the following:

The Marine Corps does not prioritize all available training resources: Based on our prior work on strategic training, we found that agencies need to align their training processes and available resources to support outcomes related to the agency's missions and goals, and that those resources should be prioritized so that the most important training needs are addressed first. For certain units that are scheduled to deploy as part of an ARG-MEU, the Navy and Marine Corps have a formal training program that specifies the timing and resource needs across all phases of the training, including the number of days embarked on amphibious ships that the Navy and Marine Corps need to complete their training events. Officials stated that available training resources, including access to amphibious ships for training, are prioritized for these units. However, for other Marine Corps units not scheduled for a MEU deployment, officials described an ad hoc process to allocate any remaining availabilities of amphibious ship training time among home-station units.
Specifically, officials stated that the current process identifies units that are available for training when an amphibious ship becomes available, rather than aligning the next highest-priority units with available training resources. For example, officials at Headquarters Marine Corps told us that the Navy will identify training opportunities with amphibious ships at quarterly scheduling conferences. The Marine Corps will fill these training opportunities with units that are available to accomplish training during that period, but not based on a process that identifies its highest-priority home-station units for training. Similarly, a senior officer with First Marine Division told us that he would prioritize home-station units that have gone the longest without conducting amphibious-related training, which may not be the units with the highest priority for amphibious operations training. The Navy and Marine Corps have recognized the need to reinstitute a recurring training program for home-station units, but efforts to implement such a program had not been started at the time of our review. According to Navy officials, the Navy and Marine Corps had a recurring training program in the past that provided home-station units with amphibious operations training, called the Type Commander Amphibious Training series, or TCAT, but this program was phased out 15 years ago with the implementation of the Fleet Response Training Plan, which is more focused on ARG-MEU training. Navy and Marine Corps officials told us that reinstituting a similar training program would allow the services to better prioritize training resources and align units to achieve the services' proposed naval objective for amphibious readiness. Without establishing a process to prioritize available training resources for home-station units, the Navy and Marine Corps cannot be certain that scarce training opportunities are being aligned with their highest-priority needs.

The Navy and Marine Corps do not systematically evaluate a full range of training resource alternatives to achieve amphibious operations training priorities: Our prior work on risk management has found that evaluating and selecting alternatives are critical steps for addressing operational capability gaps. Based on our interviews with officials across the Marine Expeditionary Forces and our review of documentation, we identified a number of alternatives that could help mitigate the risk to the services' amphibious capability due to limited training opportunities. These alternatives include utilizing additional training opportunities during an amphibious ship's basic phase of training; using alternative platforms for training, such as Marine Prepositioning Force ships or the amphibious ships of allies; utilizing smaller Navy craft or pier-side ships to meet training requirements; and leveraging developmental and operational test events. However, the Navy and Marine Corps have not developed a systematic approach to explore and incorporate selected training resource alternatives into home-station training plans. Specifically, officials told us that the combined Navy and Marine Corps directive that is expected to be completed later this year will better define a naval objective for amphibious readiness and the training resources required to achieve it, and will provide guidance to the two services to better identify training resource alternatives for home-station training.
Based on our review of briefing materials on the Amphibious Operations Training Requirements review, however, we found that the services have discussed using some training resource alternatives to mitigate amphibious operations training shortfalls, such as pier-side ships to minimize the required number of ship steaming days, but they have not systematically evaluated potential alternatives. Marine Corps officials told us that fully evaluating resource alternatives, particularly the use of simulated training and pier-side ships, could allow for more amphibious training without the need for additional steaming days. Fully exploring alternatives, such as utilizing alternative platforms and pier-side ships, and incorporating a broader range of training resource alternatives into training will be important as the Navy and Marine Corps try to achieve their training priorities and could help bridge the time gap until more amphibious ships are introduced into the fleet.

The Navy and Marine Corps have not developed a process or set of metrics to monitor progress toward achieving their amphibious operations training priorities and mitigating existing shortfalls: Our prior work on risk management has found that monitoring the progress made and the results achieved is another critical step for addressing operational capability gaps. Marine Corps officials told us that the service uses its readiness reporting system (Defense Readiness Reporting System—Marine Corps) to measure the capabilities and capacity of its units to perform amphibious operations. While this reporting system allows the Marine Corps to assess the current readiness of units to perform the amphibious operations mission-essential task—an important measure—the system does not provide other information. For example, it does not allow officials to assess the status of service-wide progress in achieving amphibious operations priorities or to monitor efforts by the Marine Expeditionary Forces to establish comprehensive amphibious operations training programs. Marine Corps officials told us that they may need to capture and track additional information, such as the number of amphibious training events scheduled and completed. However, as noted above, we found that the Marine Corps does not capture complete data that could be used for these assessments, such as demand for training time with amphibious ships. For example, officials from I MEF told us they do not capture the full demand for training time with Navy ships because unit commanders will not always submit a request that they believe is unlikely to be filled. In addition, these officials stated that their requests are prescreened before being submitted to the Navy to ensure that the requests align with known periods of available ship time. As a result, requests for amphibious ships and craft are supply-driven instead of demand-driven, which could affect the services' ability to monitor progress in accomplishing unit training because an underlying metric is incomplete. Establishing a process to monitor progress in achieving amphibious operations training priorities will better enable the Navy and Marine Corps to ensure that their efforts are accomplishing the intended results and help assess the extent to which the services have mitigated any amphibious operations training shortfalls.
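To illustrate the kind of metric that remains incomplete, the fiscal year 2016 request figures cited earlier can be expressed as unfulfilled-request rates (the percentages below are computed from the counts above rather than reported by the services):

\[
\text{I MEF: } \frac{293}{314} \approx 93\%, \qquad \text{II MEF: } \frac{19}{40} \approx 48\%
\]

Because prescreening keeps some requests from ever being submitted, these rates are computed against an undercount of true demand; a demand-driven metric would likely show an even larger unmet share.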
More Comprehensively Incorporating Collaboration Practices Would Further Naval Integration Efforts for Amphibious Operations

The Navy and Marine Corps have taken some steps to improve coordination between the two services, but they have not fully incorporated leading collaboration practices that would help drive efforts to improve naval integration for amphibious operations. Our prior work on interagency collaboration has found that certain practices can help enhance and sustain collaboration among federal agencies. These key practices include (1) defining and articulating a common outcome; (2) establishing mutually reinforcing or joint strategies; (3) identifying and addressing needs by leveraging resources; (4) agreeing on roles and responsibilities; (5) establishing compatible policies, procedures, systems, and other means to operate across agency boundaries; (6) developing mechanisms to monitor, evaluate, and report on results; and (7) reinforcing agency accountability for collaborative efforts through plans and reports, among others.

Common outcomes and joint strategy: The Navy and Marine Corps have issued strategic documents that discuss the importance of improving naval integration, but the services have not developed a joint strategy that defines and articulates common outcomes to achieve naval integration. We have found that collaborative efforts require agency staff working across agency lines to define and articulate the common outcome or purpose they are seeking to achieve that is consistent with their respective agency goals and missions. In addition, collaborating agencies need to develop strategies that work in concert with those of their partners. These strategies can help align the partner agencies' activities, processes, and resources to accomplish common outcomes. Further, joint strategies can benefit from establishing specific objectives, related actions, and subtasks with measurable outcomes, target audiences, and agency leads. Based on our review of Navy and Marine Corps strategic-level documents, both services identify the importance of improving naval integration, but these documents do not define and articulate outcomes that are common among the services or identify the actions and time frames for achieving common outcomes that would be included in a joint strategy. Instead, the documents describe naval integration in varying ways, including as a means to improve the capabilities of naval forces to perform essential functions, such as sea control and maritime security; exercise command and control for large-scale operations, including amphibious operations; and establish concepts to conduct naval operations in contested environments, among other areas. For example, strategic documents developed by the Navy only broadly discuss naval integration. In March 2015, the Department of the Navy issued an updated version of A Cooperative Strategy for 21st Century Seapower. This document discusses building the future naval force, including the need to organize and equip the Marine Expeditionary Brigade to exercise command and control of joint and multinational task forces for larger operations and enable the MEF for larger operations. In January 2016, the Department of the Navy published A Design for Maintaining Maritime Superiority, stating the need to deepen operational relationships with other services to include current and future planning, concept and capability development, and assessment.
Marine Corps strategic documents provide a more detailed and expansive list of areas for improved integration with the Navy, but do not provide guidance on how to achieve improvements in those areas. For example, in March 2014, the Marine Corps issued Expeditionary Force 21, which describes the need to increase naval integration, including operational integration between the Marine Expeditionary Brigade and the Navy's Expeditionary Strike Group. Further, in September 2016 the Marine Corps issued a Marine Corps Operating Concept that establishes five tasks needed for the Marine Corps to build its future force, including integrating the naval force to fight at and from the sea.

According to Navy and Marine Corps officials, naval integration is a broad term, has different meanings across various service organizations, and is not commonly understood. For example, officials told us that the services have identified the need to develop more precise language around the term naval integration and to articulate common outcomes to create a more integrated approach to developing naval capabilities. Another senior Marine Corps training official told us that clear guidance is needed on how to define outcomes for naval integration for Navy and Marine Corps command-level staff. In particular, the official stated that without guidance it is unclear how an integrated staff should be composed—whether as two separate Navy and Marine Corps command staffs that work together, or as one staff composed of both Navy and Marine Corps personnel. The continuing lack of common outcomes and a joint strategy could limit the ability of the Navy and Marine Corps to achieve their goals for naval integration. Further, joint strategies for improving naval integration could help ensure that the services' efforts are aligned to maximize available training opportunities and resources.

Compatible policies, procedures, and systems: The Navy and Marine Corps have established several mechanisms to better coordinate their respective capabilities for amphibious operations training, but have not fully established compatible policies, procedures, and systems to foster and build naval integration. We have found that agencies need to address the compatibility of standards, policies, procedures, and data systems that will be used in the collaborative effort. These policies can be used to provide clarity about roles and responsibilities, including how the collaborative effort will be led. The Marine Corps has established a working group that provides a forum for collaboration on amphibious operations. Specifically, Marine Corps Forces Command established a Maritime Working Group to develop and manage a continuing Navy–Marine Corps quarterly collaborative process that is composed of officials from the services' headquarters, components, and operating forces. According to its mission statement, the Maritime Working Group is intended to align naval amphibious exercise planning to inform force development, war games, experimentation, and coalition participation in order to advance concepts; influence doctrine; inform naval exercise design and sourcing; inform capabilities development; and increase naval warfighting readiness. Based on our observation of the Maritime Working Group in September 2016, we found that the forum covered a broad range of topics, including exercise prioritization, experimentation, and planning for future Navy exercises.
Following the meeting, a summary of the topics discussed, along with follow-on actions to be completed, was provided to all participants. However, we found that the Navy and Marine Corps have not fully established compatible policies and procedures, such as common training tasks and standards and agreed-upon roles and responsibilities, to ensure their efforts to achieve improved naval integration are consistent and sustained. For example, on the West Coast, the Navy and Marine Corps organizations 3rd Fleet and I MEF have issued guidance that formalizes policies assigning 1st Marine Expeditionary Brigade and Expeditionary Strike Group 3 responsibility for conducting joint training. This guidance addresses the importance of Navy and Marine Corps interoperability by formalizing procedures, assigning responsibility, and providing general policy regarding training certification standards for these units. Officials from Fleet Forces Command noted that there is no similar guidance for the East Coast–based 2nd Marine Expeditionary Brigade and Expeditionary Strike Group 2. According to a Navy inspection report, Fleet Forces Command officials stated that they did not institute a deployment certification program for Expeditionary Strike Group 2 because of changing priorities at the command. As a result, the services lack clarity on the roles and responsibilities of these organizations—another key collaboration practice—that is needed to ensure these improvements are prioritized to further and sustain the collaborative effort.

Both the Navy and Marine Corps have also identified areas where more compatible training is needed to improve the skills and abilities of naval forces to perform certain missions. For example, Marine Corps training guidance from III MEF identifies a number of areas where Marine Corps units could improve collective naval capabilities by expanding training with the Navy, including joint maneuver, seizure and defense of forward naval bases, and facilitating maritime maneuver, among others. The Marine Corps Operating Concept also identifies other areas where integration with the Navy should be enhanced, including intelligence, surveillance, and reconnaissance; operating in a distributed or disaggregated environment; and employment of fifth-generation aviation, such as the F-35. However, the services have been limited in their efforts to improve naval integration in these areas because they have not established compatible training tasks and standards that would institutionalize Navy and Marine Corps unit-level training requirements. Marine Corps officials told us that without compatible training tasks and standards, there is no mechanism outside of forces deploying as part of an ARG-MEU to force continued integration between the services and help develop integrated naval capabilities.

We also found that some of the Navy and Marine Corps' systems for managing and conducting integrated training are incompatible, leading to inefficiencies in the process to manage training events involving Navy and Marine Corps units. For example, the Marine Corps has developed a system called Playbook to help align Navy and Marine Corps resources for training exercises that have been scheduled through the Force Synchronization process.
At the time of our review, the Marine Corps was in the process of inputting data for all of its scheduled training exercises, including experiments and war games, into the system in order to align training resources and capabilities to its highest-priority exercises and help build a training and exercise plan through 2020. However, the Navy uses several other data systems to track and capture its training resource requirements, and these systems are incompatible with Playbook. The lack of an interface requires the Marine Corps to manually enter and reconcile Navy information in its system. This can cause inefficiencies in arranging training. For example, officials from III MEF told us that adjustments to the Navy's maintenance schedule for amphibious ships are not always communicated in advance, which can create a misalignment between the availability of amphibious ships and that of Marine Corps units to conduct training exercises. The Marine Corps has identified the need to define the Navy's use of Playbook and explore a potential interface with Navy systems, but, as of May 2017, officials said that any evaluation, including potential cost-benefit analyses for addressing the interoperability issues, had not yet taken place. With incompatible systems for scheduling training, the services remain at risk of missing chances to maximize training opportunities for amphibious operations.

Leverage resources to maximize training opportunities: The Navy and Marine Corps have identified certain opportunities where the two services can better leverage resources to conduct additional amphibious operations training together, but these opportunities have not been fully maximized. We have found that collaborating agencies should look for opportunities to address needs by leveraging each other's resources, thus obtaining additional benefits that would not be available if they were working separately. Marine Corps Forces Command and Fleet Forces Command, as well as Marine Corps Forces Pacific and Pacific Fleet, have each established a Campaign Plan for Amphibious Operations Training. The purpose of these plans is to align resources for larger, service-level exercises for amphibious operations over a 5-year period. The goal of these exercises is to develop operational proficiency for a Marine Expeditionary Brigade–level contingency or crisis, but the specific focus of the exercise can change from year to year. For example, in 2017 the Bold Alligator exercise will focus on joint forcible entry operations and anti-access/area denial, whereas in prior years the focus has been on other operational areas, such as crisis response. We found that the Navy and Marine Corps also use mechanisms, such as scheduling conferences, to coordinate and prioritize requests for ship services for these exercises, as well as for other training events. The services are looking to better leverage available training resources for amphibious operations, but enhancing their collaborative efforts could take greater advantage of potential training opportunities. For example, Navy officials have stated that the Surface Warfare Advanced Tactical Training initiative could provide an additional opportunity for Marine Corps units to train with Navy ships. This initiative is intended to provide amphibious ships with a period of training focused on advanced tactics, such as defense of the amphibious task force and multiunit ship-to-shore movement, among other objectives.
According to a Navy official responsible for the development of this initiative, its primary focus is on advanced tactical training for Navy personnel, but greater integration with the Marine Corps may be needed to accomplish certain training objectives, such as air defense. Further, it would provide an opportunity for the Marine Corps to achieve additional amphibious operations training. However, according to this official, the Marine Corps did not provide input into how its capabilities could be fully incorporated into the Navy's advanced tactics training or identify potential opportunities to maximize amphibious operations training for both services. Further, Marine Corps officials told us that there are opportunities to use transit time during Navy community-relations events, such as port visits, to conduct amphibious training for home-station units, but these events are not always identified with enough lead time to take full advantage of the training opportunity. According to officials at II MEF, Marine Corps units typically need at least 6 months of advance notice to align their forces and equipment for a potential training opportunity. Further, Marine Corps officials told us that the Navy does not always have a fully trained staff with the amphibious ship during these events, which can limit the comprehensiveness of the training that Marine Corps units are able to accomplish. These officials also stated that the flight deck or well deck may not be certified for use at the time of these community-relations events, further limiting their utility for Marine Corps training. Despite these limitations, Marine Corps officials told us that these events can still provide training benefits, such as ship familiarization for Marines, but that these opportunities still require advance notice. By improving coordination of their training resources, the services will be better positioned to take full advantage of these scarce training opportunities.

Mechanisms to monitor results and reinforce accountability: The Navy and Marine Corps have processes to evaluate and report on the results of specific training exercises, but they have not developed mechanisms to monitor, evaluate, and report on the results of their naval integration efforts, nor have they jointly reinforced accountability for those efforts through agency plans and reports. We have found that agencies need to monitor and evaluate their efforts to enable them to identify areas for improvement and help decision makers obtain feedback for improving operational effectiveness. Further, agency plans and reports can reinforce accountability by aligning goals and strategies with the collaborative effort. For large-scale exercises, such as Bold Alligator, the Marine Corps conducts reviews that identify actions that should be sustained moving forward, as well as areas that should be improved in future exercises, including issues related to naval integration. However, the services have not established other processes or mechanisms to monitor, evaluate, and report on the results needed to measure progress in achieving service-level goals for naval integration and to align efforts to maximize training opportunities for amphibious operations.
For example, the Marine Corps does not have a process to monitor and report on results for the critical tasks identified in its Marine Corps Operating Concept, including those tasks related to naval integration, such as integrating command structures, developing concepts for littoral operations in a contested environment, and conducting expeditionary advanced base operations. Monitoring progress against these tasks, as well as against common outcomes once they are defined, should help the Navy and Marine Corps track progress toward achieving improved naval integration. While the Navy and Marine Corps have taken some steps to improve naval integration in recent years, these efforts are still in the early stages. In particular, Navy and Marine Corps officials stated that the services have not yet defined or articulated the common outcomes needed to achieve naval integration because they have not determined who would be responsible for this effort or when to begin its development. Defining and articulating common outcomes for naval integration would allow the services to more effectively incorporate other leading collaboration practices aimed at those common outcomes, to the extent deemed appropriate, such as developing a joint strategy, establishing compatible policies, leveraging resources, and monitoring results.

The Marine Corps Has Not Fully Integrated Its Virtual Training Devices into Operational Training

The Marine Corps has taken some steps to better integrate virtual training devices into its operational training. However, the Marine Corps' process to manage the development and use of its virtual training devices in operational training plans has gaps.

The Marine Corps Has Taken Some Steps to Integrate Virtual Training Devices into Operational Training

The Marine Corps has taken some steps to integrate virtual training devices into operational training and has other efforts under way. In 2013, we reported that the Marine Corps did not have information on the performance and cost of virtual training that would assist the service in assessing and comparing the benefits of virtual training as it sought to optimize the mix of live and virtual training to meet requirements and prioritize training investments. We also found that the Marine Corps had not developed overall metrics or indicators to measure how the use of virtual training devices had contributed to improving the effectiveness of training, or identified a methodology for determining the costs associated with using virtual training. We recommended that the Marine Corps develop outcome-oriented performance metrics for assessing the effect of virtual training on improving performance or proficiency and develop a methodology to identify the costs of virtual training in order to compare the costs of using live and virtual training. Further, in 2015 the Commandant of the Marine Corps issued guidance stating that the service will focus on better leveraging virtual training technology and that all types of Marine Corps forces should make extensive use of virtual training where appropriate. In response to our recommendations and the Commandant's guidance, in 2015 the Marine Corps Training and Education Command created a Simulation Assessment Working Group with stakeholders from across the Marine Corps to identify training events that could be supported by virtual training devices and to incorporate those devices into Training and Readiness manuals.
The working group found that over 7,000 of the 12,000 training events reviewed could use a virtual training device to either fully or partially meet the training standard for that event. The group also identified 135 events that may only be performed using a virtual training device or must be performed with the device as a prerequisite to live training. Based on the results of the working group, Training and Education Command updated the corresponding unit-specific Training and Readiness manuals to identify where a training event could be completed using a virtual training device. While this action represents some progress toward better incorporating virtual training devices into operational training, our recommendations remain open because the Marine Corps has not yet completed its efforts to develop specific outcome-oriented performance metrics to assess virtual training or a methodology to make more informed comparisons between the costs of live and virtual training. According to a senior Training and Education Command official, the Marine Corps is working to update its training information management system to better capture this information.

In 2015, the Marine Corps also issued a Concept of Operations (CONOPS) for the United States Marine Corps Live, Virtual, and Constructive – Training Environment (LVC-TE) (hereafter referred to as the Concept of Operations) that is intended to describe the live, virtual, and constructive training environment based on operational requirements in sufficient detail to continue the development of this training capability. According to the Concept of Operations, the goal in implementing the live, virtual, and constructive training environment is to expand training opportunities, reduce training costs, improve safety, and maintain high levels of proficiency and readiness. The Concept of Operations estimates that the live, virtual, and constructive training environment will be implemented in 2022.

Lastly, the Marine Corps has an ongoing effort to better inform users of the availability of virtual training devices that support ground-based units. Specifically, the Marine Corps Training and Education Command is developing a Ground Training Simulations Implementation Plan that is intended to provide a framework for the use of current and future virtual training devices for ground units. The Ground Training Simulations Implementation Plan is modeled after the processes used by the Marine Corps' aviation community to integrate simulators into aviation training. The Marine Corps estimates that the plan will be finalized in the summer of 2017. According to a Training and Education Command official involved in the plan's development, the plan will help address a challenge the Marine Corps has faced in educating commanders on the availability and capabilities of virtual training devices. This challenge is consistent with information we gathered during our visits to selected Marine Corps installations. Officials at the two Battle Simulation Centers we visited, for example, told us that unit commanders do not always know what virtual training devices are available and how they can be used to meet training requirements.

Marine Corps Process to Manage the Development and Use of Virtual Training Devices in Operational Training Plans Has Gaps

The Marine Corps' process to manage the development and use of virtual training devices in operational training plans has gaps due to a lack of guidance.
Specifically, the Marine Corps does not (1) include consideration of critical factors for integrating virtual training devices into operational training in its front-end planning to support the acquisition of its virtual training devices, (2) consistently consider expected and actual usage data for virtual training devices to support its investment decisions, or (3) consistently evaluate the effectiveness of its virtual training devices for operational training.

Front-End Planning

The Marine Corps' process for conducting front-end planning and analysis to support the acquisition of its virtual training devices does not include consideration of critical factors for integrating virtual training devices into operational training, such as the specific training tasks the device is intended to address, how the device would be used to meet proficiency goals, or the available time for units to train with the device. DOD's Strategic Plan for the Next Generation of Training for the Department of Defense states that the right mix of live, virtual, and constructive training capabilities will depend on training tasks and objectives, required proficiency, and available training time, among other factors. In addition, we have previously found that part of the front-end analysis process for training and development programs should include a determination of the skills and competencies in need of training and how training will build proficiency in those skills and competencies.

Based on our analysis of the Marine Corps' front-end planning documents (called system development documents) for the six virtual training devices included in our review, we found that documentation for five of the six devices did not include specific training tasks. In addition, the documentation for two devices specified that specific training tasks would be identified during the verification and validation phase, which is a type of analysis that typically takes place after the device has already been acquired, according to a senior Training and Education Command official. While the documentation for all of the devices included a high-level discussion of relevant mission areas, documentation for five of the six devices did not identify specific training tasks, such as specific training events in a unit's Training and Readiness manual, that the device was intended to address. For example, documentation for the Combined Arms Command and Control Training Upgrade System includes a high-level discussion of mission areas that the device supports, such as force application, command and control, and battlespace awareness. It also states that the device is to support training events, but it does not specify what those events are. In addition, none of the system development documents we reviewed identified proficiency goals or considered available training time for the units to use the device. According to officials at Training and Education Command, many virtual training devices in the Marine Corps' inventory were developed based on urgent needs to meet capability gaps identified by warfighters and were not based on training requirements. Of the six devices included in our review, three were acquired to meet urgent warfighter needs: the Family of Egress Trainers—Modular Amphibious Egress Trainer, the Operator Driver Simulator, and the Supporting Arms Virtual Trainer.
However, the system development documents we reviewed for those three devices were completed after the devices had been fielded to meet the urgent needs, but still did not identify specific training tasks or proficiency goals, or consider available training time for the units to use the device. Moreover, the system development documents for two of the remaining three devices we reviewed did not contain this information. While the Marine Corps did not identify and assess these factors in the front-end planning process, the Marine Corps has begun taking steps to identify these factors through efforts such as the Simulation Assessment Working Group. However, these efforts are occurring after the devices have already been acquired and fielded, leading to decisions that have potential cost implications. For example, in its analysis, the Simulation Assessment Working Group did not fully consider alternative devices that could be used to achieve specific training tasks because its methodology was to identify the one virtual training device that was considered the "best in breed" simulator for conducting each training event rather than considering all devices that could be used for the event, including those that might be more cost-effective. Officials at II MEF told us that this methodology did not include an evaluation of the device's cost compared to other devices that could achieve similar training outcomes. For example, these officials told us that the Supporting Arms Virtual Trainer was identified as a "best in breed" device for a number of training events, including calls for fire and close air support. However, these officials stated that the Deployable Virtual Training Environment device is a lower-cost alternative that could achieve similar outcomes for many of the training events that do not require the level of realism provided by the Supporting Arms Virtual Trainer. Based on information provided by Training and Education Command, the acquisition cost for the Supporting Arms Virtual Trainer is about $4.5 million per system while the acquisition cost for the Deployable Virtual Training Environment laptop is around $3,700 (see fig. 6). The Marine Corps' front-end planning process to support the acquisition of virtual training devices has gaps because the service does not have specific policies to ensure the process considers key factors. Specifically, Navy and Marine Corps acquisition policies we reviewed do not require that front-end planning consider specific training tasks the device is intended to address, how the device would be used to meet proficiency goals, or available time for units to train with the device. Training and Education Command officials acknowledged the gaps in the Marine Corps' process and stated that the front-end process for future device acquisitions would identify specific training tasks that a device will address. However, without guidance that specifically addresses these factors, the Marine Corps does not have a reasonable basis to ensure that it is acquiring the right number and type of virtual training devices to meet its operational training needs. Expected and Actual Usage Data The Marine Corps does not consistently consider expected and actual usage data for virtual training devices to support its investment decisions. Our prior work has found that agencies should establish measures that they can use in assessing training programs, such as expected training hours, which reflect the usage rates of the training program.
However, the Marine Corps did not establish expected usage rates in its system development documents for five of the six virtual training devices included in our review, and a senior Training and Education Command official said it also has not established expected usage rates since acquiring the devices. For example, the system development document for the Supporting Arms Virtual Trainer stated that the usage of the device could replace up to 33 percent of the live-fire missions required to retain annual currency, but the document does not specify that units are expected to use the device to replace such a high percentage of the live-fire missions. As a result, the Marine Corps does not have a baseline against which to assess actual usage of the device. Only the system development document for the Marine Air-Ground Task Force Tactical Warfare Simulation included usage targets, stating that usage is expected to be extensive and estimates that the device will be used for 700 hours per system per year. However, the system development documents for the other four devices we reviewed did not include any information on expected usage rates. Additionally, the Marine Corps has not consistently collected actual usage data for its virtual training devices, which could be used to inform continued investments in existing virtual training devices. During our review, a senior Marine Corps Training and Education Command official told us that Training and Education Command collects data for about two-thirds of the Marine Corps' total inventory of virtual training devices, but usage data are not available for certain devices. More specifically, the Marine Corps provided usage data for three of the six devices that were included in our review, but it was unable to provide usage data for certain systems, such as the Marine Air-Ground Task Force Tactical Warfare Simulation and the Combined Arms Command and Control Training Upgrade System. This official stated that contractors collect data on these devices, but there is no Marine Corps system to collect data on the number of Marines or hours trained. Specifically, contractors submit spreadsheets on a monthly basis showing the number of Marines who have used the device, but these data are not included in any formal reports and there is no standard database for collecting or evaluating them. The Marine Corps has not considered actual usage data in its decision making for additional investments in certain virtual training devices, despite low usage rates for a number of those devices. For example, according to available contractor data, actual usage for the Operator Driver Simulator was significantly lower than the current available hours. Based on data provided by Training and Education Command, the Operator Driver Simulator was used for approximately 7,600 hours in fiscal year 2015 and 5,600 hours in fiscal year 2016, but was available for use for approximately 192,000 hours. However, based on the results of the Simulation Assessment Working Group, Training and Education Command estimated that accomplishing all training events linked to the Operator Driver Simulator would require about 570,000 available training hours. As a result, the Simulation Assessment Working Group recommended various investment options for the Operator Driver Simulator that ranged from $56 million to $121 million, despite the current low utilization and excess capacity.
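The scale of this mismatch is easier to see when the reported figures are computed directly. The short Python sketch below is purely illustrative—it uses only the approximate, rounded hours cited above for the Operator Driver Simulator and is not drawn from any Marine Corps system:

```python
# Illustrative utilization calculation for the Operator Driver Simulator,
# using the approximate hours reported by Training and Education Command.

AVAILABLE_HOURS = 192_000   # approximate hours the simulators were available for use
REQUIRED_HOURS = 570_000    # estimated hours needed to accomplish all linked training events
usage_by_year = {2015: 7_600, 2016: 5_600}  # approximate actual usage, by fiscal year

for fiscal_year, used in usage_by_year.items():
    utilization = used / AVAILABLE_HOURS
    print(f"FY{fiscal_year}: {used:,} of {AVAILABLE_HOURS:,} available hours used "
          f"({utilization:.1%} utilization)")

# Express the estimated demand as a multiple of current capacity.
print(f"Estimated demand is {REQUIRED_HOURS / AVAILABLE_HOURS:.1f}x available capacity, "
      f"even though actual utilization was under 4 percent.")
```

Run as written, the sketch shows utilization of roughly 4.0 percent in fiscal year 2015 and 2.9 percent in fiscal year 2016, against an estimated requirement of about three times the available capacity—the tension that makes usage data central to any investment decision.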
Officials from Training and Education Command told us that they anticipate an increase in user demand for the Operator Driver Simulator based on guidance from the Commandant of the Marine Corps to make driver certification more rigorous. However, officials from Marine Corps Systems Command stated that current Operator Driver Simulators have deficiencies in supporting driver training and, therefore, Marines choose to drive live vehicles instead. The Marine Corps has not considered expected and actual usage of its virtual training devices to support investment decisions due to a lack of guidance on establishing and collecting usage data. Marine Corps training guidance for ground units states that virtual training devices shall be used, as applicable, when constraints limit the use of realistic training conditions, but it does not identify the extent to which virtual training devices are expected to be used. Without guidance on setting usage-rate expectations and assessing actual usage, the Marine Corps risks sustained investment in virtual training devices that do not meet operational training needs. Evaluate the Effectiveness of Devices We also found that the Marine Corps was not consistently evaluating the effectiveness of its virtual training devices to accomplish operational training. Our prior work has shown that agencies need to develop processes that systematically plan for and evaluate the effectiveness of their training and development efforts. These evaluations should include data measures, both quantitative and qualitative, to assess training results in areas such as increased user proficiency. Further, evaluations of training effectiveness should be used to make decisions on whether resources should be reallocated or redirected. The Marine Corps uses the verification and validation report process as its primary assessment of a virtual training device after it has been fielded, according to the senior Training and Education Command official with whom we spoke. However, based on our review of postfielding analyses for the virtual training devices included in our review, we found that the Marine Corps does not have a consistent process for selecting the devices for which to complete these analyses or for determining how the analyses should be conducted. More specifically, we were provided with verification and validation reports for only three of the six devices in our review—the Supporting Arms Virtual Trainer, the Family of Egress Trainers—Modular Amphibious Egress Trainer, and the Operator Driver Simulator—as well as plans to complete these reports for two other devices. According to a senior Training and Education Command official, Training and Education Command considers certain factors to prioritize the completion of verification and validation reports, such as planned investments for major upgrades on a device. The official also stated that Training and Education Command prioritized completing reports for these virtual training devices to specifically align with recommendations made by the Simulation Assessment Working Group. However, the Simulation Assessment Working Group does not take place on a recurrent basis, and therefore the recommendations from the group do not establish a process for prioritizing future verification and validation reports. Officials from Marine Corps Systems Command told us that program managers are now trying to perform verification and validation reports for future acquisitions prior to full acceptance of the training systems, but that this step is not mandatory.
Additionally, there is not a consistent process to include training effectiveness evaluations within the verification and validation report itself. The verification and validation process is not required to include an evaluation of effectiveness based on current guidance, but as noted in the verification and validation report for the Family of Egress Trainers—Modular Amphibious Egress Trainer, such an evaluation is essential to determine whether the capabilities of a virtual training device satisfy requirements to improve training performance and combat readiness. In two instances, the verification and validation reports for the Operator Driver Simulator and Family of Egress Trainers—Modular Amphibious Egress Trainer both included evaluations of the effectiveness of the devices in improving user proficiency, which concluded that the devices enabled Marines to successfully pass related training courses. In another instance, the Marine Corps did not conduct a training effectiveness analysis as part of the verification and validation process. Specifically, for the Supporting Arms Virtual Trainer, Marine Corps Systems Command attempted to conduct a training effectiveness evaluation, but training activity data for a statistically significant sampling of the target training audience were unavailable, which suggests the need for improved data on device usage. We further found that the training effectiveness evaluations that the Marine Corps did complete differed in how they were conducted, which can affect the quality of the information the evaluations provide. For example, the training effectiveness evaluation for the Operator Driver Simulator was conducted to determine whether the device effectively trained Marines to perform tasks required for one specific training and readiness event. The methodology included collecting training activity data from 1 fiscal year in one location and for one of the Operator Driver Simulator vehicle variants. The report noted that conducting a more-complete evaluation, along with additional data collection, would better identify opportunities to improve and enhance training. In contrast, the training effectiveness evaluation for the Family of Egress Trainers—Modular Amphibious Egress Trainer also collected training activity data, but collected data from multiple training sites and for all training courses conducted during the 1-year period used for the evaluation. According to officials from Marine Corps Systems Command, the effectiveness evaluation methods may vary based on the type of training being executed and how well the training requirements are defined. These officials stated that when the device's training requirements have been more thoroughly defined, the effectiveness evaluation can be more targeted. The Navy and Marine Corps acquisition policy and guidance documents we reviewed do not establish a process to consistently evaluate the training effectiveness of virtual training devices, including identifying the devices to be evaluated and determining what data should be collected and assessed. According to a senior Training and Education Command official, evaluating effectiveness is not a required part of the verification and validation process and is an area that needs to be addressed. The Marine Corps' Concept of Operations also identified a lack of guidance for conducting effectiveness analyses. Specifically, the Concept of Operations identifies a lack of policy guiding live, virtual, and constructive training capabilities and benefits.
It also identifies a training gap on the linkages between live, virtual, and constructive training, as well as a policy gap concerning the lack of guidance on analysis of virtual training devices after they have been fielded. Without guidance establishing a well-defined process to consistently evaluate the effectiveness of virtual training devices for training—including the selection of devices, guidelines on conducting the analysis, and the data that should be collected and assessed—the Marine Corps risks investing in devices whose value to operational training is undetermined. Conclusions The Navy and Marine Corps have identified the need to rebuild the capability to conduct amphibious operations and to reinvigorate naval integration between the services toward that end. However, the Navy and Marine Corps have not completed efforts needed to mitigate their training shortfalls for amphibious operations. Specifically, the services have not developed an approach to prioritize available training resources, systematically evaluate training resource alternatives to achieve amphibious operations priorities, and monitor progress toward achieving them. Without such an approach, the services are not well positioned to mitigate existing amphibious operations training shortfalls and begin to rebuild their amphibious capability as the services await the arrival of additional amphibious ships into the fleet. In addition, while the Navy and Marine Corps have taken a number of positive steps to improve coordination between the two services, they need to define and articulate common outcomes for naval integration. This first critical step will enable them to fully incorporate other leading collaboration practices aimed at a common purpose, such as developing a joint strategy; more fully establishing compatible policies, procedures, and systems; better leveraging resources; and establishing mechanisms to monitor results that are needed to achieve service-level goals for naval integration and to align efforts to maximize training opportunities for amphibious operations. Further, the Marine Corps' process to integrate virtual training devices into operational training has gaps. Developing guidance for the development and use of virtual training devices would help close these gaps, which is critical as virtual training will become increasingly important to the development of the capability of Marines, including the capability for conducting amphibious operations, among other mission areas. Recommendations for Executive Action To better mitigate amphibious operations training shortfalls, we recommend the Secretary of Defense direct the Secretary of the Navy, in coordination with the Chief of Naval Operations and Commandant of the Marine Corps, to develop an approach, such as building upon the Amphibious Operations Training Requirements review, to prioritize available training resources, systematically evaluate training resource alternatives to achieve amphibious operations priorities, and monitor progress toward achieving them.
To achieve desired goals and align efforts to maximize training opportunities for amphibious operations, we recommend the Secretary of Defense direct the Secretary of the Navy, in coordination with the Chief of Naval Operations and Commandant of the Marine Corps, to clarify the organizations responsible and time frames to define and articulate common outcomes for naval integration, and use those outcomes to develop a joint strategy; more fully establish compatible policies, procedures, and systems; better leverage training resources; and establish mechanisms to monitor results. To more effectively and efficiently integrate virtual training devices into operational training, we recommend that the Secretary of Defense direct the Commandant of the Marine Corps to develop guidance for the development and use of virtual training devices that includes developing requirements for virtual training devices that consider and document training tasks and objectives, required proficiency, and available training time; setting target usage rates and collecting usage data; and conducting effectiveness analysis of virtual training devices that defines a consistent process for performing the analysis, including the selection of the devices to be evaluated, guidelines on conducting the analysis, and the data that should be collected and assessed. Agency Comments We provided a draft of the classified report to DOD for review and comment. The department’s comments on the classified report are reprinted in Appendix II. In its comments, DOD concurred with all three recommendations. DOD stated that it will review the status of actions the Navy and Marine Corps plan to take in response to all three recommendations within the next twelve months. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Office of the Under Secretary of Defense for Personnel and Readiness, the Secretary of the Navy, and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The objectives of this report are to determine the extent to which (1) the Navy and Marine Corps have completed training for amphibious operations priorities and taken steps to mitigate any training shortfalls, (2) the Navy’s and Marine Corps’ efforts to improve naval integration for amphibious operations incorporate leading collaborative practices, and (3) the Marine Corps has integrated selected virtual training devices into its operational training. This report is a public version of a classified report that we issued in August 2017. DOD deemed some of the information in our August report to be classified, which must be protected from loss, compromise, or inadvertent disclosure. Therefore, this report omits classified information on select Marine Corps units’ ability to complete training for amphibious operations. Although the information provided in this report is more limited, the report addresses the same objectives as the classified report and uses the same methodology. 
We focused our review on Navy and Marine Corps organizations and units that have a role in the development and execution of training requirements for amphibious operations. For the Navy, we focused on the training requirements and accomplished training for amphibious ships. For the Marine Corps, we focused on selected active-component units that have identified training requirements for amphibious operations, including Marine Expeditionary Units (MEU) and other units with a mission-essential task for amphibious operations. We selected a nongeneralizable sample of 23 Marine Corps units to interview in order to cover geographically dispersed units under each Marine Expeditionary Force, as well as units across all elements of the Marine Air-Ground Task Force (i.e., command, ground combat, aviation combat, and logistics combat forces). We focused on the Marine Corps' integration of virtual training devices into operational training because the Navy does not have virtual training devices that simulate amphibious operations, including ship-to-shore movement, according to Navy officials. In addition, we focused on Marine Corps virtual training devices that are used to support the command and ground elements of the Marine Air-Ground Task Force. We selected a nongeneralizable sample of six virtual training devices based on the target training audience, applicability to amphibious operations training, location, and type of training events (individual or collective training) for which the devices are used. The devices included in our review are the Combined Arms Command and Control Training Upgrade System, Marine Air-Ground Task Force Tactical Warfare Simulation, Supporting Arms Virtual Trainer, Amphibious Assault Vehicle Turret Trainer, Family of Egress Trainers—Modular Amphibious Egress Trainer, and Operator Driver Simulator. To determine the extent to which the Navy and Marine Corps have completed training for amphibious operations priorities and taken steps to mitigate any training shortfalls, we analyzed deployment certification reports for all Amphibious Ready Group (ARG)—Marine Expeditionary Unit (MEU) deployments over the most recent 3-year period. We also analyzed unit-level readiness data for all Marine Corps infantry battalions, assault amphibian vehicle battalions, Osprey tilt-rotor aircraft squadrons, and Marine Expeditionary Brigades over the most recent 3-year period—from fiscal years 2014 through 2016—and compared those data against unit-level training requirements for amphibious operations. We analyzed 3 years of training data because training requirements for Marine Corps units are reviewed and updated on a 3-year cycle. We performed data-reliability procedures on the unit-level readiness data by comparing the data against related documentation and surveying knowledgeable officials on controls over reporting systems and determined that the data presented in our findings were sufficiently reliable for the purposes of this report. We interviewed Navy and Marine Corps officials to discuss any factors that limited their ability to conduct training for amphibious operations. We assessed the reliability of data on amphibious ship requests by speaking with knowledgeable officials and determined the data were sufficiently reliable for the purposes of presenting the number of actual requests submitted and fulfilled.
In addition, we reviewed processes and initiatives established by the Navy and Marine Corps to identify and assess training shortfalls for amphibious operations, including the Marine Corps’ Amphibious Operations Training Requirements review, and evaluated these processes and initiatives against our prior work on strategic training and risk management. To determine the extent to which the Navy’s and Marine Corps’ efforts to improve naval integration for amphibious operations incorporate leading collaboration practices, we reviewed the Navy and Marine Corps documents, including A Cooperative Strategy for 21st Century Seapower and the Marine Corps Operating Concept, that discuss the goal of improving naval integration. We also reviewed mechanisms that have been established to coordinate training, including campaign plans for amphibious operations; observed a working group focused on amphibious operations; and interviewed officials with both services to discuss efforts to improve naval integration. We assessed the extent to which the Navy’s and Marine Corps’ efforts toward improving naval integration have followed leading practices for collaboration that we have identified in our prior work. Specifically, we have identified eight practices described in our prior work that can help enhance and sustain collaboration. We selected seven of the eight practices most relevant to issues we identified in our prior work on collaboration to assess the status of Navy and Marine Corps collaborative efforts to improve naval integration. Based on our analysis, we selected the following seven practices: define and articulate a common outcome; establish mutually reinforcing or joint strategies; identify and address needs by leveraging resources; agree on roles and responsibilities; establish compatible policies, procedures, and other means to operate across agency boundaries; develop mechanisms to monitor, evaluate, and report on results; and reinforce agency accountability for collaborative efforts through agency plans and reports. To determine the extent to which the Marine Corps has integrated selected virtual training devices into its operational training, we collected information on the development, usage, and evaluation of virtual training devices, and their integration into operational training plans. We reviewed documentation on actions the Marine Corps has taken to integrate its virtual training devices into operational training, including documentation on the Simulation Assessment Working Groups and the Ground Training Systems Plan. We reviewed DOD and Marine Corps acquisition policies and interviewed Marine Corps officials responsible for the acquisition and oversight of virtual training devices at Training and Education Command and Marine Corps Systems Command and officials responsible for management of the virtual training devices at the Battle Simulation Centers at Camp Lejeune, North Carolina, and Camp Pendleton, California. We reviewed acquisition documents for each of the selected devices, including Capability Production Documents and Capability Development Documents, and assessed the extent to which these documents included key information as identified in leading practices for managing strategic training and DOD’s Strategic Plan for the Next Generation of Training for the Department of Defense. We also reviewed documentation on the Marine Corps process to include expected and actual usage data for virtual training devices to support investment decisions. 
Further, we reviewed analyses conducted through Verification and Validation Reports after the selected devices had been fielded, and evaluated the extent to which these documents assessed the effectiveness of the virtual training devices for improving user proficiency. The performance audit upon which this report is based was conducted from May 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with DOD from August 2017 to September 2017 to prepare this unclassified version of the original classified report for public release. This public version was also prepared in accordance with these standards. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact name above, Matthew Ullengren, Assistant Director; Russell Bryan; William Carpluk; Ron La Due Lake; Joanne Landesman; Kelly Liptan; Shahrzad Nikoo; and Roxanna Sun made key contributions to this report.
Why GAO Did This Study The Navy and Marine Corps have identified a need to improve their ability to conduct amphibious operations—operations launched from the sea by an amphibious force. Senate and House reports accompanying bills for the National Defense Authorization Act for Fiscal Year 2017 included provisions for GAO to review Navy and Marine Corps training. This report examines the extent to which (1) the Navy and Marine Corps have completed training for amphibious operations priorities and taken steps to mitigate any training shortfalls, (2) these services' efforts to improve naval integration for amphibious operations incorporate leading collaboration practices, and (3) the Marine Corps has integrated selected virtual training devices into operational training. GAO analyzed training initiatives; interviewed a nongeneralizable sample of officials from 23 units that were selected based on their training plans; analyzed training completion data; and selected a nongeneralizable sample of six virtual training devices to review based on factors such as target audience. This is a public version of a classified report GAO issued in August 2017. Information that DOD deemed classified has been omitted. What GAO Found Navy and Marine Corps units that are deploying as part of an Amphibious Ready Group and Marine Expeditionary Unit (ARG-MEU) completed their required training for amphibious operations, but other Marine Corps units have been limited in their ability to conduct training for other amphibious operations–related priorities. GAO found that several factors, including the decline in the Navy's fleet of amphibious ships from 62 in 1990 to 31 today, limited the ability of Marine Corps units to conduct training for other priorities, such as recurring training for home-station units. As a result, training completion for amphibious operations was low for some but not all Marine Corps units from fiscal years 2014 through 2016. The services have taken steps to address amphibious training shortfalls, such as more comprehensively determining units that require training. However, these efforts are incomplete because the services do not have an approach to prioritize available training resources, evaluate training resource alternatives, and monitor progress toward achieving priorities. Thus, the services are not well positioned to mitigate any training shortfalls. The Navy and Marine Corps have taken some steps to improve coordination between the two services, but have not fully incorporated leading collaboration practices to improve integration of the two services—naval integration—for amphibious operations. For example, the Navy and Marine Corps have not defined and articulated common outcomes for naval integration that would help them align efforts to maximize training opportunities for amphibious operations. The Marine Corps has taken steps to better integrate virtual training devices into operational training, but gaps remain in its process to develop and use them. GAO found that for selected virtual training devices, the Marine Corps did not conduct front-end analysis that considered key factors, such as the specific training tasks that a device would accomplish; consider device usage data to support its investment decisions; or evaluate the effectiveness of existing virtual training devices because of weaknesses in the service's guidance.
As a result, the Marine Corps risks investing in devices that are not cost-effective and whose value to operational training is undetermined. What GAO Recommends GAO recommends that the Navy and Marine Corps develop an approach for amphibious operations training and define and articulate common outcomes for naval integration; and that the Marine Corps develop guidance for the development and use of its virtual training devices. The Department of Defense concurred.
Background The Secret Service plays a critical role in protecting the President, Vice President, their immediate families, and national leaders, among others. In addition, the component is responsible for safeguarding the nation’s currency and financial payment systems. To accomplish its mission, Secret Service officials reported that, as of June 2018, the component had approximately 7,100 employees (including the Uniformed Division, special agents, and administrative, professional, and technical staff). These employees were assigned to the component’s headquarters in Washington, D.C., and 133 field offices located throughout the world (including 115 domestic offices and 18 international offices). The Secret Service’s employees are heavily dependent on the component’s IT infrastructure and communications systems to perform their daily duties. According to data reported on the Office of Management and Budget’s IT Dashboard, the component planned to spend approximately $104.8 million in fiscal year 2018 to modernize and maintain its IT environment. To manage this IT environment, the Secret Service hired a full-time CIO in November 2015. In addition, in an effort to improve its management structure, the component consolidated all IT staff and assets under this new CIO in March 2017. OCIO officials stated that these staff include the government employees who provide direct and indirect support of the day-to-day operations of the Secret Service’s enterprise systems and services. According to Secret Service officials, the component’s IT workforce included 190 staff, as of July 2018. These officials stated that 166 of these employees were located in the component’s headquarters in Washington, D.C., and 24 were located in domestic field offices. The officials also reported that these July 2018 staffing levels were below their current approved staffing level of 220 staff (which included 44 positions in domestic field offices). Secret Service IT staff also deploy to other locations, as necessary, to provide support for certain security activities. For example, the Secret Service reported that, in 2017, OCIO deployed over 79 staff to New York, N.Y., to provide communications support during the United Nations General Assembly. DHS IT Acquisition Policies and Guidance As a component of DHS, the Secret Service must follow the department’s policies and processes for managing acquisitions, including IT acquisitions. DHS categorizes its acquisition programs according to three levels that are determined by the life cycle costs of the programs. These levels then determine the extent of required program and project management and the acquisition decision authority (the individual responsible for management and oversight of the acquisition). The department also categorizes its acquisition programs as major or non- major based on expected cost. Table 1 describes the levels of DHS’s acquisition programs and their associated acquisition decision authorities. DHS’s policies and processes for managing major acquisition programs are primarily set forth in its Acquisition Management Directive 102-01 and Acquisition Management Instruction 102-01-001. In particular, these policies establish that a major acquisition program’s decision authority is to review the program at a series of predetermined acquisition decision events to assess whether the program is ready to proceed through the acquisition life cycle phases. Figure 1 depicts the acquisition life cycle established in DHS acquisition management policy. 
DHS’s Acquisition Management Directive and Instruction do not establish an acquisition life cycle framework for the department’s non-major acquisition programs. Instead, according to the Instruction, Component Acquisition Executives (i.e., the senior acquisition official within a component that is responsible for implementation, management, and oversight of the component’s acquisition process) are required to establish component-specific non-major acquisition policies and guidance that support the “spirit and intent” of the department’s acquisition policies. To that end, the Secret Service developed a policy that establishes an acquisition life cycle framework for its non-major acquisition programs. This acquisition framework for the component’s non-major acquisition programs is consistent with the acquisition framework that DHS established for its major acquisition programs. In particular, the Secret Service’s framework includes the same phases and decision events as DHS’s framework (e.g., acquisition decision event 2A, the point at which the acquisition decision authority determines whether a program may proceed into the obtain phase). In addition, DHS’s Systems Engineering Life Cycle Instruction and Guidebook outline a framework of major systems engineering activities and technical reviews that are to be conducted by all DHS programs and projects, both major and non-major. This framework is intended to ensure that appropriate systems engineering activities are planned and implemented, and that a program’s development effort is meeting the business need. In particular, the systems engineering life cycle framework consists of nine major activities (e.g., requirements definition, integration, and testing) and a set of related technical reviews (e.g., preliminary design review) and artifacts (e.g., requirements documents). DHS policy allows programs to tailor these activities, technical reviews, and artifacts based on the unique characteristics of the program (e.g., scope, complexity, and risk). For example, a program may combine systems engineering technical reviews and artifacts, or add additional reviews. This tailored approach must be documented in a program’s systems engineering life cycle tailoring plan. The systems engineering technical reviews are intended to provide DHS the opportunity to determine how well a program has completed the necessary systems engineering activities. Each technical review includes a minimum set of exit criteria that must be satisfied before a program may move on to the next systems engineering activity. At the end of the technical review, the program manager must develop a technical review completion letter that documents the outcome of the review, including stakeholder concurrence that the exit criteria were satisfied. Moreover, DHS’s agile instruction, which was first issued in April 2016 and updated in April 2018, identifies agile as the preferred development approach for the department’s IT programs and projects. Agile is a type of incremental (i.e., modular) development, which calls for the rapid delivery of software in small, short increments rather than in the typically long, sequential phases of a traditional waterfall approach. DHS’s agile instruction also states that component CIOs are to set modular (i.e., incremental) outcomes and target measures to monitor progress in achieving agile implementation for IT programs and projects. 
To that end, the department identified core metrics that its agile IT programs are to use to monitor progress, including the number of story points completed per release and the number of releases per quarter. Further, DHS policy and guidance have established an acquisition (i.e., contract) review process that is intended to enable the DHS CIO to review and effectively guide the department's IT expenditures. According to the department's IT acquisition review guidance, DHS components with a CIO (which includes the Secret Service) are to submit for DHS OCIO review IT acquisitions that (1) have total estimated procurement values of $2.5 million or more and (2) are funded by a level 1, 2, or 3 program with a life cycle cost estimate of at least $50 million (i.e., a major investment, as defined by DHS's capital planning and investment control guidance). DHS Policies Outline Component-Level CIO Responsibilities DHS policies and guidance also establish numerous responsibilities for the department's component-level CIOs that are aimed at ensuring proper oversight and management of the components' IT investments. Among other things, these component-level CIO responsibilities relate to topics such as IT budgeting, portfolio management, and oversight of programs' systems engineering life cycles. Table 2 identifies 14 selected IT oversight responsibilities for DHS's component CIOs. Overview of the Secret Service's IT Portfolio The Secret Service acquires IT infrastructure and services that are intended to improve its ability to execute its investigation and protection missions. According to data reported on the Office of Management and Budget's IT Dashboard, the Secret Service planned to spend about $104.8 million on IT in fiscal year 2018, which included approximately $34.6 million for the development and modernization of its IT infrastructure and services, and about $70.2 million for the operations and maintenance of this infrastructure (including 21 existing IT systems). Also according to data reported on the IT Dashboard, as of April 2018, the Secret Service had one major IT investment (called the Information Integration and Technology Transformation and discussed in more detail later in this report), seven non-major IT investments, and one non-standard infrastructure investment. Figure 2 depicts the Secret Service's planned IT spending for fiscal year 2018. The Secret Service Initiated the Information Integration and Technology Transformation Investment to Address IT Challenges The Secret Service has faced long-standing challenges in managing its IT infrastructure. For example, a National Security Agency audit of the Secret Service's IT environment in 2008 identified network and system vulnerabilities that needed immediate remediation to protect the component's systems and electronic information. The Secret Service determined in 2010 that it had IT capability gaps associated with three key areas: network security, information sharing and situational awareness, and operational communications. The component reported that it required a significant IT modernization effort with sustained investment of resources to replace dated and restrictive network and communications capabilities.
The Secret Service also reported in 2010 that it had 42 mission-support applications that were operating on a 1980s mainframe that lacked multi-level security (i.e., the ability to view classified information from two security levels, such as secret and top secret, at the same time), was beyond its equipment life cycle, and was at risk of failing. Further, in 2011, DHS's Office of Inspector General reported that the Secret Service's existing infrastructure did not meet current operational requirements. According to the Secret Service, this dated infrastructure was unable to support newer technologies (e.g., Internet protocol), share common DHS enterprise services, or migrate to the department's consolidated data centers. To address challenges with its IT environment, in 2009, the Secret Service initiated the IITT investment, which is intended to modernize and enhance the component's infrastructure, communications systems, applications, and processes. In particular, IITT is a portfolio of programs and projects that are meant to, among other things, improve systems availability in support of the Secret Service's business operations, increase interoperability with other government systems and networks, enhance the component's system and network security, and enable scalability to support growth. From 2010 to July 2018, according to OCIO officials, the Secret Service spent approximately $392 million on IITT. In fiscal year 2018, the component had planned to spend approximately $42.7 million on IITT (i.e., about 40 percent of its total planned IT spending for the fiscal year), according to data reported on the Office of Management and Budget's IT Dashboard. In total, the planned life cycle cost estimate for IITT is at least $811 million. As of June 2018, IITT was a major investment composed of two programs (one of which included three projects) and one standalone project (i.e., it was not part of another program) that had capabilities that were in planning or development and modernization. These programs and project were the Enabling Capabilities program, Enterprise Resource Management System program (which included three projects that were each being implemented using an agile methodology: Uniformed Division Resource Management System, Events Management, and Enterprise-wide Scheduling), and the Multi-Level Security project. Table 3 describes the IITT programs and projects that had capabilities that were in planning or development and modernization, as of June 2018. The table also includes the associated level, acquisition decision authority, estimated life cycle costs, and planned or actual dates of operational capability for each of the programs and projects. (Appendix II also provides additional information on these programs and projects.) The Enabling Capabilities program within IITT is designated as a major acquisition program. As such, its acquisition decision authority is the DHS Under Secretary for Management, and both DHS and the Secret Service provide oversight to this program. IITT's other program and project—the Enterprise Resource Management System program (which includes three projects, as discussed earlier) and Multi-Level Security project—are designated non-major acquisition programs. In June 2011, DHS's Under Secretary for Management delegated acquisition decision authority for this non-major program and project to the Secret Service Component Acquisition Executive.
As such, oversight of the Enterprise Resource Management System program (including its three projects) and the Multi-Level Security project is conducted primarily at the component level. The Secret Service also implemented other capabilities that are now in operations and maintenance (i.e., the capabilities have been fielded and are operational) as part of the IITT investment, such as a capability to move data between systems in separate classification levels (e.g., top secret and secret) and communications interoperability. Table 4 describes IITT capabilities that are in operations and maintenance. DHS's Management of Human Capital Is a High-Risk Effort DHS, including the Secret Service, has faced long-standing challenges in effectively managing its workforce. In January 2003, we designated the implementation and transformation of DHS as high risk, including its management of human capital, because it had to transform 22 agencies—several with major management challenges—into one department. This represented an enormous and complex undertaking that would require time to achieve in an effective and efficient manner. Since that time, the department has made important progress in strengthening and integrating its management functions. Nevertheless, we have continued to report that significant work remains for DHS to improve these management functions. Among other things, we previously reported that the department had lower average employee morale than the average for the rest of the federal government. We also reported that, in 2011, based on employee responses to the Office of Personnel Management's Federal Employee Viewpoint Survey—a tool that measures employees' perceptions of whether and to what extent conditions characterizing successful organizations are present in their agency—DHS was ranked 31st out of 33 large agencies on the Partnership for Public Service's Best Places to Work in the Federal Government rankings. The most recent results of these surveys in 2017 showed that DHS continues to maintain its low rankings. DHS's Office of Inspector General has reported on challenges that the Secret Service has faced in managing its IT workforce. Specifically, in October 2016, the Inspector General reported that the Secret Service CIO did not have oversight of, or authority over, all IT resources, including the workforce; in particular, almost all of the component's IT employees were located in a division outside of OCIO; and the Secret Service had vacancies in key positions responsible for managing IT, including not having a full-time CIO from December 2014 through November 2015. As previously discussed, the Secret Service has taken actions to address these two issues with the management of its IT workforce. These actions included hiring its full-time CIO in November 2015 and consolidating the workforce and all IT assets under this CIO in March 2017. The Secret Service CIO Fully Implemented Most of the Required Responsibilities Of the 14 selected responsibilities established for component-level CIOs in DHS's IT management policies, the Secret Service CIO had fully implemented 11 responsibilities and had partially implemented 3 responsibilities. Table 5 summarizes the extent to which the Secret Service CIO had implemented each of the 14 responsibilities. The Secret Service CIO fully implemented 11 of the 14 selected component-level CIO responsibilities.
Examples of the responsibilities that the CIO fully implemented are as follows: Develop, implement, and maintain a detailed IT strategic plan. Consistent with DHS’s IT Integration and Management directive, in January 2017, the Secret Service CIO developed an IT strategic plan that outlined the CIO’s strategic IT goals and objectives, as well as tasks intended to meet the goals and objectives. The CIO maintained this strategic plan, to include updating it in January 2018. The CIO also took steps to implement the tasks identified within the strategic plan, such as working to develop an IT training program. In particular, as part of this effort to develop an IT training program, OCIO identified recommended training for the office’s various IT workforce groups (discussed in more detail later in this report). Concur with each program’s and/or project’s systems engineering life cycle tailoring plan. In accordance with DHS’s Systems Engineering Life Cycle instruction, the Secret Service CIO concurred with the systems engineering life cycle tailoring plan for one program and three projects included in the Secret Service’s IITT investment. Specifically, the CIO documented his approval via his signature on the tailoring plans for IITT’s Enabling Capabilities program, and Multi-Level Security, Uniformed Division Resource Management System, and Events Management projects. Participate on DHS’s CIO Council, Enterprise Architecture Board, or other councils/boards as appropriate, and appoint employees to serve when necessary. As required by DHS’s IT Integration and Management directive, the Secret Service CIO participated on two required DHS-level councils/boards, and appointed a delegate to serve in his place, when necessary. Specifically, the Secret Service CIO or the CIO’s delegate—the Deputy CIO—attended bi-monthly meetings of the DHS CIO Council. In addition, another Secret Service CIO appointee—the component’s Chief Architect—attended an ad hoc meeting of the Enterprise Architecture Board in June 2017. In addition, the Secret Service CIO had partially implemented three component-level CIO responsibilities, as follows. Manage the component IT investment portfolio, including establishing a component-level IT acquisition review process that enables component and DHS review of component acquisitions (i.e., contracts) that contain IT. As directed in DHS’s Capital Planning and Investment Control directive and guidebook, the Secret Service CIO took steps to manage the component’s IT investment portfolio, including reviewing certain contracts containing IT. For example, among our random sample of 33 IT contracts that the Secret Service awarded between October 1, 2016, and June 30, 2017, we found that the CIO or the CIO’s delegate had reviewed 31 of these contracts. However, the CIO had not established and documented a defined process for reviewing contracts containing IT, which may have contributed to why the CIO or the CIO’s delegate did not review 2 of the 33 contracts in our sample. OCIO officials were unable to explain why neither of these officials reviewed the 2 contracts, which had a combined planned total procurement value of approximately $1.75 million. In particular, one of the contracts, with a planned total procurement value of about $1,122,934, was to provide credentialing services for the 2017 Presidential Inauguration. The other contract, with a planned total procurement value of about $629,337, was to provide maintenance support for a logistics system. 
The OCIO officials acknowledged that both contracts should have been approved by one of these officials. Without establishing and documenting an IT acquisition review process that ensures that the CIO or the CIO’s delegate reviews all contracts containing IT, as appropriate, the CIO’s ability to analyze the contracts to ensure that they are a cost-effective use of resources and are aligned with the component’s missions and goals is limited. Ensure all component IT policies are in compliance and alignment with DHS IT directives and instructions. As required by DHS’s IT Integration and Management directive, the Secret Service CIO had ensured that certain component IT policies were in compliance and alignment with DHS IT directives and instructions. For example, in alignment with the department’s IT Integration and Management directive, the Secret Service’s Investment Governance for IT policy specifies that the component CIO (in conjunction with each Secret Service Office) is responsible for developing the component IT spend plan, as well as developing and maintaining an IT strategic plan. However, the Secret Service’s enterprise governance policy was not in compliance with DHS’s IT Integration and Management directive. Specifically, while the department’s policy states that the Secret Service CIO is responsible for developing and reviewing the component’s IT budget formulation and execution, the Secret Service’s enterprise governance policy does not specify this as the CIO’s responsibility. According to OCIO officials, the Secret Service CIO participates in the development and review of the IT budget formulation and execution as a member of the Executive Resources Board (the Secret Service’s highest-level governing body, which has the final decision authority and responsibility for enterprise governance), and the Secret Service Deputy CIO is a voting member of the Enterprise Governance Council (the Secret Service’s second-level governance body and advisory council to the Executive Resources Board). However, the Secret Service’s enterprise governance policy has not been updated to reflect these roles. The Secret Service did not update its enterprise governance policy to properly reflect the CIO’s and Deputy CIO’s roles on the Executive Resources Board or Enterprise Governance Council because OCIO officials were not aware that these roles were not properly documented in the component’s policy until we identified this issue during our review. Further compounding the issue of the Secret Service’s enterprise governance policy not properly reflecting the CIO’s and Deputy CIO’s roles and responsibilities on the component’s governance boards is that the Secret Service has not developed a charter for its Executive Resources Board. We have previously reported that a best practice for effective investment management is to define and document the board’s membership, roles, and responsibilities. One such way to do so is via a charter. According to Secret Service officials, the component does not have a charter for the board because, while the Secret Service has established the board pursuant to law, there is little statutory guidance on how the board must be formalized, including whether a charter is required. The officials acknowledged that development of a board charter is a best practice. They stated that, in response to our review, the component has begun efforts to develop a charter for the Executive Resources Board, but they did not know when it would be completed. 
Until the Secret Service updates its enterprise governance policy to specify (1) the CIO's current role and responsibilities on the Executive Resources Board, to include developing and reviewing the IT budget formulation and execution, and (2) the Deputy CIO's role and responsibilities on the Enterprise Governance Council, the CIO's ability to develop and review the component's IT budget may be limited. Further, until the Secret Service develops a charter for its Executive Resources Board that specifies the roles and responsibilities of all board members, including the CIO, the Secret Service will not be effectively positioned to ensure that all members understand their roles and responsibilities on the board and will perform them as expected.

Set modular outcomes and target measures to monitor the progress in achieving agile implementation for IT programs and/or projects within their component. Consistent with DHS policy, the Secret Service CIO has set modular outcomes and target measures to monitor the progress of two IITT projects that the component is implementing using an agile methodology—Uniformed Division Resource Management System and Events Management. For example, the modular outcomes set for these projects included measuring planned and actual burndown (i.e., the number of user stories completed). In addition, the projects were to measure their velocity (i.e., the rate of work completed) for each sprint (i.e., a set period of time during which the development team is expected to complete tasks related to developing a piece of working software). However, the modular outcomes and target measures did not include product quality or post-deployment user satisfaction, although such measures are leading practices for managing agile projects. According to Secret Service OCIO officials, the component does not mandate the specific metrics that its agile projects are to use; instead, each project is to determine the metrics based on stakeholder requirements and unique project characteristics. The officials further stated that these metrics are to be documented in an acquisition program baseline and program management plan; this baseline and program management plan are then to be approved by the CIO. To its credit, the component's one agile project that, as of May 2018, had deployed its system to users—the Uniformed Division Resource Management System—did measure product quality. OCIO officials also stated that they regularly receive verbal, undocumented feedback from users on the system and they plan to conduct a documented user satisfaction survey on this system by September 2018. Nevertheless, without ensuring that product quality and post-deployment user satisfaction metrics are included in the modular outcomes and target measures that the CIO sets for monitoring agile projects, the Secret Service lacks assurance that the Events Management project or other future agile projects will measure product quality or post-deployment user satisfaction. Without guidance specifying that agile projects track these metrics, the projects may not do so and the CIO may be limited in his knowledge of the progress being made on these projects.

The Secret Service Did Not Fully Implement the Majority of the Selected Leading Planning and Management Practices for Its IT Workforce

Workforce planning and management is essential for ensuring that federal agencies have the talent, skill, and experience mix they need to execute their missions and program goals.
To help agencies effectively conduct workforce planning and management, the Office of Personnel Management, the Chief Human Capital Officers Council, DHS, the Secret Service, and we have identified numerous leading practices related to five workforce areas: strategic planning, recruitment and hiring, training and development, employee morale, and performance management. Table 6 identifies the five workforce areas and 15 selected leading practices associated with these areas (3 practices within each area).

Of the five selected workforce planning and management areas, the Secret Service had substantially implemented two of the areas and minimally implemented three for its IT workforce. In addition, of the 15 selected leading practices associated with these workforce planning and management areas, the Secret Service had fully implemented 3 practices, partly implemented 8 practices, and did not implement any aspects of 4 practices. Table 7 summarizes the extent to which, as of June 2018, the Secret Service had implemented the five selected workforce planning and management areas and the 15 selected leading practices associated with those areas for its IT workforce.

The Secret Service Minimally Implemented Selected Leading IT Strategic Workforce Planning Practices

Strategic workforce planning is an essential activity that an agency needs to conduct to ensure that its human capital program aligns with its current and emerging mission and programmatic goals, and that the agency is able to meet its future needs. We previously identified numerous leading practices related to IT strategic workforce planning, including that an organization should (1) establish and maintain a strategic workforce planning process, including developing all competency and staffing needs; (2) regularly assess competency and staffing needs, and analyze the IT workforce to identify gaps in those areas; and (3) develop strategies and plans to address gaps in competencies and staffing. The Secret Service minimally implemented the three selected leading practices associated with the IT strategic workforce planning area. Specifically, the component partly implemented two of the practices and did not implement one practice. Table 8 lists these selected leading practices and provides our assessment of the Secret Service's implementation of the practices.

Establish and maintain a strategic workforce planning process, including developing all competency and staffing needs—partly implemented. The Secret Service took steps to establish a strategic workforce planning process for its IT workforce. For example, the Secret Service CIO developed and maintained a plan that identified strategic workforce planning tasks, to include analyzing the staffing requirements of the IT workforce. In addition, the Secret Service defined general core competencies (e.g., communication and customer service) for its workforce, including IT staff. However, OCIO did not identify all of the knowledge and skills needed to support this office's functions. In particular, while OCIO identified certain technical competencies that its IT workforce needs, such as cybersecurity, the office did not identify and document all of the technical competencies that it needs. OCIO officials stated that they did not identify and document the technical competencies that the office needs because the Secret Service was focused on reorganizing the IT workforce under a single, centralized reporting chain within the CIO's office.
Consequently, the officials stated that they had not completed the work to identify all IT knowledge and skills necessary to support the office. Yet the Secret Service completed the IT workforce reorganization effort over a year ago, in March 2017, and since then OCIO has not identified all of the required IT knowledge and skills that the office needs. OCIO officials told us that they plan to identify all of the technical competency needs for the IT workforce, but they were unable to specify a time frame for when these needs would be fully identified. Until OCIO identifies all of the required knowledge and skills for the IT workforce, the office will be limited in its ability to identify and address any competency gaps associated with this workforce.

In addition, the Secret Service did not reliably determine the number of IT staff that it needs in order to support OCIO's functions. Specifically, in January 2017, an independent review of the staffing model that the component used to identify its IT workforce staffing needs found that the model was not based on any verifiable underlying data. In late August 2018, Office of Human Resources officials reported that they had hired a contractor in early August 2018 to update the staffing model to improve the quality of the data. These officials expected the contractor to finish updating the model by August 2019. The officials plan to use the updated model to identify the Secret Service's IT workforce staffing needs for fiscal year 2021. Updating the staffing model to incorporate verifiable workload data should increase the likelihood that the Secret Service is able to appropriately identify its staffing needs for its IT workforce.

Regularly assess competency and staffing needs, and analyze the IT workforce to identify gaps in those areas—not implemented. The Secret Service regularly assessed the competency and staffing needs for 1 of the occupational series within its IT workforce (i.e., the 2210 IT Specialist series). However, it did not regularly assess the competency and staffing needs for the remaining 11 occupational series that are associated with the component's IT workforce, nor identify any gaps that it had in those areas. OCIO officials stated that they had not assessed these needs or identified competency or staffing gaps because, among other things, the Secret Service was focused on reorganizing the IT workforce under a single, centralized reporting chain within the CIO's office. However, as previously mentioned, the component completed this effort in March 2017, but OCIO did not subsequently assess its competency and staffing needs, nor identify gaps in those areas. OCIO officials reported that they plan to assess the competencies of the IT workforce to identify any gaps that may exist; however, they were unable to identify a specific date by which they expect to have the capacity to complete this assessment. Until OCIO regularly analyzes the IT workforce to identify its competency needs and any gaps it may have, OCIO will be limited in its ability to determine whether its IT workforce has the necessary knowledge and skills to meet its mission and goals.

Further, Office of Human Resources officials reported that they plan to update the staffing model that they use to identify their IT staffing needs to include more reliable workload data. However, as discussed earlier, the Secret Service had not yet developed that updated model to determine its IT staffing needs.
Office of Human Resources officials reported that once they update the staffing model they plan to re-evaluate the Secret Service's IT staffing needs. The officials also stated that, going forward, they plan to reassess these needs each year as part of the annual budget cycle. Regular assessments of the IT workforce's staffing needs should increase the likelihood that the Secret Service is able to appropriately identify the number of IT staff it needs to meet its mission and programmatic goals.

Develop strategies and plans to address gaps in competencies and staffing—partly implemented. The Secret Service developed recruiting and hiring strategies to address certain competency and staffing needs (e.g., cybersecurity) for its IT workforce. These strategies included, among other things, participating in DHS-wide recruiting events and using special hiring authorities. However, because OCIO did not identify all of its IT competency and staffing needs, and lacked a current analysis of its entire IT workforce, the Secret Service could not provide assurance that the recruiting and hiring strategies it developed were specifically targeted towards addressing current OCIO competency and staffing gaps. For example, without an analysis of the IT workforce's skills, OCIO did not know the extent to which it had gaps in areas such as device management and cloud computing. As a result, the Secret Service's recruiting strategies may not have been targeted to address any gaps in those areas. Until the Secret Service updates its recruiting and hiring strategies and plans to address all IT competency and staffing gaps identified (after OCIO completes its analysis of the entire IT workforce, as discussed earlier), the Secret Service will be limited in its ability to effectively recruit and hire staff to fill those gaps.

The Secret Service Minimally Implemented Selected Leading Recruitment and Hiring Practices

According to the Office of Personnel Management, the Chief Human Capital Officers Council, and our prior work, once an agency has determined the critical skills and competencies that it needs to achieve programmatic goals, and identified any competency or staffing gaps in its current workforce, the agency should be positioned to build effective recruiting and hiring programs. It is important that an agency has these programs in place to ensure that it can effectively recruit and hire employees with the appropriate skills to meet its various mission requirements. The Office of Personnel Management, the Chief Human Capital Officers Council, and we have also identified numerous leading practices associated with effective recruitment and hiring programs. Among these practices, an agency should (1) implement recruiting and hiring activities to address skill and staffing gaps by using the strategies and plans developed during the strategic workforce planning process; (2) establish and track metrics to monitor the effectiveness of the recruitment program and hiring process, including their effectiveness at addressing skill and staffing gaps, and report to agency leadership on progress addressing those gaps; and (3) adjust recruitment plans and hiring activities based on recruitment and hiring effectiveness metrics. The Secret Service minimally implemented the three selected leading practices associated with the recruitment and hiring workforce area. Specifically, the component partly implemented one of the three practices and did not implement the other two practices.
Table 9 lists these selected practices and provides our assessment of the Secret Service's implementation of the practices.

Implement recruiting and hiring activities to address skill and staffing gaps by using the strategies and plans developed during the strategic workforce planning process—partly implemented. OCIO officials implemented the activities identified in the Secret Service's recruiting and hiring plans. For example, as identified in its recruiting plan, OCIO participated in a February 2017 career fair to recruit job applicants at a technology conference. In addition, in August 2017, OCIO participated in a DHS-wide recruiting event. Secret Service officials reported that, during this event, they conducted four interviews for positions in OCIO. However, as previously discussed, OCIO did not identify all of its IT competency and staffing needs, and lacked a current analysis of its entire IT workforce. Without complete knowledge of its current IT competency and staffing gaps, the Secret Service could not provide assurance that the recruiting and hiring strategies that it had implemented fully addressed these gaps.

Establish and track metrics to monitor the effectiveness of the recruitment program and hiring process, including their effectiveness at addressing skill and staffing gaps, and report to agency leadership on progress addressing those gaps—not implemented. The Secret Service had not established and tracked metrics for monitoring the effectiveness of its recruitment and hiring activities for the IT workforce. Officials in the Office of Human Resources attributed this to staffing constraints and said their priority was to address existing staffing gaps associated with the Secret Service's law enforcement groups. In June 2018, Office of Human Resources officials stated that they plan to implement metrics to monitor the effectiveness of the hiring process for the IT workforce by October 2018. The officials also stated that they were in the process of determining (1) the metrics that are to be used to monitor the effectiveness of their workforce recruiting efforts and (2) whether they need to acquire new technology to support this effort. However, the officials did not know when they would implement the metrics for assessing the effectiveness of the recruitment activities or whether they would report the results to leadership. Until the Office of Human Resources (1) develops and tracks metrics to monitor the effectiveness of the Secret Service's recruitment activities for the IT workforce, including their effectiveness at addressing skill and staffing gaps; and (2) reports to component leadership on those metrics, the Secret Service and the Office of Human Resources will be limited in their ability to analyze the recruitment program to determine whether the program is effectively addressing IT skill and staffing gaps. Further, Secret Service leadership will lack the information necessary to make effective recruitment decisions.

Adjust recruitment plans and hiring activities based on recruitment and hiring effectiveness metrics—not implemented. While the Secret Service CIO stated in June 2018 that he planned to adjust the office's recruiting and hiring strategies to focus on entry-level staff rather than mid-career employees, this planned adjustment was not based on metrics that the Secret Service was tracking.
Instead, the CIO stated that he planned to make this change because his office determined that previous mid-career applicants were often unwilling or unable to wait for the Secret Service's lengthy, required background investigation process to be completed. However, as previously mentioned, the Secret Service did not develop and implement any metrics for assessing the effectiveness of the recruitment and hiring activities for the IT workforce. As a result, the Office of Human Resources and OCIO were not able to use such metrics to inform adjustments to their recruiting and hiring plan and activities, thus reducing their ability to target potential candidates for hiring. Until the Office of Human Resources and OCIO adjust their recruitment and hiring plans and activities as necessary, after establishing and tracking metrics for assessing the effectiveness of these activities for the IT workforce, the Secret Service will be limited in its ability to ensure that its recruiting plans and activities are appropriately targeted to potential candidates. In addition, the component will lack assurance that these plans and activities will effectively address skill and staffing gaps within its IT workforce.

The Secret Service Minimally Implemented Selected Leading Training and Development Practices

An organization should invest in training and developing its employees to help ensure that its workforce has the information, skills, and competencies that it needs to work effectively. In addition, training and development programs are an integral part of a learning environment that can enhance an organization's ability to attract and retain employees with the skills and competencies needed to achieve cost-effective and timely results. DHS, the Secret Service, and we have previously identified numerous leading training and development-related practices. Among those practices, an organization should (1) establish a training and development program to assist the agency in achieving its mission and goals; (2) use tracking and other control mechanisms to ensure that employees receive appropriate training and meet certification requirements, when applicable; and (3) collect and assess performance data (including qualitative or quantitative measures, as appropriate) to determine how the training program contributes to improved performance and results. The Secret Service minimally implemented the three selected leading practices associated with the training and development workforce area. Specifically, the component partly implemented two of the three practices and did not implement one practice. Table 10 lists these selected leading practices and provides our assessment of the Secret Service's implementation of the practices.

Establish a training and development program to assist the agency in achieving its mission and goals—partly implemented. OCIO was in the process of developing a training program for its IT workforce. For example, OCIO developed a draft training plan that identified recommended training for the office's various IT workforce groups (e.g., voice communications employees). However, the office had not defined the required training for each IT workforce group. In addition, OCIO officials had not yet determined which activities they would implement as part of the training program (e.g., soliciting employee feedback after training is completed and evaluating the effectiveness of specific training courses), nor did they implement those activities.
OCIO officials stated that they had not yet fully implemented a training program because their annual training budget for fiscal year 2018 was not sufficient to implement such a program. However, resource-constrained programs especially benefit from identifying and prioritizing training activities to inform training budget decisions. Until OCIO (1) defines the required training for each IT workforce group, (2) determines the activities that it will include in its IT workforce training and development program based on its available training budget, and (3) implements those activities, the office may be limited in its ability to ensure that IT staff have the necessary knowledge and skills for their respective positions.

Use tracking and other control mechanisms to ensure that employees receive appropriate training and meet certification requirements, when applicable—partly implemented. OCIO used a training system to track that the managers for IITT's programs had met certain certification requirements for their respective positions. In addition, OCIO manually tracked the technical training that certain IT staff took. However, as discussed earlier, OCIO did not define the required training for each IT workforce group. As such, the office was unable to ensure that IT staff received the appropriate training relevant to their respective positions. Until it ensures that IT staff complete training specific to their positions (after defining the training required for each workforce group), OCIO will have limited assurance that the workforce has the necessary knowledge and skills.

Collect and assess performance data (including qualitative or quantitative measures, as appropriate) to determine how the training program contributes to improved performance and results—not implemented. As previously discussed, OCIO did not fully implement a training program for the IT workforce; as such, the office was unable to collect and assess performance data related to such a program. OCIO officials stated that, once they fully implement a training program, they intend to collect and assess data on how this program contributes to improved performance. However, the officials were unable to specify a time frame for when they would do so. Until OCIO collects and assesses performance data (including qualitative or quantitative measures, as appropriate) to determine how the IT training program contributes to improved performance and results (once the training program is implemented), the office may be limited in its knowledge of whether the training program is contributing to improved performance and results.

The Secret Service Substantially Implemented Selected Leading Practices for Improving the Morale of Its IT Workforce, but Did Not Demonstrate Sustained Improvement

Employee morale is important to organizational performance and an organization's ability to retain talent to perform its mission. We have previously identified numerous leading practices for improving employee morale.
Among other things, we have found that an organization should (1) determine root causes of employee morale problems by analyzing employee survey results using techniques such as comparing demographic groups, benchmarking against similar organizations, and linking root cause findings to action plans, and develop and implement action plans to improve employee morale; (2) establish and track metrics of success for improving employee morale, and report to agency leadership on progress improving morale; and (3) maintain leadership support and commitment to ensure continued progress in improving employee morale, and demonstrate sustained improvement in morale. With regard to its IT workforce, the Secret Service substantially implemented the three selected practices associated with the employee morale workforce area. Specifically, the component fully implemented two of the selected practices and partly implemented one practice. Table 11 lists these selected practices and provides our assessment of the Secret Service's implementation of the practices.

Determine root causes of employee morale problems by analyzing employee survey results using techniques such as comparing demographic groups, benchmarking against similar organizations, and linking root cause findings to action plans. Develop and implement action plans to improve employee morale—fully implemented. The Secret Service used survey analysis techniques to determine the root causes of its low employee morale, on which we have previously reported. For example, the component conducted a benchmarking exercise where it compared the morale of the Secret Service's employees, including IT staff, to data on the morale of employees at other agencies, including the U.S. Capitol Police, U.S. Coast Guard, and the Drug Enforcement Administration. As part of this exercise, the Secret Service also compared its employee work-life offerings (e.g., on-site childcare and telework program) to those available at other agencies. In addition, the Secret Service developed and implemented action plans for improving employee morale. Among these action plans, for example, the component implemented a student loan repayment program and expanded its tuition assistance program's eligibility requirements.

Establish and track metrics of success for improving employee morale, and report to agency leadership on progress improving morale—fully implemented. The Secret Service tracked metrics for improving employee morale and reported the results to leadership. For example, the component tracked metrics on the percentage of the workforce, including IT staff, that participated in the student loan repayment and tuition assistance programs. In addition, the Chief Strategy Officer reported to the Chief Operating Officer the results related to meeting those metrics.

Maintain leadership support and commitment to ensure continued progress in improving employee morale, and demonstrate sustained improvement in morale—partly implemented. Secret Service leadership developed and implemented initiatives that demonstrated their commitment to improving the morale of the Secret Service's workforce. For example, since 2014, the Secret Service had worked with a contractor to identify ways to improve the morale of its entire workforce, including IT staff. However, as of June 2018, the Secret Service was unable to demonstrate that it had sustained improvement in the morale of the component's IT staff.
In particular, the component was only able to provide IT workforce-specific results from one employee morale assessment that was conducted subsequent to the consolidation of this workforce into OCIO in March 2017. These results were from an assessment conducted by the component's Inspection Division in December 2017 (the assessment found that the majority of the Secret Service's IT employees rated their morale as "very good" or "excellent"). While the component also provided certain employee morale results from the Office of Personnel Management's Federal Employee Viewpoint Survey in 2017, these results were not specific to the IT workforce. Instead, this workforce's results were combined with those from staff in another Secret Service division. According to OCIO officials, the results were combined because, at the time of the survey, the IT workforce was administratively identified as being part of that other division. OCIO officials stated that, going forward, they plan to continue to assess the morale of the IT workforce on an annual basis as part of the Federal Employee Viewpoint Survey. In addition, the officials stated that OCIO-specific results may be available as part of the 2018 survey results, which the officials expect to receive by September 2018. By measuring employee satisfaction on an annual basis, the Secret Service should have increased knowledge of whether its initiatives aimed at improving employee morale are in fact increasing employee satisfaction.

The Secret Service Substantially Implemented Selected Performance Management Leading Practices, but Did Not Explicitly Align Expectations with Organizational Goals

Agencies can use performance management systems as a tool to foster a results-oriented organizational culture that links individual performance to organizational goals. We have previously identified numerous leading practices related to performance management that are intended to enhance performance and ensure individual accountability. Among the performance management practices, agencies should (1) establish a performance management system that differentiates levels of staff performance and defines competencies in order to provide a fuller assessment of performance, (2) explicitly align individual performance expectations with organizational goals to help individuals see the connection between their daily activities and organizational goals, and (3) periodically provide individuals with regular performance feedback. The Secret Service substantially implemented the three selected leading practices associated with the performance management workforce area. Specifically, the component fully implemented one of the three practices and partly implemented the other two practices. Table 12 lists these selected leading practices and provides our assessment of the Secret Service's implementation of the practices.

Establish a performance management system that differentiates levels of staff performance and defines competencies in order to provide a fuller assessment of performance—partly implemented. The Secret Service's performance management process requires leadership to make meaningful distinctions between levels of staff performance.
In particular, the component’s performance plans for IT staff, which are developed by the Office of Human Resources and tailored by OCIO, as necessary, specify the criteria that leadership use to determine if an individual has met or exceeded the expectations associated with each competency identified in their respective performance plan. The performance plans include pre-established, department-wide competencies that are set by DHS, as well as occupational series-specific goals that may be updated by the Secret Service. However, because OCIO did not fully define and document all of its technical competency needs for the IT workforce, as discussed earlier, the Secret Service’s performance plans for IT staff did not include performance expectations related to the full set of technical competencies required for their respective positions. In addition, because OCIO officials were unable to specify a time frame for when they will identify all of the technical competency needs for the IT workforce (as previously discussed), the officials were also unable to specify a time frame for when they would update the IT workforce’s performance plans to include those relevant technical competencies. Until OCIO updates the performance plans for each occupational series within the IT workforce to include the relevant technical competencies, once identified, against which IT staff performance should be assessed, the office will be limited in its ability to provide IT staff with a complete assessment of their performance. In addition, Secret Service management will have limited knowledge of the extent to which IT staff are meeting all relevant technical competencies. Explicitly align individual performance expectations with organizational goals to help individuals see the connection between their daily activities and organizational goals—partly implemented. The Secret Service’s performance plans for IT staff identified certain goals that appeared to be related to organizational goals and objectives. For example, the performance plan for the Telecommunications Specialist occupational series (which is one of the series included in OCIO’s IT workforce) identified a goal for staff to support the voice, wireless, radio, satellite, and video systems serving the Secret Service’s protective and investigative mission. This performance plan goal appeared to be related to the component’s strategic goal on Advanced Technology, which included an objective to create the infrastructure needed to fulfill mission responsibilities. However, the Secret Service was unable to provide documentation that explicitly showed how individual employee performance links to organizational goals, such as a mapping of the goals identified in employee performance plans to organizational goals. Specifically, while Office of Human Resources officials stated that each Secret Service directorate is responsible for ensuring that employee goals map to high-level organizational goals, OCIO officials stated that they did not complete this mapping. The officials were unable to explain why they did not align the goals in their employees’ performance plans to the component’s high-level goals. According to the officials, the Secret Service is in the process of implementing a new automated tool that will require each office to explicitly align individual performance expectations to organizational goals. The officials stated that OCIO plans to use this tool to create employees’ fiscal year 2019 performance plans. 
By explicitly demonstrating how individual performance expectations align with organizational goals, the Secret Service's IT staff should have a better understanding of how their daily activities contribute towards achieving the Secret Service's goals.

Periodically provide individuals with regular performance feedback—fully implemented. Secret Service leadership periodically provided their IT staff with performance feedback. Specifically, OCIO staff received feedback each year during mid-year and end-of-year performance assessments. In our prior work, we have stressed that candid and constructive feedback can help individuals maximize their contribution and potential for understanding and realizing the goals and objectives of an organization. Further, this feedback is one of the strongest drivers of employee engagement.

The Secret Service and DHS Implemented Selected Leading Monitoring Practices for the IITT Investment

According to leading practices of the Software Engineering Institute, effective program oversight includes monitoring program performance and conducting reviews at predetermined checkpoints or milestones. This is done by, among other things, comparing actual cost, schedule, and performance data with estimates in the program plan and identifying significant deviations from established targets or thresholds for acceptable performance levels. In addition, the Software Engineering Institute previously identified leading practices for effectively monitoring the performance of agile projects. According to the Institute, agile development methods focus on delivering usable, working software frequently; as such, it is important to measure the value delivered during each iteration of these projects. To that end, the Institute reported that agile projects should be measured on velocity (i.e., number of story points completed per sprint or release), development progression (e.g., the number of user stories planned and accepted), product quality (e.g., number of defects), and post-deployment user satisfaction.

DHS and the Secret Service had fully implemented the selected leading practice for monitoring the performance of one program and three projects within the IITT investment, and conducting reviews of this program and these projects at predetermined checkpoints. In addition, with regard to the selected leading practice for monitoring agile projects, the Secret Service had fully implemented this practice for one of its two projects being implemented using an agile methodology and had partially implemented this practice for the other project. Table 13 provides a summary of DHS's and the Secret Service's implementation of these leading practices, as relevant for one program and three projects within IITT.

Monitor program performance and conduct reviews at predetermined checkpoints or milestones. Consistent with leading practices, DHS and the Secret Service monitored the performance of IITT's program and projects by comparing actual cost, schedule, and performance information against planned targets and conducting reviews at predetermined checkpoints. For example, within the Secret Service:

The Enabling Capabilities program and Multi-Level Security project monitored their contractors' costs spent to date on a monthly basis and compared them to the total contract amounts.

OCIO used integrated master schedules to monitor the schedule performance of the Enabling Capabilities program and Multi-Level Security project.
OCIO also monitored the cost, schedule, and performance of the Uniformed Division Resource Management System and Events Management projects during monthly status reviews.

In addition, DHS and the Secret Service conducted acquisition decision event reviews and systems engineering life cycle technical reviews of IITT's program and projects at predetermined checkpoints and, when applicable, identified deviations from established cost, schedule, and performance targets. For example:

Secret Service OCIO met with DHS's Office of Program Accountability and Risk Management in February 2017, and with DHS's Acting Under Secretary for Management in June 2017, to discuss a schedule breach for the Enabling Capabilities program. In particular, the Enabling Capabilities program informed DHS that the program needed to change the planned date for acquisition decision event 3 (the point at which a decision is made to fully deploy the system) in order to conduct tests in an operational environment prior to that decision event. This delay was due to the Secret Service's misunderstanding of the tests that it was required to conduct prior to that decision event. Specifically, the Enabling Capabilities program had conducted tests on "production representative" systems, but these tests were not sufficient to meet the requirements for acquisition decision event 3.

The project team for Multi-Level Security identified that certain technical issues they had experienced would delay system deployment and full operational capability (the point at which an investment becomes fully operational). As such, in October 2017, the project notified the Secret Service Component Acquisition Executive of these expected delays. In particular, the web browser that was intended to provide users on "Sensitive But Unclassified" workstations the ability to view information from different security levels experienced technical delays in meeting personal identity verification requirements. The project team also described for the executive how the schedule delay would affect the project's performance metrics and funding, and subsequently updated the project plan accordingly.

Measure and monitor agile projects on, among other things, velocity (i.e., number of story points completed per sprint or release), development progression (e.g., the number of features and user stories planned and accepted), product quality (e.g., number of defects), and post-deployment user satisfaction. Secret Service OCIO measured its two agile projects—Uniformed Division Resource Management System and Events Management—using certain agile metrics. In particular, OCIO officials measured the Uniformed Division Resource Management System and Events Management projects using key metrics related to velocity and development progression. For example, the officials measured development progression for both projects on a daily basis. In addition, OCIO officials monitored each project's progress against these metrics during bi-weekly reviews that they conducted with each project team. The OCIO officials also tracked product quality metrics for the Uniformed Division Resource Management System. For example, on a monthly basis, the officials tracked the number of helpdesk tickets related to the system that had been resolved. In addition, on a quarterly basis, they tracked the number of Uniformed Division Resource Management System defects that (1) had been fixed and (2) were in the backlog.
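To make these agile measures concrete, the following is a minimal illustrative sketch in Python. The sprint records, project data, and field names are hypothetical rather than drawn from Secret Service tracking systems; the sketch simply shows the kind of arithmetic that velocity, development progression, and product quality metrics involve.

from statistics import mean

# Hypothetical sprint records for a notional agile project; real data
# would come from the project's own tracking tool.
sprints = [
    {"sprint": 1, "points_completed": 21, "stories_planned": 10, "stories_accepted": 8, "open_defects": 5},
    {"sprint": 2, "points_completed": 34, "stories_planned": 12, "stories_accepted": 11, "open_defects": 3},
    {"sprint": 3, "points_completed": 29, "stories_planned": 11, "stories_accepted": 10, "open_defects": 4},
]

# Velocity: story points completed per sprint, plus the running average
# an oversight review might compare against a planned target.
average_velocity = mean(s["points_completed"] for s in sprints)

# Development progression: planned versus accepted user stories,
# expressed as an acceptance rate; open defects serve as a simple
# product quality indicator.
for s in sprints:
    acceptance_rate = s["stories_accepted"] / s["stories_planned"]
    print(f"Sprint {s['sprint']}: velocity = {s['points_completed']} points, "
          f"acceptance rate = {acceptance_rate:.0%}, "
          f"open defects = {s['open_defects']}")

print(f"Average velocity: {average_velocity:.1f} points per sprint")

Sprint-level records of this kind support velocity, progression, and defect tracking, but they do not by themselves capture post-deployment user satisfaction, which generally requires a separate instrument such as a user survey.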
However, while OCIO officials received certain post-deployment user satisfaction information from end-users of the Uniformed Division Resource Management System by, among other things, tracking the number of helpdesk tickets related to the system and via daily verbal, undocumented feedback from certain Uniformed Division officers, OCIO officials had not fully measured and documented post-deployment user satisfaction with the system, such as via a survey of employees who use the system. The officials stated that they had not conducted and documented a survey because they were focused on (1) addressing software performance issues that occurred after they deployed the system to a limited number of users, and (2) continuing system deployment to the remaining users after they addressed the performance issues. OCIO officials stated that they plan to conduct such a documented survey by the end of September 2018. The results of the user satisfaction survey should provide OCIO with important information on whether the Uniformed Division Resource Management System is meeting users' needs.

Conclusions

The Secret Service's full implementation of 11 of 14 component-level CIO responsibilities constitutes a significant effort to establish CIO oversight for the component's IT portfolio. Additional efforts to fully implement the remaining 3 responsibilities, including ensuring that all IT contracts are reviewed, as appropriate; ensuring that the Secret Service's enterprise governance policy appropriately specifies the CIO's role in developing and reviewing the component's IT budget formulation and execution; and ensuring agile projects measure product quality and post-deployment user satisfaction, will further position the CIO to effectively manage the Secret Service's IT portfolio.

When effectively implemented, IT workforce planning and management activities can facilitate the successful accomplishment of an agency's mission. However, the Secret Service had not fully implemented all of the 15 selected practices for its IT workforce in any of the five areas—strategic planning, recruitment and hiring, training and development, employee morale, and performance management. The Secret Service's lack of (1) a strategic workforce planning process, including the identification of all required knowledge and skills, assessment of competency gaps, and targeted strategies to address specific gaps in competencies and staffing; (2) targeted recruiting activities, including metrics to monitor the effectiveness of the recruitment program and adjustment of the recruitment program and hiring efforts based on metrics; (3) a training program, including the identification of required training for IT staff, ensuring that staff take required training, and assessment of performance data regarding the training program; and (4) a performance management system that includes all relevant technical competencies, greatly limits its ability to ensure the timely and effective acquisition and maintenance of the Secret Service's IT infrastructure and services.

On the other hand, by monitoring program performance and conducting reviews at predetermined checkpoints for one program and three projects associated with the IITT investment, in accordance with leading practices, the Secret Service and DHS provided important oversight needed to guide that program and those projects. Measuring projects on leading agile metrics also provided the Secret Service CIO with important information on project performance.
Recommendations for Executive Action

We are making the following 13 recommendations to the Director of the Secret Service:

The Director should ensure that the CIO establishes and documents an IT acquisition review process that ensures the CIO or the CIO's delegate reviews all contracts containing IT, as appropriate. (Recommendation 1)

The Director should update the enterprise governance policy to specify (1) the CIO's current role and responsibilities on the Executive Resources Board, to include developing and reviewing the IT budget formulation and execution; and (2) the Deputy CIO's role and responsibilities on the Enterprise Governance Council. (Recommendation 2)

The Director should ensure that the Secret Service develops a charter for its Executive Resources Board that specifies the roles and responsibilities of all board members, including the CIO. (Recommendation 3)

The Director should ensure that the CIO includes product quality and post-deployment user satisfaction metrics in the modular outcomes and target measures that the CIO sets for monitoring agile projects. (Recommendation 4)

The Director should ensure that the CIO identifies all of the required knowledge and skills for the IT workforce. (Recommendation 5)

The Director should ensure that the CIO regularly analyzes the IT workforce to identify its competency needs and any gaps it may have. (Recommendation 6)

The Director should ensure that, after OCIO completes an analysis of the IT workforce to identify any competency and staffing gaps it may have, the Secret Service updates its recruiting and hiring strategies and plans to address those gaps, as necessary. (Recommendation 7)

The Director should ensure that the Office of Human Resources (1) develops and tracks metrics to monitor the effectiveness of the Secret Service's recruitment activities for the IT workforce, including their effectiveness at addressing skill and staffing gaps; and (2) reports to component leadership on those metrics. (Recommendation 8)

The Director should ensure that the Office of Human Resources and OCIO adjust their recruitment and hiring plans and activities, as necessary, after establishing and tracking metrics for assessing the effectiveness of these activities for the IT workforce. (Recommendation 9)

The Director should ensure that the CIO (1) defines the required training for each IT workforce group, (2) determines the activities that OCIO will include in its IT workforce training and development program based on its available training budget, and (3) implements those activities. (Recommendation 10)

The Director should ensure that the CIO ensures that the IT workforce completes training specific to their positions (after defining the training required for each workforce group). (Recommendation 11)

The Director should ensure that the CIO collects and assesses performance data (including qualitative or quantitative measures, as appropriate) to determine how the IT training program contributes to improved performance and results (once the training program is implemented). (Recommendation 12)

The Director should ensure that the CIO updates the performance plans for each occupational series within the IT workforce to include the relevant technical competencies, once identified, against which IT staff performance should be assessed. (Recommendation 13)

Agency Comments and Our Evaluation

DHS provided written comments on a draft of this report, which are reprinted in appendix III.
In its comments, the department concurred with all 13 of our recommendations and provided estimated completion dates for implementing each of them. For example, with regard to recommendation 2, the department stated that the Secret Service would update its enterprise governance policy and related policies to outline the roles and responsibilities of the CIO and Deputy CIO, among others, by March 31, 2019. In addition, for recommendation 13, the department stated that the Secret Service OCIO will include relevant technical competencies in performance plans, as appropriate, in the next performance cycle that starts in July 2019. If implemented effectively, these actions should address the weaknesses we identified.

The department also identified a number of other actions that it said had been taken to address our recommendations. For example, in response to recommendation 8, which calls for the Office of Human Resources to (1) develop and track metrics to monitor the effectiveness of the Secret Service's recruitment activities for the IT workforce and (2) report to component leadership on those metrics, DHS stated that the Secret Service's Office of Human Resources' Outreach Branch provides to the department metrics on recruitment efforts toward designated priority mission-critical occupations. However, for fiscal year 2017, only 1 of the 12 occupational series associated with the Secret Service's IT workforce was designated as a mission-critical occupation for the component (i.e., the 2210 IT Specialist series). The 11 other occupational series were not designated as mission-critical occupations. In addition, for fiscal year 2018, none of these 12 occupational series were designated as mission-critical occupations. As such, metrics on recruiting for these IT series may not have been reported to DHS leadership. Moreover, while we requested documentation of the recruiting metrics for the Secret Service's IT workforce and, during the course of our review, had multiple subsequent discussions with the Secret Service regarding such metrics, the component did not provide documentation that demonstrated it had established recruiting metrics for its IT workforce. Tracking such metrics and reporting the results to Secret Service leadership, as we recommended, would provide management with important information necessary to make effective recruitment decisions.

Further, in response to recommendation 10, which among other things, calls for the CIO to define the required training for each IT workforce group, the department stated that the Secret Service OCIO recently developed training requirements for each workforce group, which were issued during our audit. However, while during our audit OCIO provided a list of recommended training courses, the office did not identify them as being required courses. Defining training that is required for each IT workforce group, as we recommended, would inform OCIO of the necessary training for each position and enable the office to prioritize this training, to ensure that its staff have the needed knowledge and skills.

In addition to the aforementioned comments, we received technical comments from DHS and Secret Service officials, which we incorporated, as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Director of the Secret Service, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov.
Should you or your staffs have any questions on information discussed in this report, please contact me at (202) 512-4456 or HarrisCC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to evaluate the extent to which: (1) the U.S. Secret Service (Secret Service) Chief Information Officer (CIO) has implemented selected information technology (IT) oversight responsibilities, (2) the Secret Service has implemented leading workforce planning and management practices for its IT workforce, and (3) the Secret Service and the Department of Homeland Security (DHS) have implemented selected performance and progress monitoring practices for the Information Integration and Technology Transformation (IITT) investment.

To address the first objective, we analyzed DHS's policies and guidance on IT management to identify the responsibilities that were to be implemented by the component-level CIO related to overseeing the Secret Service's IT portfolio, including existing systems, acquisitions, and investments. From the list of 33 responsibilities that we identified, we then excluded the responsibility that was associated with information security, which is expected to be addressed as part of a separate, subsequent GAO review. We also excluded those responsibilities that were significantly large in scope (e.g., implement an enterprise architecture) or that, in our professional judgment, lacked specificity (e.g., provide timely delivery of mission IT services). As a result, we excluded from consideration for this review a total of 10 CIO responsibilities. For the 23 that remained, we then combined certain responsibilities that overlapped with other related responsibilities. For example, we combined related responsibilities on the component CIO's review of IT contracts. As a result, we identified 14 responsibilities for review. We validated with the acting DHS CIO that these responsibilities were key responsibilities for the department's component-level CIOs. We then included all 14 of the responsibilities in our review.

The 14 selected component-level CIO responsibilities were:

1. Develop and review the component IT budget formulation and execution.
2. Manage the component IT investment portfolio, including establishing an IT acquisition review process that enables component and DHS review of component acquisitions (i.e., contracts) that contain IT.
3. Develop, implement, and maintain a detailed IT strategic plan.
4. Ensure all component IT policies are in compliance and alignment with DHS IT directives and instructions.
5. Concur with each program's and/or project's systems engineering life cycle tailoring plan.
6. Support the Component Acquisition Executive to ensure processes are established that enable systems engineering life cycle technical reviews and that they are adhered to by programs and/or projects.
7. Ensure that all systems engineering life cycle technical review exit criteria are satisfied for each of the component's IT programs and/or projects.
8. Ensure the necessary systems engineering life cycle activities have been satisfactorily completed as planned for each of the component's IT programs and/or projects.
9. Concur with the systems engineering life cycle technical review completion letter for each of the component's IT programs and/or projects.
10. Maintain oversight of their component's agile development approach for IT by appointing the responsible personnel, identifying investments for adoption, and reviewing artifacts.
11. With Component Acquisition Executives, evaluate and approve the application of agile development for IT programs consistent with the component's agile development approach.
12. Set modular outcomes and target measures to monitor the progress in achieving agile implementation for IT programs and/or projects within their component.
13. Participate on DHS's CIO Council, Enterprise Architecture Board, or other councils/boards as appropriate, and appoint employees to serve when necessary.
14. Meet the IT competency requirements established by the DHS CIO, as required in the component CIO's performance plan.

To determine the extent to which the Secret Service CIO has implemented these responsibilities, we obtained and assessed relevant component documentation and compared it to the responsibilities. Specifically, we obtained and analyzed documentation including evidence of the CIO's participation on the Secret Service governance board that has final decision authority and responsibility for enterprise governance, including the IT budget; monthly program management reports showing the CIO's oversight of IT programs, projects, and systems; monthly status reports on program spending; the Secret Service's IT strategic plan; the Secret Service's enterprise governance policy; meeting minutes from the DHS board and council on which the CIO participated (i.e., the CIO Council and Enterprise Architecture Board); and documentation demonstrating whether the CIO met the IT competency requirements.

In addition, we obtained and analyzed relevant documentation related to the CIO's oversight of the major IT investments on which the Secret Service was spending development, modernization, and enhancement funds during fiscal year 2017. As of July 2017, the component had one investment—IITT—that met this criterion. IITT is a portfolio investment that, as of July 2017, included two programs (one of which included three projects) and one standalone project (i.e., it was not part of another program) that had capabilities that were in planning or development and modernization: the Enabling Capabilities program, Enterprise Resource Management System program (which included three projects, called Uniformed Division Resource Management System, Events Management, and Enterprise-wide Scheduling), and Multi-Level Security project. In particular, we obtained and analyzed documentation related to the CIO's oversight of the systems engineering life cycles for IITT's Enabling Capabilities program and the Uniformed Division Resource Management System, Events Management, and Multi-Level Security projects. This documentation included acquisition program baselines, systems engineering life cycle tailoring plans, and systems engineering life cycle technical review briefings and completion letters. We then compared the documentation against the five selected systems engineering life cycle oversight responsibilities (responsibilities 5, 6, 7, 8, and 9).

We also obtained and analyzed documentation related to the CIO's oversight of two projects that the Secret Service was implementing using an agile methodology—Uniformed Division Resource Management System and Events Management.
Specifically, we obtained and assessed documentation of (1) the CIO’s approval for these projects to be implemented using an agile methodology and (2) the agile development metrics that the CIO established for each of these projects. We then compared this documentation to the three agile development-related component-level CIO responsibilities (responsibilities 10, 11, and 12). Further, to determine the extent to which the Secret Service CIO had established an IT acquisition (i.e., contract) review process that enabled component and DHS review of component contracts that contain IT (which is part of responsibility 2), we first asked Secret Service officials to provide us with a list of all new, unclassified IT contracts that the component awarded between October 1, 2016, and June 30, 2017. The Secret Service officials provided a list of 54 contracts. We validated that these were contracts for IT or IT services by: (1) searching for them in the Federal Procurement Data System – Next Generation; (2) identifying their associated product or service codes, as reported in that system; and (3) determining whether those codes were included in the universe of 79 IT product or service codes identified by the Category Management Leadership Council. In validating the list of 54 contracts provided by the Secret Service, we determined that 5 of the contracts were not associated with an IT product or service code. As such, we removed those contracts from the list. In addition, we found that three other items identified by the component were not in the Federal Procurement Data System – Next Generation. Secret Service officials subsequently confirmed that these three items were not contracts. We therefore removed these three items from the list. As such, the final list of validated contracts identified by the Secret Service included 46 IT contracts. In addition, to identify any IT contracts that were not included in the list provided by the Secret Service, we conducted a search of the Federal Procurement Data System – Next Generation to identify all unclassified contracts that (1) the component awarded between October 1, 2016, and June 30, 2017; (2) were not a modification of a contract; and (3) were associated with 1 of the 79 IT product or service codes identified by the Category Management Leadership Council. Based on these criteria, we identified 144 Secret Service IT contracts in the Federal Procurement Data System – Next Generation (these 144 contracts included the 46 contracts previously identified by Secret Service officials). We then asked Secret Service officials to validate the accuracy, completeness, and reliability of these data, which they did. From each of these two lists of IT contracts (i.e., the list of 46 IT contracts identified by the Secret Service and the list of 144 IT contracts that we identified from the Federal Procurement Data System – Next Generation), we then selected random, non-generalizable samples of contracts, as described below. First, from the list of 46 IT contracts identified by Secret Service officials, we removed 4 contracts that had total values of less than $10,000. To ensure that we selected across all contract sizes, we randomly selected 12 contracts from the remaining list of 42 contracts, using the following cost ranges: $10,000 to $50,000 (4 contracts), more than $50,000 to less than $250,000 (4 contracts), and more than $250,000 (4 contracts). 
Second, from our list of 144 IT contracts that we identified from the Federal Procurement Data System – Next Generation, we removed the 46 contracts identified by Secret Service officials. We also removed 12 contracts that had total values of less than $10,000. To ensure that we selected across all contract sizes, we randomly selected 21 contracts from the remaining list of 86 contracts, using the following cost ranges: $10,000 to $50,000 (7 contracts), more than $50,000 to less than $250,000 (7 contracts), and more than $250,000 (7 contracts). In total, we selected 33 IT contracts for review. We separated the contracts into the three cost ranges identified above in order to ensure that contracts of different value levels had been selected. This enabled us to determine the extent to which the CIO appropriately reviewed contracts of all values. To determine the extent to which the CIO had established an IT contract approval process that enabled the Secret Service and DHS, as appropriate, to review IT contracts, we first asked Secret Service Office of the CIO (OCIO) officials for documentation of their IT contract approval process. These officials were unable to provide such documentation. Instead, the officials stated that the Secret Service CIO or the CIO’s delegate approves all IT contracts prior to award. The officials also provided documentation that identified four staff to whom the CIO had delegated his approval authority. Further, the officials stated that, in accordance with DHS’s October 2016 IT acquisition review guidance, they submitted to DHS OCIO for approval any IT contracts that met DHS’s thresholds for review, including those that (1) had total estimated procurement values of $2.5 million or more, and (2) were associated with a major investment. Based on the IT acquisition review process that Secret Service OCIO officials described, we then obtained and analyzed each of the 33 selected IT contracts and associated approval documentation to determine whether or not the Secret Service CIO or the CIO’s delegate had approved each of the contracts. In particular, we (1) reviewed the name of the contract approver on the approval documentation, and (2) compared the signature dates that were on the contracts to the signature dates that were identified on the associated approval documentation. In addition, to determine whether or not the Secret Service CIO submitted to DHS OCIO for approval the IT contracts that (1) had total estimated procurement values of $2.5 million or more, and (2) were associated with major investments, we first analyzed the 144 Secret Service IT contracts that we had previously pulled from the Federal Procurement Data System – Next Generation to determine which contracts met the $2.5 million threshold. We identified 4 contracts that met this threshold. We then requested that OCIO identify the levels (i.e., major or non-major) of the investments associated with these contracts. According to OCIO officials, 3 of the 4 contracts were associated with non-major investments and 1 was not associated with an investment. As such, based on DHS’s October 2016 IT acquisition review guidance, none of these contracts needed to be submitted to DHS OCIO for review. We also interviewed Secret Service officials, including the CIO and Deputy CIO, regarding the CIO’s implementation of the 14 selected component-level responsibilities. We assessed the evidence against the selected responsibilities to determine the extent to which the CIO had implemented them. 
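The two-stage contract selection described earlier is, in essence, a stratified random sample drawn from each cost range. The following is a minimal sketch in Python of that selection logic, using fabricated contract records; the identifiers, dollar values, and record layout are illustrative assumptions, not Secret Service or FPDS-NG data.

import random

# Fabricated contract records: (contract identifier, total dollar value).
contracts = [("contract-%03d" % i, random.uniform(5_000, 3_000_000)) for i in range(144)]

def stratified_sample(records, per_stratum, seed=1):
    # Randomly select the same number of contracts from each cost range,
    # mirroring the selection approach described in the methodology.
    rng = random.Random(seed)
    eligible = [r for r in records if r[1] >= 10_000]  # drop values under $10,000
    strata = [
        [r for r in eligible if r[1] <= 50_000],           # $10,000 to $50,000
        [r for r in eligible if 50_000 < r[1] < 250_000],  # more than $50,000 to less than $250,000
        [r for r in eligible if r[1] >= 250_000],          # $250,000 and above
    ]
    sample = []
    for stratum in strata:
        sample.extend(rng.sample(stratum, min(per_stratum, len(stratum))))
    return sample

# 7 contracts from each of the three cost ranges, or 21 in total.
print(len(stratified_sample(contracts, per_stratum=7)))

Selecting a fixed number from each stratum, rather than sampling the list as a whole, is what ensures that contracts of all value levels appear in the sample.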
To address the second objective—determining the extent to which the Secret Service had implemented leading workforce planning and management practices for its IT workforce—we first identified seven topic areas associated with human capital management based on the following sources: The Office of Personnel Management’s Human Capital Framework. Office of Personnel Management and the Chief Human Capital Officers Council Subcommittee for Hiring and Succession Planning, End-to-End Hiring Initiative. GAO, High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO, IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps. GAO, Department of Homeland Security: Taking Further Action to Better Determine Causes of Morale Problems Would Assist in Targeting Action Plans. GAO, Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO, Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. DHS acquisition guidance. Secret Service acquisition guidance. Among these topic areas, we then selected five areas that, in our professional judgment, were of particular importance to successful workforce planning and management. They were also previously identified as part of our high-risk and key issues work on human capital management. These areas are: (1) strategic planning, (2) recruitment and hiring, (3) training and development, (4) employee morale, and (5) performance management. We also reviewed these same sources and identified numerous leading practices associated with the five topic areas. Among these leading practices, we then selected three leading practices within each of the five areas (for a total of 15 selected practices). The selected practices were foundational practices that, in our professional judgment, were of particular importance to successful workforce planning and management. Table 14 identifies the five selected workforce areas and 15 selected associated practices. To determine the extent to which the Secret Service had implemented the selected leading workforce planning and management practices for its IT workforce, we obtained and assessed documentation and compared it against the 15 selected practices. In particular, we analyzed the Secret Service’s human capital strategic plan, human capital staffing plan, IT strategic plan, documentation of the component’s staffing model that it used to determine the number of IT staff needed, an independent verification and validation report on the component’s staffing models, documentation of the current number of IT staff, the Secret Service’s recruitment and outreach plans, documentation of DHS’s hiring authorities (which are applicable to the Secret Service), the Secret Service’s training strategic plan, IT workforce training plan, action plans for improving employee morale, and templates used for measuring and reporting employee performance. We also interviewed Secret Service officials—including the CIO, Deputy CIO, and workforce planning staff—about the component’s workforce-related policies and documentation. Further, we discussed with the officials the Secret Service’s efforts to implement the selected workforce practices for its IT workforce.
Regarding our assessments of the Secret Service’s implementation of the 15 selected workforce planning and management practices, we assessed a practice as being fully implemented if component officials provided supporting documentation that demonstrated all aspects of the practice. We assessed a practice as not implemented if the officials did not provide any supporting documentation for that practice, or if the documentation provided did not demonstrate any aspect of the practice. We assessed a practice as being partly implemented if the officials provided supporting documentation that demonstrated some, but not all, aspects of the selected practice. In addition, related to our assessments of the Secret Service’s implementation of the five selected overall workforce areas, we assessed each area as follows, based on the implementation of the three selected practices within each area: Fully implemented: The Secret Service provided evidence that it had fully implemented all three of the selected practices within the workforce area; Substantially implemented: The Secret Service provided evidence that it had either fully implemented two selected practices and partly implemented the remaining one selected practice within the workforce area, or fully implemented one selected practice and partly implemented the remaining two selected practices within the workforce area; Partially implemented: The Secret Service provided evidence that it had partly implemented each of the three selected practices within the workforce area; Minimally implemented: The Secret Service provided evidence that it had partly implemented two selected practices and had not implemented the remaining one selected practice within the workforce area, or had partly implemented one selected practice and had not implemented the remaining two selected practices within the workforce area; or Not implemented: The Secret Service did not provide evidence that it had implemented any of the three selected practices within the workforce area.
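This rating scheme is, in effect, a mapping from three per-practice assessments to a single area-level rating. The following is a minimal sketch in Python of that mapping; the text encoding of the three assessment levels is an illustrative assumption.

def rate_area(practices):
    # Map three per-practice assessments ("full", "part", "none")
    # to an overall rating for a workforce area.
    full = practices.count("full")
    part = practices.count("part")
    none = practices.count("none")
    assert full + part + none == 3
    if full == 3:
        return "Fully implemented"
    if (full, part) in [(2, 1), (1, 2)]:
        return "Substantially implemented"
    if part == 3:
        return "Partially implemented"
    if (part, none) in [(2, 1), (1, 2)]:
        return "Minimally implemented"
    if none == 3:
        return "Not implemented"
    # Combinations the stated scheme does not describe, such as two
    # fully implemented practices and one not implemented.
    return "Not classified by the stated scheme"

print(rate_area(["full", "part", "part"]))  # Substantially implemented

As the fallback branch suggests, the scheme as stated does not classify certain combinations, such as two fully implemented practices and one not implemented.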
Based on our review, and in consultation with an internal expert, we selected four agile metrics that the Institute identified as important for successful agile implementations and that, in our professional judgment, were of most significance to monitoring the performance of IITT’s agile projects. We then combined these four metrics into one practice, as follows: Measure and monitor agile projects on velocity (i.e., number of story points completed per sprint or release), development progression (e.g., the number of features and user stories planned and accepted), product quality (e.g., number of defects), and post-deployment user satisfaction. To determine the extent to which DHS and the Secret Service had implemented the first selected practice, we analyzed relevant program management and governance documentation for IITT’s Enabling Capabilities program, and Multi-Level Security, Uniformed Division Resource Management System, and Events Management projects. In particular, we analyzed acquisition program baselines, DHS acquisition decision event memorandums, artifacts from DHS and Secret Service program oversight reviews, cost monitoring reports, program integrated master schedules, and program status briefings, and compared this documentation to the selected practice. We also interviewed Secret Service OCIO officials regarding the Secret Service’s and DHS’s efforts to monitor the IITT investment’s performance and progress. To determine the extent to which the Secret Service had implemented the second selected practice related to measuring and monitoring agile projects on agile metrics (i.e., velocity, development progression, product quality, and post-deployment user satisfaction), we obtained and analyzed agile-related documentation for the two projects that the Secret Service was implementing using an agile methodology—Uniformed Division Resource Management System and Events Management. Specifically, to determine the extent to which the Secret Service was measuring and monitoring these two projects on metrics for velocity and development progression, we obtained and analyzed documentation, such as sprint burndown charts and monthly program status reports, and compared it to the selected practice. In addition, the agile metrics for product quality and post-deployment user satisfaction were only applicable to projects that had been deployed to users. As such, these metrics were applicable to the Uniformed Division Resource Management System (which the Secret Service had deployed to users) and were not applicable to Events Management (which the Secret Service had not yet deployed to users, as of early May 2018). We therefore obtained and analyzed documentation demonstrating that Secret Service OCIO measured product defects for the Uniformed Division Resource Management System. We also requested documentation demonstrating that OCIO had measured and monitored post-deployment user satisfaction for this project, including via a survey. OCIO officials stated that they had not conducted such a survey and were unable to provide documentation demonstrating they had measured post- deployment user satisfaction for the Uniformed Division Resource Management System. 
To assess the reliability of the cost, schedule, and agile-related data that were in DHS and the Secret Service’s program management and governance documentation for the IITT investment, we (1) analyzed related documentation and assessed the data against existing agency records to identify consistency in the information, and (2) examined the data for obvious outliers and incomplete or unusual entries. We determined that the data in these documents were sufficiently reliable for our purpose, which was to evaluate the extent to which DHS and the Secret Service had implemented processes for monitoring the IITT investment’s performance and progress. We conducted this performance audit from May 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Description of the U.S. Secret Service’s Information Integration and Technology Transformation Investment’s Programs and Projects As of June 2018, the Secret Service’s Information Integration and Technology Transformation (IITT) investment included two programs (one of which included three projects) and one project that had capabilities that were in planning or development and modernization, as described below: Enabling Capabilities. This program is intended to, among other things, (1) modernize and enhance the Secret Service’s information technology (IT) network infrastructure, including increasing bandwidth and improving the speed and reliability of the Secret Service’s IT system performance; (2) enhance cybersecurity to protect against potential intrusions and viruses; and (3) provide counterintelligence and data mining capabilities to improve officials’ ability to perform the Secret Service’s investigative mission. Enterprise Resource Management System. This program comprises three projects that are intended to provide: a system that will enable the Secret Service’s Uniformed Division to efficiently and effectively plan, provision, and schedule missions (this project is referred to as Uniformed Division Resource Management System), a system that will unify the logistical actions (e.g., assigning personnel) surrounding special events that Secret Service agents need to protect, such as the United Nations General Assembly (this project is referred to as Events Management), and a capability for creating schedules for Secret Service agents and administrative, professional, and technical staff, as well as the ability to generate reports on information such as monthly hours worked (this project is referred to as Enterprise-wide Scheduling). Multi-Level Security. This project is intended to enable authorized Secret Service users to view two levels of classified information on a single workstation. Previously, data at various security levels were contained and used in multiple disparate systems. Multi-Level Security is intended to streamline users’ access to information at different security levels in order to enable them to more quickly and effectively perform their duties. Table 15 provides the planned life cycle cost and schedule estimates (threshold values) for each IITT program and project that had capabilities in planning or development and modernization, as of June 2018.
In addition, the table describes any changes in those cost and schedule estimates, as well as the key reasons for any changes, as identified by officials from the Secret Service’s Office of the Chief Information Officer. Appendix III: Comments from the Department of Homeland Security Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following staff made key contributions to this report: Shannin O’Neill (Assistant Director), Emily Kuhn (Analyst-in-Charge), Quintin Dorsey, Rebecca Eyler, Javier Irizarry, and Paige Teigen.
Why GAO Did This Study Commonly known for protecting the President, the Secret Service also plays a leading role in investigating and preventing financial and electronic crimes. To accomplish its mission, the Secret Service relies heavily on the use of IT infrastructure and systems. In 2009, the component initiated the IITT investment—a portfolio of programs and projects that are intended to, among other things, improve systems availability and security in support of the component's business operations. GAO was asked to review the Secret Service's oversight of its IT portfolio and workforce. This report discusses the extent to which the (1) CIO implemented selected IT oversight responsibilities, (2) Secret Service implemented leading IT workforce planning and management practices, and (3) Secret Service and DHS implemented selected performance monitoring practices for IITT. GAO assessed agency documentation against 14 selected component CIO responsibilities established in DHS policy; 15 selected leading workforce planning and management practices within 5 topic areas; and two selected leading industry project monitoring practices that, among other things, were, in GAO's professional judgment, of most significance to managing IITT. What GAO Found The U.S. Secret Service (Secret Service) Chief Information Officer (CIO) fully implemented 11 of 14 selected information technology (IT) oversight responsibilities, and partially implemented the remaining 3. The CIO partially implemented the responsibilities to establish a process that ensures the Secret Service reviews IT contracts; ensure that the component's IT policies align with the Department of Homeland Security's (DHS) policies; and set incremental targets to monitor program progress. Additional efforts to fully implement these 3 responsibilities will further position the CIO to effectively manage the IT portfolio. Of the 15 selected practices within the 5 workforce planning and management areas, the Secret Service fully implemented 3 practices, partly implemented 8, and did not implement 4 (see table). Within the strategic planning area, the component partly implemented the practice to, among other things, develop IT competency needs. While the Secret Service had defined general core competencies for its workforce, the Office of the CIO (OCIO) did not identify all of the technical competencies needed to support its functions. As a result, the office was limited in its ability to address any IT competency gaps that may exist. Also, while work remains to improve morale across the component, the Secret Service substantially implemented the employee morale practices for its IT staff. Secret Service officials said the gaps in implementing the workforce practices were due to, among other things, their focus on reorganizing the IT workforce within OCIO. Until the Secret Service fully implements these practices for its IT workforce, it may be limited in its ability to ensure the timely and effective acquisition and maintenance of the component's IT infrastructure and services. Of the two selected IT project monitoring practices, DHS and the Secret Service fully implemented the first practice to monitor the performance of the Information Integration and Technology Transformation (IITT) investment. In addition, for the second practice—to monitor projects on incremental development metrics—the Secret Service fully implemented the practice on one of IITT's projects and partially implemented it on another. 
In particular, OCIO did not fully measure post-deployment user satisfaction with the system on one project. OCIO plans to conduct a user satisfaction survey of the system by September 2018, which should inform the office on whether the system is meeting users' needs. What GAO Recommends GAO is making 13 recommendations, including that the Secret Service establish a process that ensures the CIO reviews all IT contracts, as appropriate; and identify the skills needed for its IT workforce. DHS concurred with all recommendations and provided estimated dates for implementing each of them.
Background GME Training Following medical school, GME training provides the clinical training required for a physician to be eligible for licensure and board certification to practice medicine independently in the United States. Physicians pursue GME training within a variety of specialties or subspecialties. Initially, these physicians, known as residents, go through GME training for a specialty—such as internal medicine, family medicine, pediatrics, anesthesiology, radiology, or general surgery. Of the specialties, family medicine, internal medicine, and pediatrics are generally considered primary care specialties. However, a resident who trained in a primary care specialty may not ultimately practice as a primary care physician. Some residents may choose to subspecialize and seek additional GME training. For example, a resident who completed an internal medicine GME training program may decide to subspecialize in cardiology. The percentage of residents who later subspecialize varies based on specialty type. To operate and maintain GME training programs, teaching sites, including hospitals, health centers, medical schools, and other settings, incur medical education costs that can generally be categorized into two groups—direct costs and indirect costs. Direct costs include, for example, residents’ salaries and benefits; compensation for faculty who supervise the residents; and overhead costs. Indirect costs are the portion of higher patient care costs that teaching sites are thought to incur as a result of training residents, such as increased diagnostic testing and procedures performed. (See table 1.) While they may generate costs, residents may also produce financial benefits for a teaching site. Teaching sites may incur lower personnel costs because residents perform services at lower pay than more experienced clinicians or other health care professionals. And, residents may have more flexibility to work long or irregular hours. For example, residents can provide on-call services in lieu of fully trained physicians at a much lower cost to the teaching site. Residents may also increase the efficiency and productivity of faculty with whom they work by, for example, enabling the faculty to increase the number of patient services for which they can bill. Funding of GME Training through Federal Programs and State Medicaid Agencies Within the federal government, funding of GME training is fragmented. Most federal GME funding is provided through five programs—Medicare GME payments, Medicaid GME payments, HRSA’s CHGME and THCGME payment programs, and the VA’s physician GME training programs. For most of the programs, the funding is formula-driven and essentially guaranteed if eligibility requirements are met. Each program uses a different methodology to determine the amount of payments to funding recipients, though there are some similarities between programs. GME training programs generally must be accredited by an independent organization in order to receive federal funding. Medicare GME Payments Medicare—a federally financed program that provides health insurance coverage to people age 65 and older, certain individuals with disabilities, and those with end-stage renal disease—pays for GME training. It does so through two mechanisms—Direct Graduate Medical Education (DGME) payments and Indirect Medical Education (IME) payments—both of which are formula-based payments set by statute. These payments are made to reflect Medicare’s “share” of the costs associated with providing GME training. 
Medicare DGME payments are made to cover a hospital’s direct costs associated with GME training, such as stipends, supervisory physician salaries, and administrative costs. The payments are the product of a hospital’s weighted 3-year average number of FTE residents, subject to a cap; a per resident amount (PRA); and the hospital’s Medicare patient load—the portion of a hospital’s total inpatient bed days that were paid for by Medicare. In part to constrain spending, the Balanced Budget Act of 1997 capped, for most hospitals, the number of FTE residents that hospitals may count for DGME and IME payment at the number of FTE residents in place in 1996. Rather than reimbursing teaching hospitals for the actual direct costs they incur each year from training residents, Medicare calculates DGME payments using a PRA. A hospital’s PRA is based on its direct costs and its number of FTE residents when the PRA was set in a base year, which is fiscal year 1984 for most hospitals, and is adjusted annually for inflation. Congress set a base year for calculating DGME costs to give local providers an incentive to keep down their costs and to encourage local communities to assume a greater role in the costs of medical education. After fiscal year 1984, for hospitals that did not previously have any approved residency programs or did not participate in Medicare but began doing so, a PRA for the hospital is established using direct costs the hospital reported that it incurred on its cost report during its base year, which is generally the first cost reporting year it began training residents. In general, each hospital has two separate PRAs—a primary care PRA and a nonprimary care PRA—whereby teaching hospitals receive slightly higher payments for residents training in primary care specialties. Medicare IME payments, which are made to cover a hospital’s indirect costs associated with GME training, are an add-on to the hospital’s Medicare reimbursement for each discharge. IME payments are not based on teaching hospitals’ actual indirect costs. Rather, the adjustment is based on the number of FTE residents per hospital bed, referred to as the resident-to-bed ratio, and a statistically estimated factor that represents the incremental patient care cost due to providing GME training.
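A simplified worked example of these two payment calculations follows, as a minimal sketch in Python. All dollar figures and counts are illustrative assumptions; actual payments also reflect details omitted here, such as resident weighting and hospital-specific caps. The multiplier and exponent used for the IME adjustment reflect the commonly cited statutory form of the operating IME adjustment and are not drawn from this report.

# Simplified Medicare GME payment illustration (all values hypothetical).
fte_residents = 100            # weighted 3-year average FTE count, under the cap
per_resident_amount = 100_000  # hospital-specific PRA, updated for inflation
medicare_patient_load = 0.30   # share of inpatient bed days paid for by Medicare

# DGME: the product of the three factors described above.
dgme_payment = fte_residents * per_resident_amount * medicare_patient_load

# IME: an add-on to Medicare inpatient payments driven by the
# resident-to-bed ratio; the constants below are assumptions based on
# the commonly cited statutory form, not figures from this report.
medicare_inpatient_payments = 50_000_000
resident_to_bed_ratio = 0.25
ime_adjustment = 1.35 * ((1 + resident_to_bed_ratio) ** 0.405 - 1)
ime_payment = medicare_inpatient_payments * ime_adjustment

print(f"DGME ${dgme_payment:,.0f}; IME ${ime_payment:,.0f}")

The sketch makes the structural point in the text concrete: the DGME amount scales with the hospital’s Medicare patient load, while the IME amount scales with the hospital’s overall Medicare inpatient payments.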
Federal and State Medicaid GME Payments Medicaid is a joint federal-state program that finances health care coverage for low-income and medically needy individuals. While there is no federal requirement for state Medicaid programs to fund GME training, states may elect to recognize GME training costs as a component of the overall costs incurred by hospitals. And, payment for these expenses is shared by the federal government through federal matching funds. GME training costs may be reimbursed as an add-on adjustment to the state’s payment rates to eligible providers or as an enhanced payment made as a lump sum supplemental to the initial payment rate. CHGME Payment Program Because children’s hospitals treat very few Medicare patients and consequently receive few GME payments from Medicare, the CHGME Payment Program was created in 1999 and reauthorized through fiscal year 2018 to support pediatric and pediatric subspecialty GME training in freestanding children’s hospitals. Unlike Medicare GME, which is a mandatory spending program, the CHGME program relies on discretionary spending. And, the total amount of payments available to each hospital varies from year to year depending on the total amount of funding made available from annual appropriations and the total number of hospitals that participate. The CHGME program makes both DGME and IME payments, with one-third of program funds allocated for DGME payments and two-thirds for IME payments. Both payments are calculated using formulas similar to Medicare’s. For example, the program’s DGME payments are based, in part, on the number of FTE residents, subject to a cap, and an updated national standardized PRA. And, the IME payment is based, in part, on an estimated factor that represents the incremental patient care cost due to providing GME training, rather than the hospital’s actual indirect costs. THCGME Program The THCGME program was created under the Patient Protection and Affordable Care Act and reauthorized through fiscal year 2017 to increase the number of primary care residents who trained in community-based, ambulatory patient settings. HRSA awards funds to eligible teaching health centers for the purpose of covering both direct and indirect GME costs of new or expanded community-based primary care residency programs. The Department of Health and Human Services (HHS) established an interim annual payment rate of $150,000 per resident until it establishes formulas for determining the payments. However, the payment rate for THCGME recipients may fluctuate over time, depending on available appropriations, the number of eligible applicants, and the number of FTE residents supported. THCGME awards can supplement GME payments from other federal sources, including Medicare, Medicaid, and CHGME, but recipients generally cannot use funds to pay for the same portion of resident time that they used to count toward funding in these other GME programs. VA GME Program GME training is a statutory requirement for VA; it enhances the nationwide supply of health care professionals and assists VA in recruiting and retaining staff at its medical facilities. Nearly all of VA’s GME training is conducted through academic affiliations with medical schools and teaching hospitals where residents from those institutions do clinical rotations at VA medical facilities. VA provides financial support for GME training at its facilities in two ways—disbursement payments to its academic affiliates and educational support payments for its VA medical facilities. VA reimburses academic affiliates through disbursement agreements to cover the costs of stipends and benefits for the period of time that a resident serves in a VA medical facility. Reimbursement is based on the number of FTE residents completing a VA rotation and the academic affiliate’s approved per diem rate for stipend and benefit costs, which varies by residents’ postgraduate year of training. In addition, VA allocates a portion of VA-wide funding for educational support using a formula that accounts for the number of FTE resident positions and a per resident cost factor. According to VA officials, the funding is used to pay for compensation of faculty and other staff, overhead costs, and other costs necessary to host and manage the GME training at VA medical facilities.
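The payment rules described above for the CHGME, THCGME, and VA programs are largely fixed proportions and rates. The following is a minimal sketch in Python with hypothetical figures; the appropriation, FTE counts, and stipend rate are assumptions, and the VA reimbursement is shown on an annualized basis rather than the per diem basis VA actually uses.

# Illustrative payment arithmetic for the other programs (all values hypothetical).

# CHGME: one-third of program funds allocated for DGME, two-thirds for IME.
chgme_appropriation = 300_000_000
chgme_dgme_pool = chgme_appropriation / 3
chgme_ime_pool = 2 * chgme_appropriation / 3

# THCGME: interim rate of $150,000 per FTE resident per year.
thcgme_payment = 150_000 * 4.0  # e.g., a center training 4.0 FTE residents

# VA disbursement: affiliate reimbursement for resident time at a VA
# facility, annualized here for simplicity.
va_reimbursement = 0.25 * 78_000  # 0.25 FTE at a $78,000 stipend-and-benefit rate

print(chgme_dgme_pool, chgme_ime_pool, thcgme_payment, va_reimbursement)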
Federal Oversight of GME Funding Like its funding, federal oversight of programs that fund GME training is fragmented. Federal agencies are responsible for the management and oversight of their respective GME training program or programs. For Medicare, the Centers for Medicare & Medicaid Services (CMS) uses regional contractors—Medicare Administrative Contractors (MACs)—to process and audit payments for health care items and services submitted by enrolled Medicare providers on their annual cost report, including Medicare DGME and IME payments. For example, MACs audit the number of FTE residents that hospitals report on their annual cost report by reviewing relevant rotation schedules. Hospitals claiming reimbursement for GME training are also required to submit Intern and Resident Information System (IRIS) files that provide data on each resident that the hospital trained, including the resident’s specialty type, postgraduate year, and proportion of time spent on rotation at each training site. CMS is responsible for broad oversight of the Medicaid program, while states are responsible for the daily administration of their individual Medicaid programs, including program integrity activities. In its broad oversight role, CMS develops guidance and provides assistance to the states. However, state Medicaid programs are not required to make GME payments, and CMS has not established requirements or guidance specifically related to Medicaid GME payments. Instead, CMS reviews states’ Medicaid payments to providers, including GME payments, as part of its review of Medicaid state plans. HRSA is responsible for the management and oversight of the CHGME and THCGME programs. Specifically, it is responsible for determining applicants’ program eligibility, making payments, and auditing those payments. HRSA is also responsible for collecting information about, and reporting on, the performance of the CHGME and THCGME programs. Oversight of GME training at VA medical facilities is shared between the VA medical facilities and their academic affiliates. Through affiliation agreements, academic affiliates provide for the central administration of residents’ stipends and benefits. Academic affiliates are also responsible for the overall quality of the GME training program, monitoring all resident educational activities, obtaining and maintaining accreditation, developing educational objectives and curriculum, selecting residents, creating resident rotation schedules, and submitting residents’ schedules of educational activities to VA for reimbursement. VA has the responsibility of overseeing and managing clinical training in VA medical facilities, and must ensure that there are sufficient patient care opportunities, educational infrastructure, and qualified teaching physicians to accommodate trainees from the affiliates. Each VA medical facility must also track the educational activities of all residents, including the amount of time the resident spent training at its facility. Federal Agencies and State Medicaid Agencies Spent Over $16.3 Billion on GME Training in 2015, and the Amount Spent Per FTE Resident Varied Federal agencies and state Medicaid agencies spent over $16.3 billion on GME training in 2015 to support direct and indirect costs of training. The amount spent per FTE resident varied across programs, and the largest variation across payment recipients and regions was within Medicare due to variation in the values of factors used to calculate Medicare payment amounts. About half of participants received payments from more than one program, and the designs of federal programs may reduce the potential for duplicate payments.
Federal Agencies and State Medicaid Agencies Spent Over $16 Billion on GME Training in 2015 to Support Direct and Indirect Costs of Training Federal agencies and state Medicaid agencies spent over $16.3 billion on GME training in 2015 through five federal programs and 45 state Medicaid agencies. Of this, the federal government spent $14.5 billion through Medicare, Medicaid, VA, the CHGME program, and the THCGME program. (See table 2.) Most spending on GME training came from Medicare, accounting for 71 percent of federal spending, with over $10.3 billion in payments to teaching hospitals. Medicaid spending accounted for 16 percent of federal spending on GME training, or $2.4 billion. These federal Medicaid funds matched an additional $1.8 billion that Medicaid agencies in 45 states spent on GME training in 2015. (For information about state Medicaid agency and other non-federal sources of funding for GME training, see appendix I.) These payments supported both direct and indirect costs associated with GME training, though data were limited for some programs. We calculated that about one-third of Medicare payments were made to cover the direct costs of GME training. Similarly, HRSA reported that one-third of CHGME payments were made to cover direct costs. For the VA GME program, we calculated that 44 percent of payments were made to academic affiliates to reimburse them for resident salaries and benefits, a category of direct costs. HRSA does not separate payments for direct costs from those for indirect costs under the THCGME program. And, the data we received from state Medicaid directors did not separate them, though 8 of 45 states specifically reported paying providers for indirect costs in addition to direct costs. Providers in all 50 states and the District of Columbia received payments for training GME residents, but some regions received a notably higher amount compared to others. In particular, federal agencies spent $5.47 billion ($97 per capita) in the Northeast region, which represents 38 percent of total federal spending, compared with the West, where federal agencies spent $1.83 billion ($24 per capita, or 13 percent of total federal spending). (See table 3.) State Medicaid agencies in the Northeast also spent significantly more on GME training than did agencies in other regions. Agencies in the Northeast spent $1 billion ($18 per capita), whereas agencies in the West spent $120 million ($2 per capita). Notably, New York accounted for about half (48 percent) of nationwide state Medicaid agency spending on GME and 86 percent of spending in the Northeast. Overall, GME spending was somewhat more concentrated in the Northeast than was the number of GME residents; in a May 2017 study, we found that 31 percent of GME residents were located in the Northeast. The Northeast was the only region for which the percentage of the GME spending in the region was higher than the percentage of GME residents. Available data show that almost all spending on GME training (99 percent) went to recipients located in urban areas. However, it is likely that more than 1 percent of spending was used to support training in rural areas; limitations in HHS and state Medicaid agency data preclude calculation of the amount of spending on GME training in rural areas. The data we received from HHS listed only the direct recipient of the payments, such as a hospital or a medical school, which can arrange rotations at other teaching sites that may be located in rural areas.
Data limitations also preclude calculation of the overall amount of spending on GME resident training in specific specialties, such as primary care. With data that were available, we found: Of the 10,367 FTE residents that VA funded, 53 percent were training in a primary care specialty. We also estimated that 52 percent of VA’s spending supported primary care training. The THCGME program is intended to train residents in primary care, with 100 percent of the $76.3 million used to support 630 primary care residency positions. HRSA reported that 43 percent of the 11,667 trainees supported by CHGME funds trained in general pediatrics or combined pediatrics programs. HRSA did not report how much it spent on primary care training, or the number of FTE residents training in primary care specialties. Of the 87,980 FTE residents that Medicare funded, 44 percent were denoted as primary care residents. However, Medicare is likely supporting more residency positions than these data indicate, and these residents are unlikely to be training in primary care. The program counts each resident pursuing additional training, such as a resident training in a subspecialty, as half of an FTE when calculating DGME payments. The Amount Paid Per FTE Resident Varied Across Programs, and the Largest Variation Across Recipients and Regions Was within Medicare We found that in 2015, the average amount that a program paid per FTE resident ranged from $34,814 for Medicaid GME payments to $137,491 for the VA GME program. (See table 4.) Programs use different methods to calculate how much to pay providers on a per resident basis; thus, payment amounts are not comparable across programs. For example, Congress appropriated funding for the THCGME program for each of the fiscal years 2011 through 2017, and eligible entities received the same amount per FTE resident. In contrast, Medicare GME payments to eligible entities are determined according to formulas that take many factors into account, including the share of a hospital’s patients that are covered under Medicare. Consequently, the amount that Medicare pays recipients varies widely based on variation in the values of factors used to calculate payments. Nationwide, hospitals received $116,997 on average from Medicare for each FTE resident, and the middle 50 percent of hospitals received between $85,478 and $150,610. Given the wide variation in overall Medicare per FTE resident payment amounts by hospital, we examined variation among regions and states. Regionally, the average total Medicare per FTE resident payment ranged from $127,503 in the Midwest to $87,172 in the West. (See table 5.) Across individual states, the average total Medicare per FTE resident payment amount ranged from $65,672 in California to $170,591 in New Hampshire. (See fig. 1.) Some of this variation is due to significant variation in the values of certain factors used to calculate Medicare DGME payments—specifically, the PRA and Medicare patient load. (See table 6.) The Medicare PRA varies among recipients and across regions, though to a lesser degree than the overall per FTE resident payment. For example, the average PRA for the middle 50 percent of primary care residents ranged from $87,962 to $117,144 per FTE resident, compared to $85,478 to $150,610 for the overall per FTE resident payment. The PRA also varied by region and, as with the overall payment amounts, the average PRA was lowest in the West.
However, in contrast to the nationwide average per FTE resident payment, which was highest in the Midwest, the recipients in the Northeast had the highest average PRA. The Medicare patient load also varies across regions, which affects DGME payments. Medicare DGME payment recipients in the West reported an average Medicare patient load of 24 percent, which is significantly lower than the 34 to 36 percent reported in other regions. A hospital’s Medicare patient load also affects Medicare IME payments per FTE resident. A hospital’s IME payment is calculated by increasing Medicare’s payments for inpatient services to a hospital by an IME adjustment factor. Therefore, a hospital that received more Medicare payments for inpatient services will receive a larger IME payment. About Half of GME Program Participants Received Payments from More than One Program Over half (51 percent) of providers that participated in any of the five GME programs received payments from more than one federal program. For example, 69 percent of providers that participated in Medicare also participated in another program, and 84 percent of CHGME awardees participated in another program. However, in each case, these programs provided most of these recipients’ total funding (74 percent and 66 percent, respectively). In contrast, recipients of Medicaid or VA payments also generally participated in another program, but received only 22 percent and 10 percent of their total funding for GME training through Medicaid and VA, respectively. (See table 7.) Though the large proportion of providers that receive payments from multiple sources creates the potential for providers to receive duplicate payments, this risk of duplication is reduced by the programs’ designs. The CHGME program was established for children’s hospitals because they did not traditionally receive significant Medicare GME payments. The THCGME program provides payments to outpatient facilities, whereas residency training has been, in general, hospital-based. VA only pays for residents’ time spent training at a VA medical facility, and not for time residents spent training in non-VA settings that may receive other federal payments for GME training. Medicare adjusts all DGME payments by the ratio of a hospital’s patients covered under Medicare. CMS has not established requirements or guidance specifically related to Medicaid GME payments, including how the payments are to be calculated. However, 10 states adjust payments by the ratio of a teaching site’s patients covered under Medicaid. GME Training Costs Vary by Residency Program Characteristics, and Teaching Sites Face Challenges in Measuring These Costs GME training costs vary by program characteristics, such as size, type, training setting, and age, and some training costs are more prone to variation than others. Challenges exist in measuring and comparing GME training costs because teaching sites lack standard cost methodologies and some training costs are difficult to measure. Further, little is known about how GME training costs relate to federal GME funding. GME Training Costs Vary by Program Size, Type, Setting, Age, and Location According to literature we reviewed and experts we interviewed, GME training costs vary by residency program characteristics, and some costs, such as faculty teaching time, are more prone to variation than others.
Specifically, variation in training costs can be explained by one or more of the following program characteristics: Program size: Larger residency programs may be more cost-efficient than smaller ones in that fixed costs, such as infrastructure and program administration, can be spread out over a larger number of residents. Therefore, adding another resident increases variable costs, but lowers per resident fixed costs. Type of Specialty: Residency training in some specialties costs more than others, and accreditation requirements are one of several factors driving this variation. For example, compared to internal medicine programs, accreditation standards for family medicine programs require more hours of faculty involvement and higher faculty-to-resident ratios. Therefore, these residency programs may incur higher per resident costs. The complexity of a specialty program also affects its training costs—for example, subspecialty programs, such as vascular surgery or gastroenterology, require additional GME training or specialized equipment and will thus incur more training costs. In addition, costs can be affected by variation in faculty compensation. According to a 2013 analysis of available data on residency training costs, the median compensation for attending physicians in academic health centers ranged from $163,319 for family medicine to $336,136 for radiation oncology. Further, malpractice insurance premium costs can vary based on the degree of surgical involvement, with primary care specialties having the lowest premium costs and general surgery physicians the highest. Type of Training Setting: GME training in outpatient settings, such as community-based clinics, is considered less efficient and more expensive than in inpatient hospital settings, according to reviewed literature and experts we interviewed. One reason for this may be differences in the models of teaching used in each of these settings. According to one group of experts we interviewed, residents in inpatient settings are part of teams that do rounds together, where much of the teaching time involves one clinical teacher and a team of residents, nurses, and other affiliated professionals. This method of teaching may not be feasible in outpatient settings where teaching is more often provided on a more expensive one-to-one basis. Outpatient settings, particularly smaller ones, may also have to incur more fixed costs relative to inpatient settings that may have more facility space and other resources in place to meet accreditation requirements. Location: Geographic location also drives the variation in training costs. For example, resident salaries vary based on general salary patterns across the United States. According to one group of experts we interviewed, there is a range of compensation packages for residents, and base salaries can vary from $35,000 to $55,000 per year. Malpractice insurance costs may also vary by geographic location. Further, rural training sites may incur higher costs because training may have to be spread across multiple sites—such as community hospitals or rural health clinics—in order to meet accreditation requirements for resident rotations and patient case-mix. The added administrative work of coordinating with other sites to provide these resources can be a challenge. Age of the program: Newer residency programs may have higher costs than older, more established programs.
According to some GME experts we interviewed, the first year a teaching site operates a residency program is more expensive because new programs may be smaller and cannot spread out fixed costs. In addition, it can be expensive for a new GME program to meet accreditation requirements, such as required infrastructure and minimum faculty. Studies estimating GME training costs show these costs vary by program characteristics. For example, we identified 10 studies that estimated GME training costs; however, these studies were not comparable because they focused on discrete programs with different characteristics, utilized different methodologies, were conducted at different points in time, and did not examine the same cost elements. Further, these studies are not generalizable due to limitations in study methodology, such as small sample sizes. And, given the age of some of these studies, they may not be reflective of current GME training costs. Across the 10 studies we reviewed, estimates of costs ranged from $35,164 to $226,331 per resident. (See table 8.) The Medicare cost reports that hospitals submit annually to CMS, though they have certain limitations, also suggest variability in residency training costs. For example, according to the cost reports, in 2015, direct costs varied from $56,998 to $333,565 per resident (excluding outliers). (See table 9.) However, these costs are limited to direct GME costs specified in Medicare guidance, and they have other limitations due to their collection and reporting. Challenges Exist in Measuring and Comparing GME Training Costs and Little is Known about their Relationship to Federal GME Funding We found that there is no standard method or tool across teaching sites for identifying and capturing GME training costs. One expert told us that, therefore, the reporting of costs depends on how each teaching site, and the individuals at each site, are tracking and defining those costs. Another group of experts who conducted a study to estimate GME training costs in teaching health centers told us they were unable to identify a common instrument and had to develop their own instrument to standardize costs. According to literature we reviewed and experts we interviewed, Medicare GME guidance for reporting training costs is not always clear, and differences in how teaching sites define costs can lead to inconsistent measurement. One expert told us that Medicare GME payment rules are subject to interpretation, and thus there is variation between teaching sites in how costs are reported on Medicare cost reports. Other GME experts told us that many teaching health center residency programs rely on in-kind benefits, such as building space donated by organizations, but health centers vary in how they account for the costs of these benefits. Some teaching health centers will score them as in-kind contributions, others will provide a square footage cost amount, and others may not track and report these costs at all. While one group of experts suggested there be national guidelines to ensure all teaching sites are using the same rules to define and report costs, one expert cautioned that a common tool would make it impossible to reflect the unique characteristics of each program. Factors specific to teaching sites may affect how they identify their training costs. The varying relationships and financial arrangements between the teaching site, its partners, and its faculty affect how it allocates and reports training costs. 
For example, a teaching site may have various educational partners, such as medical schools and community-based training sites, and be affiliated with multiple hospitals, each of which tracks costs differently. Teaching sites differ in how they share training costs with these partners. In addition, faculty arrangements vary. For example, in some cases faculty are employees of the teaching site and in other cases, faculty bill for their services independently. Moreover, facilities vary in the experience of their personnel responsible for identifying GME training costs. For example, program directors may not have the financial experience needed to identify costs, and some teaching sites may use outside consultants to identify costs. Turnover in the staff responsible for tracking costs, lack of communication between program staff and the accounting departments, or a change in ownership of the teaching site may add to the challenge of accurately identifying costs. According to studies we reviewed and experts we interviewed, some GME training costs are difficult to accurately identify and measure. For example: Faculty Costs: Faculty responsibilities are spread out across education, research, administrative, and patient care activities, and the time spent in each activity is not always clear. The only allowable faculty costs on Medicare cost reports are those for education-related activities, such as the clinical supervision of residents. For example, if a faculty member performs a procedure while doing rounds with residents, the teaching site must determine how much of that time was for patient care and how much was for education. However, making this determination can be challenging for teaching sites. One group of experts told us that while most teaching sites have a formula to calculate these education costs, they are most likely an undercount. However, another expert said that officials preparing the cost reports are not systematically splitting faculty time between education and patient-care activities and are most likely guessing. Facility Costs: MAC officials told us that facility costs that hospitals report on their cost reports should be allocated based on square footage, building depreciation, and utility costs, but there is some variation in how teaching sites calculate their square footage. Further, as previously described, donated building space may not be accurately identified by teaching sites. Experts who conducted a study to estimate teaching health center program costs told us that several centers in their study were not accustomed to thinking of donated space as a residency program expense. Indirect Medical Education Costs: There is not a clear and consistent definition of the indirect medical education costs, and there may be variability in these costs. Furthermore, there is little incentive for teaching sites to accurately identify these costs because Medicare does not require them for purposes of determining IME payments, according to one reviewed study. As a result, it is unclear what indirect costs the Medicare IME payment adjustment is meant to cover. Additionally, experts told us that it is difficult to measure the extent to which costs associated with the unique services that teaching sites provide, such as stand-by services or their role as a safety net provider, are attributable to GME training. 
Resident Benefits for Teaching Site Costs and Productivity: The benefits that residents provide can generate cost savings and revenue for the teaching site, yet the extent of these benefits can be difficult to calculate. According to one study we reviewed, the value that residents provide cannot be measured directly; rather, it is reflected in the teaching site's patient care costs and in the clinical productivity of attending physicians. One expert we interviewed said that identifying when residents move from being a cost to a financial benefit is complicated and depends, for example, on a resident's year of training and residency program requirements. Also, the value of resident services can vary by specialty. For example, residents in general surgery or internal medicine provide more on-call services than residents in dermatology or radiation oncology. Although the cost savings and revenue generated by residents have an effect on the net costs of GME training, they are typically not accounted for when estimating costs.

In addition to these challenges, federal agencies do not systematically collect and standardize cost information at the national level, according to literature we reviewed and experts we interviewed. For example, a HRSA study identified training costs in teaching health centers, but the study only captured costs over one year and did not include all THCGME programs. Further, in addition to inconsistencies in how teaching sites collect data for Medicare cost reports, the data do not include the revenue impact and actual indirect costs associated with training residents and cannot be broken down by specialty program. Nor are the cost reports a comprehensive source of training costs: they are limited to teaching sites that received Medicare GME payments and do not include other teaching sites, such as medical schools, teaching health centers, and teaching hospitals that may have received only other federal funding for GME training, such as VA GME payments. Finally, because Medicare cost report data are not generally used to calculate GME payments, they are not reviewed or audited by contractors except when new teaching sites establish their base year per resident amount (PRA).

Further, teaching sites may not have accurately reported the costs used to calculate Medicare DGME payments. According to experts we interviewed, at the time that most teaching sites established the base year PRAs used to calculate DGME payments, teaching site accounting practices and their varying financial relationships with affiliated education partners may have led them to over-report or under-report their costs. As a result, there is variation in sites' PRAs, which may not reflect actual variation in direct costs. To identify how the PRA compares to reported direct training costs, we compared teaching site PRAs with the direct training costs they reported for 2015 (though, as previously noted, reported costs may not accurately reflect all GME training costs). For teaching sites in the median range, Medicare DGME payments covered 67 percent of reported direct training costs in 2015. However, we found wide variation across teaching sites: the PRA ranged from 31 to 157 percent of teaching sites' reported direct costs (excluding outliers). (See table 10.) In addition to the challenges of identifying and comparing costs, little is known about their relationship to federal GME funding.
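To make the PRA-to-cost comparison above concrete, the following minimal sketch computes the share of reported direct costs covered by a PRA-based payment. The site names and dollar amounts are hypothetical illustrations, not figures drawn from the cost report data we analyzed.

    # Minimal sketch of the PRA-to-reported-cost comparison described above.
    # Site names and dollar amounts are hypothetical illustrations only.
    sites = {
        # site: (DGME payment per resident, reported direct cost per resident)
        "Site A": (60_000, 90_000),
        "Site B": (110_000, 70_000),   # payment exceeds reported costs
        "Site C": (45_000, 145_000),
    }

    for name, (payment, cost) in sites.items():
        coverage = 100 * payment / cost
        print(f"{name}: DGME payment covers {coverage:.0f} percent of reported direct costs")

Ratios above 100 percent, like the 157 percent upper bound we observed, indicate sites whose PRA-based payments exceeded the direct costs they reported.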
Some studies have analyzed federal GME funding relative to GME training costs, but they do not consistently indicate whether federal payments accurately reflect training costs. For example, both the Medicare Payment Advisory Commission and HHS found that the Medicare IME payment adjustment exceeds the actual indirect costs that teaching sites incur from operating GME programs, and these studies recommended modifying the IME payment adjustment. However, another study found that indirect medical education costs and other costs, such as stand-by services, add to patient care costs in teaching hospitals, and concluded that a reduction in the Medicare IME payment adjustment could result in insufficient Medicare payments to cover these costs. Other studies found that federal funding is lower than actual program costs. For example, one study estimated the per resident training cost in teaching health centers in fiscal year 2017 to be $157,602, compared to the $95,000 per resident being provided in federal funding. Another study found that its average cost estimate of $183,138 per resident for internal medicine programs of 120 residents exceeded Medicare DGME payments in 2012 by approximately $160,000 per resident, and it noted that other sources of funding, including Medicare IME payments, subsidized training costs. The relationship between training costs and federal GME funding is further complicated by the nature of how most GME payments are made. For example, Medicare GME payments, the largest source of federal GME funding, are not based on actual costs, and there are no reporting requirements for how teaching sites use the payments. Specifically, teaching sites distribute these payments depending on their needs and the needs of their affiliates, making it difficult to understand the relationship between GME funding and training costs.

Information the Federal Government Collects to Manage Programs Is Not Sufficient to Comprehensively Understand Its Investment in GME Training

Agencies generally collect information to manage their respective programs, ensure the accuracy of payments, and reduce the potential for duplicative payments within or across federal programs that fund GME training. However, HHS does not have sufficient information available to comprehensively evaluate the federal programs that fund GME training, identify gaps between federal GME programs' results and physician workforce needs, and make changes, or recommend them to Congress, to improve the efficient and effective use of federal funds.

Each Federal Agency Generally Collects Information Needed to Manage Its Respective Program and Ensure Payment Accuracy

Federal agencies generally collect the information needed to manage their respective programs and ensure the accuracy of payments. To manage their programs, agencies use information, such as the total number of FTE residents and training costs, to calculate payments. For example, VA medical facilities use information that academic affiliates report about the costs of their resident salaries and benefits to set the payment rates used to reimburse the affiliates. And information about individual residents is used to verify that recipients accurately reported, according to resident counting rules, the number of FTE residents used to calculate payments.
For example, MACs use data from the Intern and Resident Information System (IRIS) on the number of years residents have completed in all types of GME training programs to verify that residents who have completed their initial residency period were counted as only half (50 percent) of an FTE when determining the DGME payment amount. (For a summary of the information that agencies collect for each of the five programs we reviewed, see appendix II. See table 11 for a summary of how agencies use the collected information.)

In contrast to the other programs, states establish and administer Medicaid GME payment policies, and CMS generally collects limited information about states' Medicaid GME payments. CMS does not use this information except to determine the amount of federal matching funds for each state. While state Medicaid agencies report to CMS the aggregate amount of GME supplemental payments they make, there are no federal requirements that states or teaching institutions report information about supplemental payments at the provider level, the aggregate or provider-level amount of add-on adjustments to the state's payment rates for GME training, or how these payments support GME training. Rather, CMS officials said that states have the option to collect information about Medicaid GME payments. However, of the 45 state Medicaid agencies that reported on our survey that they paid for GME training, fewer than half (20 states) indicated that they require funding recipients to report any information related to Medicaid GME payments, such as the number or type of residents supported.

While the risk of duplication is reduced by each program's design, federal agencies also use the information collected to identify duplicative payments within and between most of the federal programs, with the exception of Medicaid. For example, IRIS data are used to identify whether more than one hospital claimed the same resident's time for purposes of Medicare GME payments. Also, according to HRSA officials, contractors assess the FTE resident counts reported by recipients of CHGME or THCGME program funding to identify duplication with FTE residents reported for Medicare GME payments. For example, HRSA officials told us that the agency's combined FTE assessment for academic years 2012-2013, 2013-2014, and 2014-2015 of the 59 teaching health centers in the THCGME program reviewed over 1,000 FTE residents and identified 6 centers, from 3 unique organizations, with a combined total of 6.63 FTE residents that duplicated Medicare FTE resident claims. In addition, HRSA has worked with CMS to maintain data for this assessment. For example, at HRSA's request, CMS added a field to the cost reports to check whether any residents from a teaching health center rotated to the hospital and, if so, the number that did. However, these agencies do not have procedures in place to identify potentially duplicative payments between their programs and Medicaid GME payments, which totaled $2.3 billion in federal Medicaid spending in 2015. There is no federal requirement that CMS identify potentially duplicative payments between Medicaid GME payments and other federal GME programs. And without better data collected about Medicaid GME payments, there is limited information available to identify potentially duplicative payments between, for example, HRSA's GME programs and Medicaid GME payments.
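The two verification steps described above, weighting residents who have completed their initial residency period at 50 percent and flagging residents whose time is claimed by more than one hospital, can be illustrated with a short sketch. The record layout and data below are hypothetical and simplify the actual IRIS file structure; this is not CMS's processing logic.

    # Simplified sketch of the two FTE verification steps described above.
    # Record layout and data are hypothetical, not CMS's actual IRIS logic.
    from collections import defaultdict

    # Each record: (resident id, hospital, fraction of time at this hospital,
    #               years of GME completed, initial residency period in years)
    claims = [
        ("R1", "Hospital A", 1.0, 2, 3),   # within initial residency period
        ("R2", "Hospital A", 1.0, 5, 3),   # beyond initial period: 0.5 weight
        ("R3", "Hospital A", 0.6, 1, 3),
        ("R3", "Hospital B", 0.6, 1, 3),   # same resident claimed twice
    ]

    def weighted_fte(time_fraction, years_completed, initial_period):
        """Count residents beyond their initial residency period as half."""
        weight = 1.0 if years_completed < initial_period else 0.5
        return time_fraction * weight

    # Flag residents whose combined claimed time across hospitals exceeds 1.0 FTE.
    claimed_by_resident = defaultdict(float)
    for resident, hospital, fraction, years, period in claims:
        claimed_by_resident[resident] += fraction

    for resident, total in claimed_by_resident.items():
        if total > 1.0:
            print(f"{resident}: {total:.1f} FTE claimed across hospitals, flag for review")

    dgme_fte = sum(weighted_fte(f, y, p) for _, _, f, y, p in claims)
    print(f"Weighted FTE count for the DGME calculation: {dgme_fte:.2f}")

In the sample data, the third resident is claimed at 0.6 FTE by two hospitals, so the combined 1.2 FTE would be flagged, and the second resident contributes only 0.5 FTE to the weighted count.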
HRSA and VA, which combined provided 13 percent of total federal GME funding in 2015, use the information they collect for ongoing program performance measurement and program evaluation. HRSA evaluates the performance of its payment programs. To do so, HRSA collects information on program outcomes, such as whether supported residents received training in, or went on to practice in, a medically underserved area, a primary care setting, or a rural area. HRSA uses these performance measures for ongoing evaluations, for internal and congressional reporting, and in its budget justification. In addition, HRSA is authorized to implement a quality bonus system for the CHGME program, which it plans to do by fiscal year 2019. VA issues a survey to VA residents to assess, among other things, a resident's likelihood of considering a future employment opportunity at a VA medical facility. VA medical facilities are required to collect detailed records of residents' participation in assigned educational activities, and they must evaluate each resident according to accrediting body requirements, such as patient care and medical knowledge. VA medical facilities are also required to produce an annual report on each GME training program that includes, among other things, the accreditation status of its GME training programs, its response to the results of the resident satisfaction survey, and opportunities for improvement in residents' education.

CMS, however, does not use the information it collects for Medicare or Medicaid to evaluate the performance of these programs toward meeting physician workforce goals, even though they accounted for 87 percent of federal GME spending in 2015. As noted, Medicaid programs are administered at the state level. For Medicare, CMS officials said that their goal is to ensure hospitals are paid according to the GME statutes and regulations. The agency does not use the information collected to evaluate the performance of Medicare GME payments, such as by evaluating the number of residents supported by specialty or whether residents went on to practice in rural areas, in primary care, or in medically underserved areas. The officials further noted that Medicare is an insurance program and is not among the health care workforce programs that are under the purview of HRSA. Although CMS officials told us that they coordinate with HRSA regarding Medicare GME payments, HRSA does not conduct research to inform GME policy related to CMS's GME payments. Also, in a 2015 report, we found that HHS lacks performance measures of Medicare GME payments that are directly aligned with areas of health care workforce need identified in HRSA workforce projections.

Agencies Do Not Collect Sufficient Information for HHS to Comprehensively Understand the Federal Investment in GME Training

Information that agencies collect is not always complete, especially information about Medicaid GME spending. As previously noted, CMS collects limited information about the amount of Medicaid GME payments and how these payments support GME training, such as the number or type of residents supported. In addition, agencies did not collect or use the following information, with some exceptions, to understand the federal investment in GME training:

Payment Amounts by Recipient Characteristics: With the exception of HRSA's CHGME and THCGME programs, agencies do not collect information on payment amounts to training programs with specific characteristics, such as payment amounts by the type of training programs supported.
This information would be needed, for example, to compare the payment rates of each program to the costs of training residents at the teaching sites supported.

GME Costs and Revenues: With the exception of HRSA's THCGME program, agencies did not collect information about funding recipients' indirect costs or the revenue generated from resident activities. Also, as previously noted, the costs that hospitals are required to report annually on their Medicare cost reports may not be complete or consistent; nor, according to CMS officials we interviewed, is this information audited or used except in limited cases. CMS collects no information about the direct or indirect training costs incurred by recipients of Medicaid GME payments, and only eight state Medicaid agencies reported on our survey that they require recipients to report information about their direct costs.

Output or Outcome Measures: Unlike HRSA and VA, CMS does not collect information for the GME training programs it supports through Medicaid to assess outputs or outcomes related to health care workforce planning. In addition, while CMS uses IRIS to collect information on the number and type of residents supported by Medicare GME payments and the number of years they have completed in all types of GME training programs, it does not use these data to understand the output of such spending or for health care workforce planning. CMS also does not collect information on the outcomes associated with Medicare GME payments, such as whether residents who were supported by Medicare went on to practice in primary care specialties or in rural or medically underserved areas. Further, although HRSA collects data about the outcomes of its CHGME and THCGME programs, this information is self-reported by funding recipients. HRSA officials told us, however, that the agency has taken steps to validate the reported information. For example, it has started to collect national provider identifiers for residents supported by the CHGME and THCGME programs, which are used to validate resident FTE counts and reported outcomes, such as whether residents went on to practice in primary care.

Quality Measures: Agencies generally require that GME training programs be accredited in order to receive funding, and accrediting bodies are responsible for evaluating the educational quality of GME training programs. In addition, HRSA and VA collect some information about the learning experiences of residents in the GME training programs supported, such as whether residents received training in certain topic areas. HHS and its advisory bodies have proposed tying federal funding to the performance of the programs. For example, the President's budget proposals for HHS for fiscal years 2015, 2016, and 2017 proposed that Congress allow HHS to set standards for teaching hospitals that receive Medicare GME payments to emphasize skills that promote high quality and high value in health care. In addition, the National Academy of Medicine has called for improved measures of the performance of GME training programs, and as of October 2017, it had an initiative to identify quality and other measures, such as residents' competency or the outcomes of patient care provided by residents.

Information is also not always consistently collected within programs or standardized across programs.
For example, VA medical facilities report information centrally to VA about their total payments to academic affiliates, but they inconsistently used accounting codes to report the total amount they spent and did not report the amount they paid each academic affiliate, limiting the reliability of the data VA collects on the total amount spent on GME. Additionally, VA medical facilities are required to report annually to VA the approved payment rates that each affiliate charges, but VA was unable to provide payment rate schedules for all affiliates for fiscal year 2015. Across all agencies, information about the number of FTE residents supported was generally collected at, and for, different points in time and through different reporting systems. (See table 12.) For example, HRSA generally collects FTE resident information through applications or supporting documentation before and at the end of a fiscal year, while VA collects such information in monthly or quarterly invoices throughout an academic year. And CMS collects similar FTE resident information through cost reports and IRIS files based on each hospital's own cost reporting period, which can vary by hospital. In addition, the five federal programs do not consistently use the same unique identifiers for their funding recipients, such as a hospital's Medicare provider identification number, or for the individual residents supported, such as their national provider identifier, which limits the ability to link data across programs.

In some cases, data collection may vary across the various GME programs based on program requirements. Additionally, GME funding recipients may be required by law to report certain types of information for some programs, but not for others. For example, THCGME recipients are required to report on the number of residents trained at the health centers who completed their residency and care for vulnerable populations living in underserved areas. Relatedly, CHGME funding recipients are required to report the number of residents trained at the hospital who completed their residency training and care for children within the service area of the hospital or the state in which the hospital is located. No similar requirements apply to Medicare GME recipients.

Because the information that agencies collect is not always complete or consistent, HHS does not have sufficient information available to comprehensively evaluate the federal programs that fund GME training. As a result, HHS cannot identify problems and make changes, or recommend them to Congress, to improve the efficient and effective use of federal funds. Under leading practices we derived from GPRA and GPRAMA and under federal standards for internal control, agencies should identify and collect the complete and reliable information needed to evaluate the performance of federal programs, while balancing the administrative costs of such efforts. In addition, agencies should use that information to monitor the performance of programs in order to identify problems and make changes or recommendations to Congress for improvements. Improvements in performance monitoring can enhance and sustain collaboration and reduce fragmentation within and across the federal agencies that administer programs that fund GME training. However, because of limitations in the information agencies collect, HHS does not have the information needed to comprehensively understand, across all programs that fund GME training, for example, the following:
1. Total amount that the federal government spends on GME training, including total Medicaid GME spending and the total amount VA medical facilities paid to academic affiliates;

2. Amount the federal government paid each recipient for GME training, such as the amount paid to each VA academic affiliate;

3. Distribution of funding—that is, the amount of funding by GME training program characteristics, including program type;

4. Extent to which the net cost of training residents, including the variation in costs along the different factors previously discussed, is accurately represented by the formulas used to calculate payments;

5. Output and outcomes of GME training funded by federal programs—that is, how many and what type of residents the federal government supports, where those residents trained and went on to practice, and whether those residents will help address future health care workforce needs; and

6. Quality of the GME training programs supported by the federal government, such as whether residents participated in certain educational activities, or the practice readiness or competence of residents who completed supported GME training programs.

HHS's advisory bodies and stakeholders have called for improvements in the accountability and transparency of federal programs that fund GME training. For example, the Medicare Payment Advisory Commission recommended greater accountability and transparency for Medicare GME payments by making information about Medicare GME payments and teaching costs available to the public. And the National Academy of Medicine recommended creating a GME Center within the Centers for Medicare & Medicaid Services that would be responsible for, among other things, data collection and detailed reporting to ensure transparency in the distribution and use of Medicare GME payments.

Conclusions

The federal government is an important source of funds for GME training, and through its funding and workforce planning efforts, HHS, as the largest funder of GME training, has an important role in ensuring that federal programs are meeting the nation's workforce needs. For HHS to carry out the comprehensive planning approach that we recommended in 2015, complete and consistent information on GME training is important. However, the information currently collected is insufficient for this purpose. For example, HHS lacks comprehensive information on the total number and specialty type of residents supported by all of the federal programs that fund GME training. HHS may have the opportunity to improve the information that its component agencies collect about how federal funding is used to support GME training and thus to determine whether these programs are meeting the nation's workforce needs. New data collection efforts could potentially increase certain administrative costs for the federal government and providers. However, unless HHS collects more complete and consistent information, it will be limited in its ability to conduct comprehensive, ongoing evaluations of the federal government's $14.5 billion annual investment in GME training. Such evaluations could allow HHS and other federal agencies to make programmatic changes, or make recommendations to Congress if legislative authority is needed, to improve the cost effectiveness of current federal funding. In addition, collecting more complete information could help HHS and other federal agencies better manage fragmentation in the spending, management, and oversight of federal programs that fund GME training.
Recommendations for Executive Action

We are making the following two recommendations to HHS:

The Secretary of HHS should coordinate with federal agencies, including VA, that fund GME training to identify the information needed to evaluate the performance of federal programs that fund GME training, including the extent to which these programs are efficient and cost-effective and are meeting the nation's health care workforce needs. (Recommendation 1)

The Secretary of HHS should coordinate with federal agencies to identify opportunities to improve the quality and consistency of the information collected within and across federal programs, and implement these improvements. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this product to HHS and VA for comment. In its comments, reproduced in appendix III, HHS concurred with our two recommendations to identify and improve the information collected to evaluate the performance of federal GME programs. HHS noted that the President's fiscal year 2019 budget for HHS, released on February 12, 2018, proposed consolidating federal spending from Medicare, Medicaid, and the CHGME Payment Program into a single grant program for teaching hospitals. The proposed program would be jointly operated by CMS and HRSA and would grant HHS authority to modify GME payment amounts based on criteria that include addressing health care workforce shortages. HHS stated that the program would allow the department to set priorities, reward performance, and align reporting metrics across its GME efforts. HHS indicated that, if the Congress adopts this proposal, it could work toward addressing both recommendations. It is important to note, however, that the recommendations in this report stand on their own and are separate from any efforts to modify how federal GME funds are distributed. Whether or not legislation is enacted to implement a consolidated federal GME grant program, HHS still needs to take actions to improve the information that agencies collect about how federal funding is used to support GME training. Such actions are important for HHS to assess the cost effectiveness of federal efforts to help meet the nation's physician workforce needs. HHS also provided technical comments, which we incorporated as appropriate.

In its comments, reproduced in appendix IV, VA said that it has significant relationships with other federal funders of GME, including HRSA. In addition, VA said it looks forward to further dialogue with other agencies to better share GME information. VA did not provide technical comments.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 20 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix V.
Appendix I: State Medicaid Agency and Other State and Private Sources of Graduate Medical Education Funding

In addition to federal funding, state governments—including state Medicaid agencies—and private sources also support graduate medical education (GME) training. However, little is known about these other sources. Therefore, we analyzed Medicare cost report data to determine the extent to which teaching hospitals were operating above their full-time-equivalent (FTE) resident caps in 2015—an indication of the extent to which hospitals may receive other sources of GME funding, such as state or private sources. We also surveyed state Medicaid directors from all 50 states and the District of Columbia to collect information on how, and the extent to which, states paid for GME training through Medicaid payments, as well as the states' related reporting requirements and oversight activities. As part of our interviews with experts from research and industry organizations, we asked about state and private sources of funding for GME training and what is known about the amount of such funding.

Teaching hospitals likely use state and private sources of funding, as well as other federal funding, to pay for residents beyond those paid for by Medicare—the largest federal funder of GME training. Hospitals have continued to add residents over time even though, for most hospitals, Medicare capped funding based on their number of FTE residents in 1996. In 2015, about half of the teaching hospitals that receive Medicare GME payments had expanded their GME training programs above their Medicare FTE cap, and the extent to which they operated above their cap varied by hospital. We found that 47 percent of teaching hospitals were operating their GME training programs above their Medicare FTE cap on direct GME (DGME) payments. These hospitals had an average of 30.8 additional FTE residents above their DGME cap, ranging from 1.0 to 284.3 additional FTE residents.

Most states (45) paid for GME training through their Medicaid programs in 2015; however, states varied in the payment models they used to make Medicaid payments for GME training, though most used fee-for-service payments, including supplemental payments. Of the 45 state Medicaid agencies that paid for GME training, 25 states did so through fee-for-service payments only; 19 states did so through both fee-for-service and managed care payments; and 1 state (New Jersey) made managed care payments only. Of the 44 states that paid for GME through Medicaid fee-for-service payments, 21 states paid as an add-on to the fee-for-service rate, and 31 states paid through lump sum supplemental or other payments. Of the 20 states that made Medicaid managed care payments for GME, 12 paid teaching sites directly and 10 made GME payments through managed care plans. Of the 19 states that paid for GME through both Medicaid fee-for-service and managed care, fee-for-service GME payments made up 48 percent of all Medicaid GME payments, on average, while managed care payments made up 52 percent. (See table 13.)

While some states followed the Medicare formula for calculating GME payments, most have deviated from this method. Of the 43 states that responded about how they calculated the amount of GME payments, 10 states reported that they followed the Medicare GME payment formula to calculate Medicaid fee-for-service payments for GME training. In addition, two states followed Medicare's formula for making managed care payments for GME training.
Most states (32) followed another method. Medicaid GME payments per FTE resident varied by state and within states, even after adjusting for geographic differences in labor costs. Specifically, the average combined federal and state payment per FTE resident ranged from $2,108 in Rhode Island to $100,587 in Arizona. (See table 14.) The payment per FTE resident also varied within states. The Medicaid payment per FTE varied the most within Ohio, where the state reported payments ranging from $1,415 per FTE to $453,098 per FTE.

About half of the states (22 of 45) reported that they specified the types of expenses their Medicaid GME payments were intended to cover. In these 22 states, payments were intended to cover the costs of residents' salaries and benefits (14 states), faculty salaries and benefits (11 states), program administration costs (10 states), or indirect medical education costs (8 states).

Some state Medicaid agencies have tied their payments to incentives to expand the physician workforce. Of the 45 states that reported Medicaid GME payments in 2015, 4 states—Alabama, Montana, New Mexico, and South Dakota—reported that they restrict payments to the training of primary care physicians only. (See table 15.) An additional 9 states required that the funding recipient have a primary care residency program. In addition, according to experts we interviewed, states have been considering how to target Medicaid GME payments to meet state workforce needs. For example, one expert said some states have used Medicaid payments to expand GME training of physicians in outpatient, ambulatory care settings. However, Medicaid GME payments generally go to hospitals. Specifically, 44 of the states reported making payments to hospitals, and 7 states paid other teaching sites, such as teaching health centers. The one state that did not make payments to teaching hospitals directed all Medicaid payments for GME training to medical schools. Further, one expert we interviewed told us that it is difficult for states to change their GME financing models to direct funding to specific workforce goals because hospitals rely on state GME payments to support certain residency positions. Instead, states have taken a moderate approach, such as providing additional funding targeted to specific training, rather than a complete funding overhaul that would redistribute existing funds.

Despite the significant investment in GME training by state Medicaid agencies, which is matched by the federal government, the extent of state oversight of Medicaid GME spending varied by state. As previously mentioned, fewer than half of the states (20 of 45) required teaching sites that received Medicaid GME payments to report information to the state. (See table 16.) Among these 20 states, 16 required recipients to report information on the number of residents or FTE residents, 8 states required information about direct medical education costs, 6 states required information about the GME training program specialties supported, and 4 states required recipients to report information about residents' characteristics, such as their post-graduate year. Of the 10 states that made Medicaid GME payments to managed care plans, 4 states—Kansas, Kentucky, Michigan, and Minnesota—set the methodology or base rate that managed care plans were required to use to calculate GME payments. None of the 10 states reviewed and approved the plans' GME payments.
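Returning to the per-FTE figures above (table 14), the following rough sketch shows how such a comparison can be computed. The dollar amounts, FTE counts, and wage indexes are hypothetical, and the wage-index division is a stand-in for a geographic labor cost adjustment, since the specific adjustment method is not described here.

    # Rough sketch of the payment-per-FTE comparison described above.
    # Dollar amounts, FTE counts, and wage indexes are hypothetical, and the
    # wage-index division stands in for the actual geographic adjustment.
    state_data = {
        # state: (combined federal and state Medicaid GME payments,
        #         FTE residents, wage index)
        "State A": (55_000_000, 550, 1.10),
        "State B": (4_200_000, 2_000, 0.95),
    }

    for state, (payments, fte, wage_index) in state_data.items():
        per_fte = payments / fte
        adjusted = per_fte / wage_index  # normalize for local labor costs
        print(f"{state}: ${per_fte:,.0f} per FTE resident (${adjusted:,.0f} adjusted)")

Even after this kind of normalization, the spread we observed, from roughly $2,100 to over $100,000 per FTE, reflects real differences in how states finance GME rather than labor cost differences alone.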
Forty-four of the 45 states were able to provide at least some information on the total amount the state spent on Medicaid GME payments, but the amount of information they were able to provide varied. While most states (38) were able to provide data on all GME payments by recipient, 4 states could provide data on some but not all payments, and 2 states could not provide data on the amount of GME payments by recipient. And fewer than half of the states (18 of 45) were unable to provide data on either the number of FTE residents or resident counts at teaching entities that received Medicaid GME payments. (See table 17.)

Experts we interviewed identified other sources of state and private funding for GME training.

Hospitals and health systems: Hospitals may rely on their own funding to support their residency programs. One expert we interviewed said that hospitals that sponsor GME residency programs provide funding for certain specialty residency programs that make money for the hospital.

State government grant or other funding: Aside from GME funding through Medicaid, one expert told us that some states make direct grants to residency programs, mostly primary care residency programs, or fund GME training through state appropriations specifically for that purpose. For example, Florida created an $80 million fund to support state training in outpatient or community-based programs. And one expert told us that some states have developed innovative funding mechanisms. This was the case in Georgia, which established a hospital coalition that funded 400 new residency slots to meet the needs of medically underserved populations.

Private health insurers: Experts said GME funding from private health insurers is generally thought to be provided through higher reimbursement rates to teaching hospitals than to nonteaching hospitals, including through Medicare reimbursement. While private insurers fund GME training through their contracts with individual hospitals, one expert told us that those contracts likely do not differentiate the amount of funding that goes toward GME training versus other activities. However, one expert raised concerns that private insurers are not paying their share of GME costs. Another expert noted that there have been some state-level efforts to require all payers, including private insurers, to take on some responsibility for paying for the education of the health care workforce, even beyond physician GME training.

Other: Experts also identified other possible sources of private funding. For example, one expert told us that, while the amount of funding from pharmaceutical or medical device companies has not been identified in existing studies, anecdotally there is growing use of these funding sources. Experts also said that some funding is provided by philanthropic organizations or by medical schools that are affiliated with residency programs.

Appendix II: Information that Federal Programs Collect about Funding for Graduate Medical Education Training

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: Comments from the Department of Veterans Affairs

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, William Hadley, Assistant Director; Christine Brudevold, Assistant Director; Katherine Mack, Analyst-in-Charge; A. Elizabeth Dobrenz; Maggie G. Holihan; Daniel Lee; and Todd Anderson made key contributions to this report.
Also contributing were Sam Amrhein, Muriel Brown, Lisa Opdycke, and Jennifer Whitworth.
Why GAO Did This Study

An adequate, well-trained physician workforce is essential for providing access to quality health care. While a number of factors affect the supply and distribution of physicians, GME is a significant determinant. A significant portion of GME training funds comes from federal programs and states. This report (1) describes the amount and distribution of federal government and state Medicaid agency spending on GME; (2) describes what is known about GME costs; and (3) examines the extent to which the federal government collects information to understand its investment in GME.

GAO reviewed reports and agency websites and interviewed agency officials to identify federal programs that fund the clinical training of residents and were authorized through 2017. GAO analyzed 2015 data—the most recent data available at the time of GAO's analysis—including data from a state survey. All 50 states and the District of Columbia responded to the survey. GAO reviewed literature, interviewed experts from seven organizations knowledgeable about GME costs, and analyzed Medicare data. GAO also reviewed documentation from HHS and the Department of Veterans Affairs (VA) and interviewed agency officials.

What GAO Found

Federal agencies and state Medicaid agencies spent over $16.3 billion in 2015 to fund graduate medical education (GME) training for physicians—commonly known as residency training. The federal government spent $14.5 billion through five programs, and 45 state Medicaid agencies spent $1.8 billion. About half of the teaching sites that received funding—such as teaching hospitals—received funds from more than one of the five programs.

GME training costs vary due to the characteristics of teaching sites, such as the number of residents trained and their specialties, which can make it difficult to compare training costs across sites. Further, challenges exist in measuring training costs because some costs, such as faculty teaching time, are difficult to identify. Also, there is no standard method for identifying and capturing training costs, and each teaching site may vary in how it does so.

While federal agencies generally collect the information needed to manage their individual programs, this information is not sufficient to comprehensively understand whether the federal investment in GME training meets national physician workforce needs. The information agencies collect is not always complete or consistent within or across programs. For example, national data on GME training costs are not systematically collected, and some agencies lacked data to understand the total amount spent or the outcomes of their programs, such as where supported residents went on to practice. GAO recommended in 2015 that the Department of Health and Human Services (HHS) develop a comprehensive planning approach to identify and address areas of health care workforce need. HHS concurred and identified steps it could take. While HHS has yet to take these steps, the information currently available is also insufficient for such planning. Comprehensive information is needed to identify gaps between federal GME programs and national physician workforce needs—particularly the distribution of physicians geographically or across specialties—and to make, or recommend to Congress, changes to improve the efficient and effective use of federal funds to meet those needs.
What GAO Recommends

GAO recommends that HHS coordinate with federal agencies, including VA, to (1) identify the information needed to evaluate federal GME programs, and (2) identify opportunities to improve the quality and consistency of information, and implement these improvements. HHS concurred with both recommendations.
Background

In accordance with the Improper Payments Information Act of 2002 (IPIA), as amended, and Office of Management and Budget (OMB) guidance, the Centers for Medicare & Medicaid Services (CMS) developed the Payment Error Rate Measurement (PERM) to estimate the national Medicaid improper payment rate. CMS has other mechanisms to review and assess program integrity risks in state Medicaid managed care programs, and it uses information from the PERM to target its program integrity activities and oversight of states' Medicaid programs.

IPIA and OMB Guidance for Estimating Improper Payments

IPIA requires federal executive branch agencies to, among other things, (1) identify programs and activities that may be susceptible to significant improper payments; and (2), on an annual basis, estimate the amount of improper payments for susceptible programs and activities. Agency heads must produce a statistically valid estimate, or an estimate that is otherwise appropriate, using an OMB-approved alternate methodology. Agencies with programs identified by OMB as high priority for additional oversight and review are required to submit annual reports to their Inspectors General detailing the actions the agency plans to take to recover improper payments and prevent future improper payments. The Inspector General of each agency submitting such a report is required to review the quality of the improper payment estimates and methodology, among other things. OMB designated Medicaid as a high priority program. In addition, the Improper Payments Elimination and Recovery Act of 2010 requires the Inspector General of each agency to conduct a compliance review to report on the agency's compliance with several criteria, one of which is that the agency has reported an improper payment rate of less than 10 percent for each program and activity. IPIA also directed OMB to issue guidance for agencies in implementing the IPIA improper payments requirements. Among other things, the OMB guidance requires that agencies review payments made at the point that federal funds are transferred to nonfederal entities and report on the root causes of identified improper payments.

Payment Error Rate Measurement

To calculate the Medicaid improper payment rate through the PERM, CMS computes an annual rolling average of improper payment rates across all states based on a 17-state, 3-year rotation cycle. In accordance with IPIA, as amended, OMB approved CMS's PERM methodology, and the Department of Health and Human Services Office of Inspector General (HHS-OIG) conducts annual compliance reviews. Beginning with its annual improper payment compliance review for fiscal year 2014, the HHS-OIG established a rotating approach to reviewing the estimation methodologies for high-priority programs, including Medicaid, that OMB deemed susceptible to improper payments. Due to the number and complexity of the programs, the HHS-OIG methodology reviews are scheduled to be performed over a 4-year period; the PERM estimation methodology will be reviewed as part of its fiscal year 2017 compliance review.

Each of the three components of the Medicaid PERM—fee-for-service (FFS), managed care, and eligibility—is estimated differently:

The FFS component of the PERM measures errors in a sample of FFS claims, which are records of services provided and the amount the Medicaid program paid for these services. For the majority of sampled FFS claims, the PERM review contractor performs a medical review, which includes a review of the medical documentation to identify errors, that is, payments that do not meet federal and state policies, such as those for medically unnecessary services, diagnosis coding errors, and policy violations.
Any FFS claims paid for services that should have been covered under a managed care plan's capitated payment are also considered errors.

The managed care component of the PERM measures errors that occur in the capitated payments that state Medicaid agencies make to managed care organizations (MCOs) on behalf of enrollees. Capitated payments are periodic payments, approved by CMS, that state Medicaid agencies make to contracted MCOs to cover the provision of medical services to enrollees, as well as the MCOs' administrative expenses and their profits or earnings. The PERM assesses whether any payments made to the MCOs were in amounts different from those the state agency is contractually required to pay, which are approved by CMS. In contrast to the FFS component, the managed care component of the PERM includes neither a medical review of services delivered to enrollees nor reviews of MCO records or data.

The eligibility component of the PERM measures errors in state determinations of whether enrollees meet the categorical and financial criteria for receipt of benefits under the Medicaid program. The eligibility component assesses determinations for both FFS and managed care enrollees. This component has not been calculated since 2014; instead, CMS piloted different approaches to update the methodologies used to assess enrollee eligibility, as the Patient Protection and Affordable Care Act changed income eligibility requirements for nonelderly, nonpregnant individuals who qualify for Medicaid. Beginning in the 2019 reporting year, eligibility reviews under the PERM will resume and will be conducted by a federal contractor.

Medicaid Program Integrity and Oversight in Managed Care

Medicaid program integrity consists of efforts to ensure that federal and state expenditures are used to deliver quality, necessary care to eligible enrollees, and efforts to prevent fraud, waste, and abuse. We have found in prior work that CMS's and states' program integrity efforts focused primarily on payments and services delivered under FFS and did not closely examine program integrity in Medicaid managed care. For Medicaid managed care, CMS has largely delegated program integrity oversight of MCOs to the states. States, in turn, generally oversee MCOs and the providers under contract to MCOs through their contracts with the MCOs and through reporting requirements.

Some program integrity risks for managed care are similar to those in FFS, including payments made for nonenrolled, ineligible, or deceased individuals; payments to ineligible, excluded, or deceased providers; and payments to providers for improper or false claims, such as payments for services that are not medically necessary. Other program integrity risks are more unique to managed care. For example, capitated payments generally reflect the average cost of providing covered services to enrollees, rather than the cost of a specific service. Federal law requires capitation rates to be actuarially sound, meaning that, among other things, they must be reasonably calculated for the populations expected to be covered and for the services expected to be furnished under the contract. In order to receive federal funds for its managed care program, a state is required to submit the rates it pays MCOs, and the methodology it uses to set those rates, to CMS for review and approval. Additionally, federal and state oversight of Medicaid managed care can include ensuring that MCOs fulfill contractual provisions within their managed care contracts.
In some cases, these provisions relate directly to program integrity activities, including plans and procedures for identifying, recovering, and reporting on overpayments made to providers.

Payment Error Rate Measurement for Managed Care Has Limitations, Which Are Not Mitigated by Current CMS and State Oversight

The managed care component of the PERM measures the accuracy of the capitated payments state Medicaid agencies make to MCOs. Specifically, a CMS contractor examines whether the state agency made capitated payments only for eligible enrollees, made capitated payments for the correct amount based on the contract and coverage requirements (time period and geographic location), made capitated payments based on the correct rate for enrollees, and did not make any duplicate payments for enrollees.

CMS's Payment Error Rate Measurement Measures the Accuracy of Medicaid Managed Care Payments, but Does Not Account for Overpayments and Unallowable Costs

CMS officials noted that the agency established capitated payments as the level of review because the capitation rate is the transaction used to determine the federal match in managed care. In general, the federal government matches most state expenditures for Medicaid services on the basis of a statutory formula. In FFS, the federal match is provided for the amount the state pays a health care provider for delivering services to enrollees. With managed care, the federal match is provided for the amount of the capitation rate the state pays the MCO. Capitated payments do not directly relate to the provision of a specific service, but reflect the average cost of providing covered services to enrollees. As a result, CMS officials maintain that the capitated payment is the lowest transaction level at which the agency can clearly identify federal funds without making significant assumptions.

Because the managed care component of the PERM review is limited to measuring capitated payments, it does not account for other program integrity risks—such as overpayments to providers and unallowable MCO costs. In addition to the errors in capitated payments included in PERM reviews, CMS regulations state that overpayments in managed care include any payment made to an MCO, or to a provider under contract to an MCO, to which the MCO or provider is not entitled under Medicaid. Such overpayments include payments for services that were not provided or medically necessary, or payments to ineligible, excluded, or deceased providers, none of which are measured by the PERM. Unallowable MCO costs refer to operating costs that MCOs cannot claim under their managed care contracts, such as certain marketing costs, or costs that the MCO reported incorrectly. Among the 27 audits and investigations of Medicaid managed care programs we reviewed, 10 identified about $68 million in MCO overpayments to providers and unallowable MCO costs that were not accounted for in PERM estimates. In addition, one investigation of an MCO operating in nine states resulted in a $137.5 million settlement to resolve allegations of false claims. (See app. I for a complete list of the audits and investigations we identified.) However, the full extent of these overpayments and unallowable costs is unknown, because these audits and investigations were conducted over more than 5 years and involved a small fraction of the more than 270 MCOs operating nationwide as of September 2017.
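The contractor checks described above operate only on the state-to-MCO transaction. The sketch below is a simplified, hypothetical rendering of that review logic, not the PERM contractor's actual tool, using an invented record layout:

    # Simplified sketch of the capitated-payment checks described above.
    # The record layout and data are hypothetical, not the contractor's tool.
    contract_rates = {("Plan A", "adult"): 450.00}   # CMS-approved rate cells
    enrolled = {"E1"}                                # enrollees eligible in the period

    payments = [
        {"id": 1, "enrollee": "E1", "plan": "Plan A", "cell": "adult", "amount": 450.00},
        {"id": 2, "enrollee": "E2", "plan": "Plan A", "cell": "adult", "amount": 450.00},  # not enrolled
        {"id": 3, "enrollee": "E1", "plan": "Plan A", "cell": "adult", "amount": 450.00},  # duplicate of 1
    ]

    seen = set()
    for p in payments:
        findings = []
        if p["enrollee"] not in enrolled:
            findings.append("payment for nonenrolled or ineligible individual")
        rate = contract_rates.get((p["plan"], p["cell"]))
        if rate is None or abs(p["amount"] - rate) > 0.01:
            findings.append("amount differs from the CMS-approved contract rate")
        key = (p["enrollee"], p["plan"], p["cell"])
        if key in seen:
            findings.append("duplicate capitated payment")
        seen.add(key)
        print(p["id"], findings or "no error at the capitation level")

Note what this logic never touches: the MCO's underlying payments to providers. That gap is the subject of the findings that follow.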
Of the 27 audits and investigations, 24 represented reviews in 10 states and, in many cases, focused on individual providers or MCOs; there were about 90 MCOs operating in those 10 states as of September 2017, according to the Kaiser Family Foundation. Examples of the audits and investigations that identified overpayments and unallowable costs include the following:

The Washington State Auditor's Office found that two MCOs made $17.5 million in overpayments to providers in 2010, which may have increased the state's 2013 capitation rates.

The New York State Comptroller found that two MCOs paid over $6.6 million to excluded and deceased providers from 2011 through 2014.

The Massachusetts State Auditor found that one MCO paid $420,000 for health care services and unauthorized prescriptions from excluded providers in 2013 and 2014.

The Department of Justice alleged that an MCO operating in several states submitted inflated expenditure information to the state Medicaid agencies, falsified encounter data, and manipulated claims costs and service provision costs in nine states. The MCO agreed to pay over $137.5 million to resolve these claims.

The Texas State Auditor's Office found that an MCO reported $3.8 million in unallowable costs for advertising, company events, gifts, and stock options, along with $34 million in other questionable costs in 2015.

The New York State Comptroller also found that an MCO claimed over $260,000 in unallowable administrative expenses, which contributed to an increase in capitation rates across the state.

To the extent that states do not identify or know of MCO overpayments to providers or unallowable MCO costs, those overpayments and unallowable costs could inflate future capitation rates, as the Washington State Auditor and New York State Comptroller noted in their findings. The PERM assesses the accuracy of the capitated payments that states make to MCOs. States set capitation rates based on cost data—historical utilization and spending—that MCOs submit to the state Medicaid agencies, but the PERM does not consider these data. Unless removed from these cost data, unidentified overpayments and unallowable costs would likely inflate the MCO cost data that states use to set capitation rates. (See fig. 1.) As a result, future capitation rates would also be inflated, resulting in higher state and federal spending.

In fiscal year 2017, the Medicaid managed care improper payment rate was 0.3 percent, while the FFS improper payment rate was 12.9 percent, which may lead to an assumption that program integrity risks in managed care are less significant than those in FFS. However, the managed care component of the PERM does not determine whether MCO payments to providers were for services that were medically necessary, actually provided, accurately billed, and delivered by eligible providers, or whether the MCO costs were allowable and appropriate. As a result, the PERM improper payment estimate potentially understates the extent of program integrity risks in Medicaid managed care. Moreover, this potential understatement in the PERM's improper payment rate estimate may curtail investigations into the appropriateness of MCO spending. We previously reported that CMS and state program integrity efforts did not closely examine program integrity in Medicaid managed care, focusing primarily on payments and services delivered under FFS.
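A stylized numeric sketch of the rate-inflation dynamic in figure 1 follows; the base cost, overpayment share, and margin loading are invented for illustration and do not reflect any actual state's rate-setting.

    # Stylized illustration of the rate-inflation dynamic shown in figure 1.
    # All figures (base cost, overpayments, margin) are hypothetical.
    true_cost_per_enrollee = 400.00      # cost of medically necessary services
    unidentified_overpayments = 20.00    # provider overpayments never removed
    margin = 1.05                        # loading for MCO administration and profit

    reported_cost = true_cost_per_enrollee + unidentified_overpayments
    inflated_rate = reported_cost * margin
    clean_rate = true_cost_per_enrollee * margin

    print(f"Rate built on unadjusted cost data: ${inflated_rate:.2f}")
    print(f"Rate with overpayments removed:     ${clean_rate:.2f}")
    print(f"Extra spending per enrollee:        ${inflated_rate - clean_rate:.2f}")

Because the PERM stops at the capitated payment, an inflated rate that matches the approved contract would still be scored as paid correctly.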
Our current review of the 27 audits we identified encompassed a 5-year period, suggesting that reviews of managed care continue to be limited. An official from a state auditor's office we spoke with suggested that some states may not audit services delivered under managed care because of the low improper payment rate. In addition, he noted that his state Medicaid agency used the relatively low payment error rate in managed care as an indicator of few program integrity problems.

CMS and State Oversight of Managed Care Do Not Ensure the Identification and Reporting of Overpayments and Unallowable Costs

As noted, CMS has increased its focus on and worked with states to improve oversight of Medicaid managed care; however, these efforts and the oversight efforts of states do not ensure the identification and reporting of overpayments and unallowable costs. In recent years, the agency has sought to strengthen oversight of managed care programs through updated regulations; reviews of states' managed care programs (Focused Program Integrity Reviews) and collaborative audits, which are conducted jointly by federal program integrity contractors and states; and state monitoring of overpayments.

Regulations. In May 2016, CMS updated its regulations for managed care programs in order to strengthen oversight. The updated regulations require a number of additional program integrity activities, such as those listed below. If fully implemented, these updated regulations may help with the identification and removal of overpayments and unallowable costs from the data used to set future capitation rates. Under these regulations:

States must arrange for an independent audit of the accuracy, truthfulness, and completeness of the encounter and financial data submitted by MCOs, at least once every 3 years.

Through their contracts with MCOs, states must require MCOs to have a mechanism through which providers report and return overpayments to the MCOs. States must also require MCOs to promptly report any identified or recovered overpayments, specifying those that are potentially fraudulent, and to submit an annual report on recovered overpayments to their state. States must use this information when setting actuarially sound capitation rates.

Through their contracts with MCOs, states must also require MCOs to report specific data, information, and documentation. In addition, the MCO's chief executive officer or authorized representative must certify the accuracy and completeness of the reported data, information, and documentation.

States must enroll MCO providers that are not otherwise enrolled with the state to provide services to enrollees in Medicaid FFS, and revalidate the enrollment at least once every 5 years. Initially, this requirement was to start for MCO contracts beginning on July 1, 2018; subsequently enacted legislation codified the requirement in statute and moved implementation to January 1, 2018.

It is too early to know whether these regulations will ensure better oversight of MCO payments to providers and of the data used to set future capitation rates. The above program integrity requirements went into effect only recently, for contracts starting on or after July 1, 2017, and January 1, 2018. In addition, CMS issued a notice in June 2017 stating that the agency will use its enforcement discretion to assist states that are unable to implement new requirements by the required compliance date.
Also, CMS has delayed issuance of implementing guidance for certain provisions until the agency completes its review, a step that may further delay states' implementation. The agency has designated Medicaid managed care for "deregulatory action" and plans to propose a new rule, but has not indicated which of these provisions, if any, would be revised.

Focused Program Integrity Reviews. In fiscal year 2014, CMS implemented its Focused Program Integrity Reviews in order to target high-risk program integrity areas in each state, including managed care. As we previously reported, these focused reviews are narrower in scope than the prior reviews conducted by CMS, but they still involve on-site visits to states. In its focused reviews of managed care, CMS found that several states had incomplete oversight of MCO payments to providers, even though the agency relies on states to verify reported MCO overpayments and to ensure the overpayments are excluded from the data used to set capitation rates. In the 27 focused reviews of managed care from 2014 to 2017, CMS found that MCOs in 17 states reported fewer overpayments to their state Medicaid agencies than CMS would expect. For example, MCOs in at least 5 states reported that overpayments were less than 0.1 percent of their total managed care expenditures, while CMS noted in 1 focused review that overpayments typically equal 1 to 10 percent of total expenditures in managed care. CMS also found that 5 of the 27 states did not verify that MCOs excluded overpayments from these data, and 1 state did not exclude overpayments from the capitation rate setting. This is consistent with our March 2017 report, in which we noted that CMS commonly found that MCOs reported low amounts of recovered overpayments and conducted few reviews to identify overpayments. Also, officials from three of the five states we interviewed for that report said the focused reviews gave them leverage in dealing with MCOs or led MCOs to focus more on program integrity. We also reported that CMS officials recommended states take steps to improve their oversight of MCOs, based on the focused review findings. The findings from CMS's focused reviews of managed care also highlight the need for greater federal oversight of states. Without these reviews, it is unclear whether states would independently identify shortcomings in MCOs' reporting of overpayments or work to strengthen MCO reporting. Yet CMS has not yet published the focused reviews of managed care in 13 states, and it may only conduct a focused review in a state once every three or more years. Given CMS's timeline for the focused reviews, it may take years to determine if corrective actions result in improved program integrity in services delivered through managed care.

Collaborative audits. CMS has expanded the federal-state collaborative audits beyond FFS and has begun to engage states to participate in collaborative audits of MCOs and providers under contract to MCOs. As part of the collaborative audit process, the state volunteers to jointly develop the audit processes the federal contractors follow. CMS officials told us that federal contractors have completed 14 collaborative audits of providers under contract to MCOs in three states: Arizona, the District of Columbia, and Tennessee. Only the audit of Trusted Healthcare, an MCO in the District of Columbia, has been published. That audit identified $129,000 in overpayments in a sample of MCO payments to providers, which, if generalized to all of the MCO's payments over 6 months, would equate to over $4 million in overpayments. According to CMS, three additional states—Louisiana, Nebraska, and New Hampshire—have shown interest in collaborative audits of their MCOs, although such audits require states to prepare data files for the federal contractor and commit staff time. In our March 2017 report, we found that states' participation in FFS collaborative audits varied and some states reported barriers to their participation. Expanding collaborative audits in managed care will require commitment from and coordination with states.
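The generalization in the audit finding above can be reproduced with simple ratio arithmetic. In the sketch below, the sample and universe totals are invented for illustration (only the $129,000 figure comes from the published audit), and real audit extrapolations typically use statistical estimators with confidence intervals rather than a single point estimate.

```python
# Hypothetical illustration of generalizing a sampled audit finding to a
# payment universe. The sample and universe totals are invented; only the
# $129,000 figure comes from the published audit.

sample_payments = 3_000_000      # dollars of MCO payments examined in the sample
sample_overpayments = 129_000    # overpayments identified in that sample
universe_payments = 100_000_000  # assumed MCO payments over the 6-month period

overpayment_rate = sample_overpayments / sample_payments
estimated_overpayments = overpayment_rate * universe_payments

print(f"Sample overpayment rate: {overpayment_rate:.1%}")
print(f"Estimated overpayments in the universe: ${estimated_overpayments:,.0f}")
# With these assumed totals, the estimate is about $4.3 million, the same
# order of magnitude as the "over $4 million" the audit reported.
```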
State monitoring of overpayments in managed care. States are required to report overpayments they have identified and recouped, along with state expenditures, on a quarterly basis. However, based on responses from program integrity officials in 13 of the 16 states we contacted, most officials were unable to define the magnitude of overpayments in their managed care programs, which may signify a need for greater federal oversight or coordination. Specifically, officials in 7 of the 13 states could not or did not identify the share of total reported Medicaid overpayments that occurred in managed care. In 11 of the 13 states, officials responded that they did not directly monitor MCO payments to providers. Of those 11 states, officials in 4 said they depend on MCOs to report overpayments and exclude the overpayments from the data used to set capitation rates. As long as states are not taking action to identify overpayments in managed care, they cannot be assured that they are accurately paying MCOs for medically necessary services provided to enrollees.

Federal internal control standards call for agency management to identify, analyze, and respond to risks. CMS has taken some steps to identify, analyze, and respond to risks through its regulations, Focused Program Integrity Reviews, and collaborative audits. However, key CMS and state oversight efforts fall short of mitigating the limitations of the PERM estimates of improper payments for managed care, because they do not ensure the identification and reporting of overpayments to providers and unallowable MCO costs. Without addressing these key risks, CMS and states cannot ensure the integrity of Medicaid managed care programs.

Conclusions

The 0.3 percent improper payment rate for Medicaid managed care, as measured by the PERM, is significantly lower than the improper payment rate of 12.9 percent for Medicaid FFS. However, this difference does not signal better oversight; rather, it represents differences in the review criteria between FFS and managed care, which result in a less complete accounting for the program integrity risks in managed care. The PERM does not account for key program integrity risks in Medicaid managed care, specifically unidentified overpayments and unallowable costs. One federal investigation of an MCO operating in nine states resulted in a settlement of $137.5 million to resolve allegations of false claims that were not captured in the national Medicaid improper payment rate estimate. Further, CMS found that MCOs and states do not provide sufficient oversight in Medicaid managed care to address the risks that are not accounted for in the PERM, findings that are reinforced by our reports on Medicaid managed care program integrity.
CMS has taken steps to improve its oversight of Medicaid managed care, yet these efforts fall short of ensuring that the agency and states will be able to identify and address overpayments to providers and unallowable MCO costs. Without better measurement of program risks, particularly as expenditures for Medicaid managed care continue to grow, CMS cannot be certain that the low improper payment rate for managed care, as measured by the PERM, accurately reflects lower risks in managed care.

Recommendation

The Administrator of CMS should consider and take steps to mitigate the program risks that are not measured in the PERM, such as overpayments and unallowable costs; such an effort could include actions such as revising the PERM methodology or focusing additional audit resources on managed care. (Recommendation 1)

Agency Comments

We provided a draft of this report to the Department of Health and Human Services (HHS) for comment. In its written comments, HHS concurred with our recommendation and indicated that it will review regulatory authority and audit resources to determine the best way to account for Medicaid program risks that are not accounted for in the PERM. However, HHS stated that the PERM is not intended to measure all Medicaid program integrity risks, and utilizing the PERM measurement in that way would be a misunderstanding and misuse of the reported rate. HHS also commented that a review of payments from MCOs to providers is outside the scope of IPIA. In addition, HHS asserted that including such a review would diminish the value of PERM reporting, because it would require significant assumptions about the amount of federal share in MCO payments to providers. Further, HHS maintained that such a review also would result in a measurement that was not comparable to other programs or agencies, which would diminish the value of government-wide improper payment rate reporting. We acknowledge that the current PERM methodology has been approved by OMB. However, we maintain that the PERM likely underestimates program integrity risks in Medicaid managed care. To ensure the appropriate targeting of program integrity activities, CMS needs better information about these risks. Given the size of the Medicaid program, its vulnerability to improper payments, and the growth in managed care, it is critical to have a full accounting of program integrity risks in managed care in order to best ensure the integrity of the whole Medicaid program.

In its written comments, HHS also summarized several activities it uses to oversee and support states' Medicaid program integrity efforts, including state program integrity reviews; collaborative audits conducted by federal contractors; Medicaid Integrity Institute training for state employees; and the Medicaid Provider Enrollment Compendium. HHS also provided technical comments, which we incorporated as appropriate. HHS's comments are reprinted in appendix II.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at yocomc@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Federal and State Audits and Investigations of Medicaid Managed Care

We reviewed 16 federal and state audits and 11 notices of investigations of Medicaid managed care organizations (MCO) and providers issued from January 2012 to September 2017. As the findings below show, the audits and investigations represent a limited number of reviews that, in many cases, focused on individual states and individual providers or MCOs within those states. Given the limited scope and number of states reviewed, the amount of the overpayments and unallowable costs occurring nationwide is unknown. These audits and investigations show cases of MCO overpayments to providers or unallowable costs, which are not accounted for by the Centers for Medicare & Medicaid Services' Payment Error Rate Measurement (PERM); errors in capitated payments (e.g., capitated payments made for deceased individuals), which are accounted for in the PERM; and gaps in managed care oversight. When reporting overpayments and unallowable costs identified in the audits and investigations, we only include amounts specifically attributed to MCOs in our total. This total does not include the following:

• overpayments and unallowable costs identified in those audits and investigations that did not distinguish between the amounts attributable to MCOs, Medicaid fee-for-service, or Medicare;
• overpayments and unallowable costs identified in criminal proceedings that are not yet resolved; and
• errors in capitated payments, as those payments would be reviewed by the PERM.

As a result, the total amount of overpayments and unallowable costs and capitated payment errors in this appendix exceeds what we report.

Appendix II: Comments from the Department of Health and Human Services

Appendix III: GAO Contacts and Staff Acknowledgments

In addition to the contact named above, Leslie V. Gordon (Assistant Director), Pauline Adams (Analyst-in-Charge), Erika Huber, and Drew Long made key contributions to this report. Also contributing were Muriel Brown and Jennifer Whitworth.
Why GAO Did This Study

The improper payment rate is a sentinel measure of program integrity risks for the Medicaid program. CMS and the states oversee Medicaid, whose size, structure, and diversity make it vulnerable to improper payments. CMS estimates the Medicaid improper payment rate annually through its PERM, which includes an estimate for Medicaid managed care, in which states contract with MCOs to provide services to Medicaid enrollees. GAO was asked to study the PERM methodology for managed care. In this report, GAO examined the extent to which the PERM accounts for program integrity risks in Medicaid managed care, including CMS's and states' oversight. GAO identified program integrity risks reported in 27 federal and state audits and investigations issued between January 2012 and September 2017; reviewed federal regulations and guidance on the PERM and CMS's Focused Program Integrity Reviews; and contacted program integrity officials in the 16 states with a majority of 2016 Medicaid spending for managed care, as well as CMS officials and program integrity experts.

What GAO Found

The Centers for Medicare & Medicaid Services' (CMS) estimate of improper payments for Medicaid managed care has limitations that are not mitigated by the agency's and states' current oversight efforts. One component of the Payment Error Rate Measurement (PERM) measures the accuracy of capitated payments, which are periodic payments that state Medicaid agencies make to managed care organizations (MCO) to provide services to enrollees and to cover other allowable costs, such as administrative expenses. However, the managed care component of the PERM includes neither a medical review of services delivered to enrollees nor reviews of MCO records or data. Further, GAO's review of the 27 federal and state audits and investigations identified key program risks. Ten of the 27 federal and state audits and investigations identified about $68 million in overpayments and unallowable MCO costs that were not accounted for by PERM estimates; another of these investigations resulted in a $137.5 million settlement. These audits and investigations were conducted over more than 5 years and involved a small fraction of the more than 270 MCOs operating nationwide as of September 2017. To the extent that overpayments and unallowable costs are unidentified and not removed from the cost data used to set capitation rates, they may allow inflated MCO payments and minimize the appearance of program risks in Medicaid managed care.

CMS and states have taken steps to improve oversight of Medicaid managed care through updated regulations, focused reviews of states' managed care programs, and federal program integrity contractors' audits of managed care services. However, some of these efforts went into effect only recently, and others are unlikely to address the risks in managed care across all states. Furthermore, these efforts do not ensure the identification and reporting of overpayments to providers and unallowable costs by MCOs. Federal internal control standards call for agency management to identify and respond to risks. Without addressing key risks, such as the extent of overpayments and unallowable costs, CMS cannot be certain that its estimated improper payment rate for managed care (0.3 percent, compared with 12.9 percent in Medicaid fee-for-service) accurately reflects program risks.
What GAO Recommends

The Administrator of CMS should consider and take steps to mitigate the program risks that are not measured in the PERM, such as overpayments and unallowable costs; such an effort could include actions such as revising the PERM methodology or focusing additional audit resources on managed care. HHS concurred with this recommendation. HHS also provided technical comments, which were incorporated as appropriate.
Background

The growth of the "sharing economy" has begun to impact public transportation. DOT describes the sharing economy as a developing phenomenon based on sharing, renting, and borrowing goods and services, rather than owning them. One facet of the sharing economy is shared mobility, meaning the shared use of a motor vehicle, bicycle, or other transportation mode that is often facilitated by requests from users, largely through mobile applications. See figure 1 for examples of shared mobility services available "on demand" through mobile applications. The increased use of ridesourcing services has been particularly noticeable in recent years. Since Uber first initiated ridesourcing services in the U.S. in 2010, such services have become increasingly popular, especially in urban areas. While data on the use of ridesourcing are limited, researchers reported in 2017 that about 21 percent of adults in major U.S. cities had used ridesourcing services, and about a quarter of them used these services on a frequent (weekly or daily) basis. Ridesourcing services offer convenience benefits for riders (see fig. 2, which explains how such services work). Millions of Americans—especially those unable to provide their own transportation due to age, disability, or income constraints—rely on public transit to fully participate in society and access vital services. The types of services typically provided by local transit agencies include:

• rail services, in which vehicles operate along railways;
• fixed-route bus services, which operate according to regular schedules along prescribed routes with designated stops;
• paratransit services, which, generally speaking, are accessible, origin-to-destination transportation services that operate in response to calls or requests from riders; and
• other demand-response services, which are sometimes called dial-a-ride.

Local transit agencies have historically contracted out some services, in part to decrease their operating costs. For example, a survey we conducted in 2013 showed that a majority (61 percent) of the 463 responding local transit agencies contracted out one or more services. Services most frequently contracted out included paratransit services for individuals with disabilities and demand-response services. In particular, taxi companies have often been used to fulfill paratransit services and other demand-response services. Within DOT, FTA is responsible for providing grants that support the development of safe, comprehensive, and coordinated public transportation systems, among other things. Specifically, FTA:

• Annually distributes about $12 billion to support and expand transit systems, according to DOT. Two of these funding sources are Urbanized Area Formula Grants and Formula Grants for Rural Areas. These grant funds go to local transit agencies, but these local transit agencies may in some cases use these funds to procure the services of third parties such as private mobility companies.
• Ensures that local transit agencies receiving certain federal financial assistance do not discriminate based on race, color, religion, national origin, sex, disability, or age. Furthermore, FTA ensures local transit agencies comply with DOT regulations implementing certain portions of Title VI of the Civil Rights Act of 1964, as amended (Title VI), and the Americans with Disabilities Act of 1990, as amended (ADA).
• Administers the National Transit Database (NTD), which is intended to provide information to the federal government and others on which to base public transportation service planning. All recipients and direct beneficiaries of grants from the urbanized area formula program and rural area formula program are required by statute to submit data to the NTD, such as financial and operating data.

FTA's Office of Research, Demonstration, and Innovation recently launched a new MOD program to further its goals of improving the integration of transportation systems and increasing the accessibility and efficiency of public transit services for riders. According to officials, FTA believes that the U.S. public transportation system will be heavily influenced by the "Mobility on Demand" concept in the future, so it has incorporated this concept into its planned research efforts. The MOD program is the agency's main effort to help local transit agencies explore emerging shared mobility technologies, in part by partnering with private mobility companies. The MOD program involves several components, including funding projects through the competitive MOD Sandbox grant program. In May 2016, FTA published a notice of funding opportunity and solicitation of project proposals for the MOD Sandbox grant program. In October 2016, FTA announced the selection of 11 projects to receive about $8 million. According to FTA officials, the agency designed the MOD program with several goals in mind, including:

• Funding those proposed grant projects with the most promise for generating benefits for the respective communities.
• Helping agencies better understand how such partnerships work in practice to promote emerging on-demand mobility options.
• Identifying any federal requirements that could impact the ability to provide on-demand mobility services offered through partnerships.
• Evaluating the extent to which the MOD Sandbox projects achieve their intended outcomes by developing and applying relevant performance metrics.

Partnerships Seek Various Service Efficiencies, but Their Full Impacts Are Unknown

Most Selected Projects' On-Demand Services Aimed to Increase Transit Ridership

As shown in figure 3 below, local transit agencies nationwide are pursuing partnerships to offer a variety of on-demand services that aim to make access to public transportation more efficient and convenient. The private mobility companies involved in selected partnerships include some well-known companies such as Uber and Lyft, and some lesser-known types of companies such as a bike-share company and technology companies focused on transportation. Selected local transit agencies most frequently partnered with ridesourcing companies (11 projects), while 8 partnership projects included more than one type of private partner. Five of the 22 partnership projects were in FTA's MOD Sandbox program. Most selected projects (14 of 22) involved on-demand first- and last-mile transportation connections, through which the respective local transit agencies aim to increase ridership on their transit systems (see table 1). Addressing the first- and last-mile issue has been identified as an ongoing challenge for many local transit agencies seeking to increase their transit ridership. Research suggests that the easier it is to access a transit system, the more likely people are to use it.
Connecting on-demand services for the "first- and last-mile"—which refers to the distances riders need to travel to or from a public transit station or stop to arrive at their final destination—could improve transit access by effectively extending service beyond the respective fixed-route buses and commuter trains (see fig. 4). To attract riders to use such first- and last-mile services, eight projects provided a discount to pay for a portion of the fare for the ridesourcing ride to access public transit. Figure 5 below shows a selected local agency's advertisement for such a voucher program. The second most common type of service provided through selected projects was on-demand paratransit service, which could help the respective transit agencies offer eligible riders more convenient options and also help address the high cost of providing such services. More than half of the selected projects (13 of 22) provided on-demand services targeted toward paratransit-eligible riders, either as the primary project goal or to ensure equivalent service for eligible customers. Officials from two local transit agencies told us that by providing more convenient ADA paratransit services—as compared to traditional services that require booking a day or more in advance—these projects in turn produce other benefits. For example, officials from one local transit agency with such a partnership thought their program could benefit the broader community because the targeted riders could make more spur-of-the-moment decisions to participate in activities such as shopping, work, and church. Also, as we have previously reported, paratransit services are much more costly to provide than fixed-route trips. Some local transit agencies also aimed to improve their public trip planning and ticketing systems to increase convenience for riders in their communities. Specifically, five selected local agency projects involved early experiments with the Mobility as a Service (MaaS) concept, meaning offering riders a central electronic platform—such as an app—to plan end-to-end trips, including booking, ticketing, and paying for any transportation needed to make the trip, both public and private. If fully implemented, MaaS would allow riders to, for example, use one app to view and compare real-time availability of various modes (e.g., a traveler might be directed to a train if one is arriving quickly, or to a ridesourcing vehicle if train service has ended for the night). Riders could tailor their trips to meet their needs, and payments could be processed through their phones. Implementing MaaS apps could increase convenience for consumers and may also increase transit ridership. As an example of an early MaaS experiment, the Chicago Transit Authority's (CTA) MOD Sandbox project seeks to integrate the city's bike-share system into CTA's central trip planning and fare payment app, so that riders can more easily pay for a bike-share ride along with their transit trip. Figure 6 shows a sample of a current trip planner and a future MaaS concept. To initiate their on-demand projects, half of the selected local transit agencies relied on local funds and not federal funds. Specifically, officials from half of the local transit agencies (8 of 16) indicated that their projects did not use federal funds, with the projects either partially or entirely funded through a local transit agency or local government subsidy, where the transit agency subsidizes or pays the entire cost of the on-demand service.
The remaining 7 local transit agencies used federal funds for their projects. For example, the 5 FTA MOD Sandbox projects in our selection received federal funds to support 80 percent of project costs, with the remaining 20 percent of project costs supported through local matching funds. One FTA MOD Sandbox project involved two local transit agencies.

Long-Term Sustainability and Effects on Transit Ridership, Costs, and Communities Have Not Yet Been Determined

Most of the selected projects have not yet been evaluated to determine whether they achieved intended outcomes. However, a few transit officials told us that their agencies' costs had decreased since initiating the partnerships. For example, an official from one transit agency reported that the on-demand service provided through their partnership had helped the agency reduce costs for paratransit. Two of the completed partnerships in our review generated insufficient ridership to succeed. These partnerships were widely covered by the press, which transit agency officials believe provides other transit agencies the opportunity to learn from them. Specifically, Kansas City's Bridj project and the Go Centennial project in the city of Centennial, Colorado, failed to attract sufficient riders despite the money and time invested by the local transit agencies and their private partners. The transit officials involved indicated that the projects should have incorporated more marketing of the services being offered and allowed more time for riders to adapt to the new on-demand services, an issue we discuss further later in this report. In addition, according to some selected local transit agencies and literature, the increase in such partnerships may have negative effects on public transit ridership and on local transit agencies more broadly. For example, as riders become comfortable with the new on-demand options, they may elect to use these transportation modes instead of public transit, thus reducing public transit ridership. In addition to the possible loss of ridership revenue, on-demand services could decrease other transit agency revenues, such as parking fees charged at some transit stations. Further, one researcher who regularly reviews emerging mobility topics discussed the concern that, over time, an increase in on-demand services could result in inequitable public transit. Specifically, she noted that if the on-demand services offered continue to increase, riders may begin to perceive fixed-route transit services as inferior to these new services, which could divert riders and revenues away from public transit. Eventually, this could result in two systems: an inferior public transit system and a superior on-demand system for those who can afford it. To provide more information about potential outcomes from such partnerships, DOT officials have commissioned a study to evaluate the outcomes of the MOD Sandbox partnerships and anticipate publishing results in 2019. In collaboration, FTA and DOT's ITS JPO developed an evaluation framework for each of the 11 funded MOD Sandbox projects. As part of this evaluation, the transit agencies plan to collect information, such as ridership and cost data, to demonstrate how each project has influenced transit rider behavior. ITS JPO plans to use the data to measure the extent to which each project has fulfilled its goals and impacted travel behavior. The study will also include crosscutting analyses and lessons learned for all MOD Sandbox projects.
According to DOT officials, FTA is also developing performance metrics to track the projects over time to see the extent to which they promote integrated transportation.

While DOT Has Facilitated Partnerships, Some Requirements and Limited Data Pose Implementation Challenges

FTA Has Facilitated Partnerships through the Mobility on Demand Program

FTA's MOD program is a key effort under way to encourage and better understand transit partnerships. Since first announcing the 11 selected MOD Sandbox projects to receive funding in October 2016, FTA has supported the program through various efforts, and most (10 of 16) selected local transit agencies in our review expressed positive views on the program. Specifically:

• FTA has provided technical support to participants as the MOD Sandbox projects have progressed. According to FTA officials, FTA has contracted with the Shared-Use Mobility Center (SUMC) to provide technical assistance to MOD Sandbox grantees. Officials from all six MOD grantees in our selection said that FTA support throughout the grant and planning processes has been helpful. For example, officials from one transit agency indicated that this program shows FTA's dedication to the idea of shared mobility and enables the grantees to try out new models in a "nurturing environment."
• FTA has held quarterly meetings open to all MOD Sandbox participants, including local transit agencies and private mobility companies. According to two private mobility companies in our selection that participated in the MOD Sandbox program, these meetings were a constructive forum where participants could discuss challenges, lessons learned, and other issues.

Most Selected Transit Agencies Wanted Additional Information Describing How Transit Partnerships Have Met Federal Requirements

Since initiating the MOD Sandbox program, FTA has gathered information from grantees about federal requirements that may pose challenges to implementing transit partnerships. For example, FTA's MOD Sandbox notice of funding opportunity encouraged grant applicants to identify any regulatory or policy waivers needed to implement proposed projects. According to FTA officials, they received many such waiver requests from applicants, many of which they could not grant. For example, some of the MOD Sandbox grantees' private partners requested waivers from ADA requirements, which, according to FTA officials, the agency does not have the authority to waive. FTA officials also clarified that they do not intend to immediately change policies or regulations based on the feedback received through the MOD Sandbox program. Instead, they aim to help MOD Sandbox participants meet requirements and to provide technical assistance to local transit agencies outside of the program. They said that, in the longer term, the agency would consider potential policy and regulatory revisions if needed. Most selected transit agencies (11 of 16) and private mobility companies (10 of 13) indicated that some federal requirements—if applicable to a certain partnership—can impact these partnerships and, in some cases, make them more challenging to undertake. Table 2 below shows four categories of requirements cited as having the potential to impact partnerships, along with examples of stakeholder views on their potential impacts. Although some stakeholders identified these requirements as potentially impacting partnerships, they did not agree that the requirements should be waived to facilitate partnerships.
For example, officials from two local transit agencies told us that requirements related to providing accessible and equitable transportation are important to maintain even if they could deter partnerships. However, FTA designed the MOD Sandbox grant application process so that the applicant local transit agencies could choose their private mobility partners using a noncompetitive process, bypassing the procurement requirements that normally require a full and open competition. One MOD grantee told us that their ability to bypass a competitive process was helpful and expedited their project planning efforts. As FTA has gained more knowledge about such partnerships, the agency has sought to clarify how some of these requirements apply to such partnerships. For example, in December 2016, shortly after announcing MOD Sandbox grantees, FTA issued documentation clarifying various aspects of transit partnerships, as well as certain federal requirements:

• FTA issued a "Dear Colleague" letter to local transit agencies that addressed how certain ADA and Title VI requirements apply when a local transit agency enters into a partnership with a ridesourcing company.
• FTA published a dedicated webpage of frequently asked questions (FAQ) about shared-mobility partnerships. This website supplements subject-specific FAQs already available on FTA's website that also may apply to these partnerships; it includes FAQs on Civil Rights and ADA requirements.

In addition, FTA provides clarifying information to local transit agencies upon request, according to FTA and several local transit agency officials. However, officials from most (14 of 16) selected local transit agencies told us that additional information from FTA would be helpful, especially examples of how local transit agencies are structuring their partnerships to ensure they meet federal requirements. As noted above, FTA has issued various documents for local transit agencies about how federal requirements, such as Title VI requirements, apply to emerging partnerships. Nonetheless, officials from some local transit agencies told us that without examples, they were unclear about how such partnerships could ever meet requirements. For instance, one transit official told us he was unaware that FTA has determined that local transit agencies may use ridesourcing companies without requiring that these contractors undergo drug and alcohol testing—the aforementioned "taxicab exception"—if riders are able to select from multiple providers for their on-demand rides. In another example, officials from one agency told us that they had tried to use the NTD to find peer local transit agencies with similar on-demand programs to ask those agencies for advice, but could not find any peers using that method. These officials wanted to know how other local transit agencies were dealing with customers without bank cards in their on-demand services. They told us that having more examples from FTA of how various local transit agencies are structuring their transit partnerships to comply with federal requirements could be especially helpful. Selected local transit agencies with ridesourcing partners described approaches that they believe help ensure compliance with the drug and alcohol testing, ADA, and Title VI requirements, including using a taxi company, a paratransit company, or both.
For example, transit officials managing four of the 11 selected projects involving a ridesourcing company told us they had added a taxi or paratransit company as an option for riders to comply with requirements. According to two taxi representatives we interviewed and research studies, taxi companies already have procedures for fulfilling federally required drug and alcohol tests. Several local transit officials told us that taxi companies usually have call centers and accept cash payments, making it easier to ensure that the services comply with Title VI. In addition, according to taxi representatives and research reports, taxi companies may have experience complying with the ADA, since some of DOT's implementing regulations may already apply to them. Gathering and disseminating more information on partnerships corresponds with best practices for collaboration with external parties identified in prior work by GAO and others. For example, as we have previously reported, if federal agencies can identify and share best practices, this can help the entities that federal agencies oversee—such as local transit agencies in this case—make changes to successfully adapt to changes in the environment. Additionally, a recent industry report argues that local transit agencies seeking to form transit partnerships will strongly benefit from learning directly from peer agencies with relevant experience in this emerging area. As discussed above, FTA has gathered local transit partnership information from its MOD Sandbox projects. However, the majority of local transit agencies that participate in partnerships are not in the MOD Sandbox program, and many of their projects may already be under way or complete. Gathering information from those local transit agencies would provide FTA with more information about how partnerships are meeting federal requirements. It would also likely provide FTA with more examples to disseminate to all local transit agencies interested in pursuing partnerships, to help those agencies structure their partnerships in accordance with federal requirements. Finally, additional information on these partnerships would better position FTA to respond to changes in the transit industry that could impact its own efforts and goals, such as planning for future MOD grants and improving the efficiency of transit services overall.

Selected Transit Agencies Reported Confusion about Whether and How On-Demand Project Data Should Be Entered into the National Transit Database

To track its progress toward achieving its goals, such as increasing the efficiency of public transit services, FTA can use data from the NTD. According to FTA officials, the NTD is its primary source for information and statistics on U.S. transit systems. As we have previously reported, the NTD is intended to provide timely, accurate information to help Congress and FTA apportion funding and assess the continued progress of the nation's public transportation systems. A key goal of the NTD is to gather information from local transit agencies, such as financial and operating data, to inform public transportation service planning. All recipients and direct beneficiaries of grants from the Urbanized Area Formula Program and Rural Area Formula Program—such as local transit agencies—are required to report certain data to the NTD.
For example, in 2016, over 950 urban transit agencies and others reported into the NTD, and FTA encourages transit agencies not receiving urbanized area and rural area grant funds to report voluntarily so that the NTD can be more complete. Additionally, according to FTA officials, FTA uses certain NTD data to apportion certain grant funds to local transit agencies nationwide, including data on passenger miles traveled and vehicle revenue miles. Each year, urbanized area and rural area formula grant recipients and beneficiaries are required to submit an NTD package with many different types of data, including:

• financial information, including operating expenses and funding sources;
• asset inventory data, such as numbers of transit stations and maintenance facilities; and
• services supplied, including the number of passenger trips that year and miles traveled by passengers.

To help local transit agencies with this reporting, FTA issues NTD manuals annually that are updated with new information, as needed. These manuals describe how to report all the various NTD data requested, including how to report services that the transit agency provided based on the transportation mode, divided between rail and non-rail, with non-rail including demand-response services potentially provided by private mobility companies. According to FTA officials, some of the data that local transit agencies would need in order to report on-demand project data into the NTD and to measure project outcomes—such as whether the targeted riders are using the on-demand rides to get to and from transit stations—would be tracked by the private mobility companies involved in the project. For example, to report data about services supplied into the NTD, the local transit agency would need certain data, such as the numbers of trips and riders, distances traveled in miles, time spent traveling, and the days of the week when the services are offered. In the case of on-demand rides offered through transit partnerships, much of that data would be tracked by the private mobility company and potentially shared with the local transit agency for NTD entry. Although FTA has made some information available that could facilitate these transit partnerships—including updated NTD manuals—local transit agencies in our selection reported the following issues:

• confusion regarding whether and how to report on-demand service data into the NTD, and
• difficulties gathering data for NTD reporting from ridesourcing companies.

Confusion Regarding Whether and How to Report Data about On-Demand Services into NTD

According to FTA officials and the most recent NTD manual, transit agencies only report data to the NTD for services provided that meet the statutory definition of public transportation. Under the statute, public transportation means regular, continuing shared-ride surface transportation services that are open to the general public or open to a segment of the general public defined by age, disability, or low income. However, public transportation does not include intercity passenger rail transportation provided by Amtrak, intercity bus service, charter bus service, school bus service, sightseeing service, courtesy shuttle service for patrons of one or more specific establishments, or intra-terminal or intra-facility shuttle services. FTA officials told us that for a transportation service to be considered "shared-ride," the service must have the real possibility of being offered on a shared-ride basis. According to FTA officials, for a transportation service to be "open to the general public," it cannot be limited to a specific group (except those groups specified in the definition), and neither the driver nor the passenger can deny another person on board. For example, a service provided by a ridesourcing company in which a passenger or driver can refuse additional passengers would not be considered "open to the general public," according to FTA officials. Furthermore, FTA officials told us that a time-limited pilot providing transportation service is not considered "regular" and "continuing." Additionally, FTA officials told us that, even if a transportation service meets the statutory definition of public transportation, the local transit agency may not be required to report the associated data if that agency did not directly provide the transit service. For example, according to FTA officials, whether or not a local transit agency would have to report transportation service provided by a private partner would depend on the contract between the local transit agency and the private partner. Also, according to FTA officials, a local transit agency cannot report data about service provided by a private partner if it is a voucher program, because those services are not considered "shared ride" and thus do not meet the statutory definition of public transportation. If the service provided by the private partner meets the statutory definition of public transportation and is considered a service provided by the local transit agency, then, FTA officials told us, most services provided through these partnerships should be reported under the Demand Response or Demand Response-Taxi transportation modes.
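A minimal sketch of the screening criteria described above follows. This is purely illustrative, not an official FTA tool; actual determinations also turn on contract terms and other judgments that a simple checklist cannot capture.

```python
# Hypothetical sketch of the statutory screening criteria described above.
# This is not an official FTA tool; real determinations also depend on the
# contract between the transit agency and its private partner.

EXCLUDED_SERVICES = {
    "intercity rail (Amtrak)", "intercity bus", "charter bus", "school bus",
    "sightseeing", "courtesy shuttle", "intra-terminal shuttle",
}

def may_qualify_as_public_transportation(service):
    """Rough screen against the criteria discussed in this report."""
    if service["type"] in EXCLUDED_SERVICES:
        return False
    if not service["shared_ride_possible"]:    # e.g., voucher rides, or rides
        return False                           # a driver/rider can refuse to share
    if not service["open_to_general_public"]:  # or an age/disability/income segment
        return False
    if not service["regular_and_continuing"]:  # time-limited pilots do not count
        return False
    return True

# Example: an on-demand partnership service in which riders may be pooled,
# offered only as a 6-month pilot.
service = {
    "type": "demand response",
    "shared_ride_possible": True,
    "open_to_general_public": True,
    "regular_and_continuing": False,
}
print(may_qualify_as_public_transportation(service))  # False: not "regular"
```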
Despite available NTD manuals that discuss the statutory definition of public transportation, officials from most (10 of 16) selected local transit agencies expressed confusion about NTD reporting for on-demand projects, such as which types of on-demand rides qualify as "public transportation" for NTD reporting purposes and how qualifying rides should be entered into the NTD. These officials told us that further clarification is needed from FTA on this issue. For example, three projects in our review offered similar on-demand paratransit rides—through ridesourcing or taxi company partners—but officials from the three local transit agencies involved had different views on whether and how these rides should be entered into the NTD. Officials from the first local transit agency told us that they were planning to report these rides into the NTD and had met with the FTA officials responsible for maintaining the NTD to ask how to report them. These FTA officials had told them it should theoretically be possible to enter those rides into the NTD, but they did not clarify how to do so. Officials from the second transit agency told us that if they extend the dates of their current partnership with two ridesourcing companies, they will need more clarification from FTA about the information that should be reported into the NTD, such as passenger miles traveled. These officials also noted that they already use an extensive process for entering paratransit information into the NTD, including tracking vehicle hours and passenger miles of all vehicles used to provide such services. Officials from the third local transit agency told us that they did not intend to report these rides into the NTD.
In these officials' opinion, these rides should not be entered into the NTD because they do not meet the definition of public transportation in the NTD policy manual. Selected transit agencies in our review appear to be interpreting the information about whether and how to enter data from their partner-provided on-demand services, as outlined in NTD manuals, differently, leading to inconsistencies in whether and how these agencies planned to enter data. For example, one local transit agency's project offered shared microtransit services on demand through a technology company partner. In our interview, officials from this local transit agency told us that they intended to report these rides into the NTD but had received unclear and seemingly incorrect advice from regional FTA staff on how to do so. According to these officials, the regional FTA staff had told them to include these microtransit rides with the agency's demand-response paratransit rides, since some of the riders of this on-demand service were also qualified for paratransit. The local transit officials told us that they hesitated to report to the NTD in the way instructed because it seemed inaccurate. In this local transit agency's response to follow-up questions, the local officials said they were no longer planning to report data on their on-demand service into the NTD. We asked FTA officials if this type of service provided by a private partner qualifies as public transportation, and thus should be entered into the NTD by the local transit agency, and FTA officials said it seemed to qualify for entry. In another example, one transit official managing a first- and last-mile voucher project told us that she planned to report these rides into the NTD. However, FTA officials told us services provided through voucher programs generally do not meet the definition of public transportation and therefore do not qualify for NTD entry. Federal internal control standards state that agencies should use quality information to achieve the entity's objectives. To ensure that quality data are used to track progress toward achieving objectives, agencies should obtain relevant data from internal and external sources in a timely manner, according to the standards. Further, the standards state that agencies should use an iterative and ongoing process to identify what information is needed. As changes to an agency's objectives occur, or as external events occur that impact such objectives, the standards indicate that agencies should change information requirements as needed to meet the modified objectives. The above examples of local transit agencies' confusion about NTD reporting requirements raise questions about whether NTD data accurately reflect the status of the U.S. public transportation system, a key goal of the NTD. According to officials, FTA is considering issuing more information clarifying required NTD reporting for on-demand services provided through partnerships. They explained that rather than change any reporting requirements, this new information would clarify how emerging on-demand services fit into current NTD reporting requirements. These officials also told us that local transit officials with questions related to NTD reporting can call an FTA NTD help desk or direct their questions to the designated NTD analyst. Officials said that they would consider issuing a document on frequently asked questions about NTD reporting for these partnerships, but that thus far FTA had received few relevant questions from transit agencies.
Specifically, FTA officials told us that their NTD office had received relevant questions from two local transit agencies (both of which are in our selection), both about what types of ridesourcing services would be reportable to the NTD. According to FTA officials, they responded to these inquiries by explaining that all services entered into the NTD must be shared and meet the statutory definition of "public transportation." While FTA officials told us that only two local transit agencies had contacted them about NTD reporting confusion, this did not include some other agencies in our selection that had contacted their regional FTA offices for clarification. This raises the possibility that more transit agencies nationwide with such partnerships might have confusion about NTD reporting than the FTA headquarters office was aware of. FTA officials also told us that, in the longer term, they are considering developing a separate NTD reporting category, or transportation mode, for shared ridesourcing services that qualify as public transportation. However, FTA officials did not commit to taking action on this issue. Without clarified information from FTA on whether services provided through on-demand projects qualify as public transportation, and how to enter data about these services into the NTD, some local transit agencies will likely remain confused, potentially leading to inaccurate data in the NTD. Also, according to FTA officials, without accurate NTD data, (1) FTA will not be able to effectively track its own progress toward achieving goals, such as improving the efficiency of transit systems, and (2) the apportionment of certain grant funds to local transit agencies could be affected.

Difficulties Gathering Data from Some Ridesourcing Partners

Selected local transit agencies reported difficulties obtaining some data from their ridesourcing partners, such as the total miles traveled with passengers on board, and according to some stakeholders, local transit agencies nationwide have faced similar challenges. Some of these data may be needed for NTD reporting, but they could also be useful to local transit agencies in tracking the outcomes of their on-demand projects. Specifically, officials from six selected local transit agencies that had partnered with ridesourcing companies had experienced issues obtaining data from them, mostly due to these companies' concerns about rider privacy and proprietary data. For example, one local transit official told us that she requested, but did not receive, data needed for NTD reporting from a ridesourcing company, including miles traveled with passengers on board. While representatives from most selected private mobility companies we spoke to (11 of 13) expressed no issues with sharing data, representatives from the two large ridesourcing companies did. Specifically, Uber and Lyft representatives said their companies are uncomfortable sharing riders' personally identifiable information, such as the exact destination and origin addresses of ridesourcing trips, with a public entity without riders' prior consent, because they believed the data would be subject to Freedom of Information Act (FOIA) requests. Representatives of two industry associations and a researcher told us that difficulty gathering data from ridesourcing companies is a broader challenge faced by local transit agencies in such partnerships.
However, representatives of the two ridesourcing companies stated that they are working with local transit agencies and FTA to determine how to provide data to local transit agencies for NTD reporting while still protecting privacy. FTA officials told us they have reached an informal agreement with ridesourcing companies participating in the MOD Sandbox program, including Uber and Lyft, for the collection of one category of data. According to FTA officials, that agreement relates only to certain data needed to assess the ADA equivalent level of service requirement. If the local transit agencies participating in the MOD Sandbox program need additional data for NTD reporting, FTA officials told us it is up to those local transit agencies to obtain it from the ridesourcing companies. In addition, FTA officials told us that local transit agencies partnering with ridesourcing companies outside of the MOD Sandbox program would not benefit from this informal agreement. To help address data collection issues, officials from some (5 of 16) selected local transit agencies suggested that FTA could play a greater role in encouraging ridesourcing companies to provide some minimum level of data needed for NTD reporting. For example, several transit officials suggested that FTA could circulate effective practices for data sharing, such as a template contract between a local transit agency and a private mobility company that includes data sharing obligations. Several transit officials discussed how such additional information from FTA could be helpful for local transit agencies in pursuing or maintaining their partnerships. For instance, officials from one local transit agency argued that FTA information in this area could help the many local transit agencies that are too small to have sufficient market power to get the needed NTD data from ridesourcing companies. The above examples of local transit agencies seeking templates of data sharing agreements suggest that these and other local transit agencies could benefit from more communication from FTA on this issue. If local transit agencies could use such data sharing templates from FTA to gather more complete and accurate data from their ridesourcing partners, this would in turn help ensure the accuracy and completeness of NTD data. As noted above, the internal control standards instruct federal agencies to use quality data. If FTA communicated more information about practices for data sharing, this would assist local transit agencies and also help FTA be better positioned to track its overall progress in furthering its goals, including promoting efficient public transit systems. However, FTA officials told us that they do not track information about partnerships that did not receive funding through the MOD Sandbox program, such as details of data sharing agreements, and so could not disseminate examples of how those local transit agency partnership participants are handling data sharing issues. Yet local transit agencies with partnerships outside of the MOD Sandbox program may still be required to report data into the NTD and could benefit from additional information. FTA officials explained that they want to avoid duplicating the work of other groups that are gathering and sharing information about partnerships. For example, SUMC gathers some information about such partnerships nationwide in a public database and has sponsored conferences to facilitate information sharing about local transit agencies' experiences with their partnerships.
However, SUMC’s public database of partnerships does not include details about how all partnerships are handling data sharing issues. Further, because FTA oversees local transit agencies, the documents that it issues may be viewed as more authoritative than those of a contracted organization such as SUMC. As FTA continues its efforts to address data sharing with the ridesourcing companies involved in the MOD Sandbox program, the agency could also develop broader information on best practices for data sharing agreements—in collaboration with the MOD Sandbox grantees and possibly also with SUMC—and share that information so it would be available to interested local transit agencies. By sharing such gathered information on partnerships, FTA could in turn help transit agencies make sound decisions regarding the data needed from their private mobility partners, and about various options for structuring partnerships to achieve that end. Considerations Impacting the Future Prevalence of Transit Partnerships Include Industry Changes, Available Funding for Local Transit, and Access to Services, Among Others Roles of Local Transit Agencies and Private Mobility Companies Could Change as the Broader Transportation Industry Evolves The transportation industry as a whole is rapidly evolving, with more on-demand services being offered, which could increase the use of transit partnerships. According to SUMC, the U.S. is currently experiencing a seismic shift in transportation, as breakthroughs in mobile technology, an influx of new mobility options, and changes in travel behavior have significantly altered today’s transportation landscape, a trend likely to accelerate in the years ahead. Most selected local transit agencies (15 of 16) and private mobility companies (12 of 13) agreed that the industry is changing, and some discussed how transit agencies’ roles and operations are changing as a result. For example, officials at five local transit agencies told us that the transit industry is shifting to offer more mobility on-demand services. Some of these stakeholders predicted that as local transit agencies make greater use of contracted services, these agencies will increasingly become “mobility managers” rather than direct service providers. Officials from three agencies said they are already making or planning for this shift. The increasing automation of vehicles is another key industry change that could impact local transit agency operations and partnerships, but the time frames needed for full automation remain unclear. As we have reported, automated vehicles promise transformative benefits such as reducing crashes and fatalities and increasing mobility, but such vehicles also pose challenges for policymakers, such as assuring safety and addressing data privacy and other issues. We also reported that these technologies are rapidly evolving, but there is no consensus about the time needed for their full deployment. According to a recent study, vehicle automation could result in significant changes to transit agencies’ operations. For example, FTA has reported that automated transit vehicles could be used to address first- and last-mile issues, which could in turn decrease the need for local transit agencies to partner with private mobility companies to fill such gaps. 
According to several stakeholders and research reports, some automakers and others have started investing in automated vehicle technologies and in private mobility companies in response to the projected rollout of shared automated vehicles in the near future. If these entities continue making such investments, this could help address challenges related to private mobility companies’ long-term sustainability, which could increase such companies’ ability to enter into partnerships. Of the 13 private mobility companies in our review, representatives of 5 told us that they receive significant financial support from an automaker. In addition to a car-share company, recipients of such support included, for example, three technology companies and a bike-share company. Representatives from two of these companies told us that such support helps ensure their long-term sustainability or provides them with the flexibility to try different business models and enter into transit partnerships without worrying about each being profitable. Such investments from well-established companies may also help address some local transit agency concerns about whether some private mobility companies would be reliable partners, thereby increasing partnerships. For example, according to a recent industry report, some transit officials have questioned the long-term financial viability of the ridesourcing business model, citing high driver turnover rates and other factors as concerns. Available Local Transit Funding and Transit Ridership Levels Will Impact Transit Partnerships All 16 selected local transit agencies and most private companies (10 of 13) told us that local transit agencies’ constrained budgets will impact transit partnerships, and most transit officials agreed that this would encourage partnerships. For example, officials from one local transit agency told us that they first began researching partnerships several years ago, when they felt compelled to look for other viable alternatives to certain bus routes after a local referendum to pay for increased bus services failed. According to several transit officials, if the current decline in public transit ridership continues, this could increase partnerships. For example, local transit agencies may seek to retain their riders by offering first- and last-mile connections to make accessing transit services more convenient. Based on GAO analysis of FTA data, overall transit ridership decreased by about 1 percent between 2012 and 2016, but ridership changes varied greatly by metropolitan area. For example, since 2010, some larger metropolitan areas have experienced more significant ridership decreases, such as Los Angeles (over a 9 percent decrease) and Washington, D.C. (over a 9 percent decrease). However, ridership grew by more than 10 percent in several areas, including Seattle (24 percent increase) and Nashville (12.5 percent increase). According to recent reports, it remains unclear if the recent decline in public transit ridership, after a decade or more of growth, represents a long-term change in rider behaviors or a short-term cycle related to factors such as lower gas prices in recent years. 
Extent of Marketing and Outreach about On-Demand Services Can Impact Partnerships’ Success and Increase Access to Services Most stakeholders we interviewed agreed that sufficient marketing and outreach to target rider populations is critical for the success of new on-demand services, and this also impacts the overall success of the partnerships. Most selected local transit agencies (12 of 16) and companies (10 of 13) cited marketing as a significant factor impacting new service use. For example, officials at several local transit agencies told us that they dedicated resources for outreach to target riders to ensure these riders understood the new services being offered. One agency advertised its new on-demand taxi services for paratransit-eligible customers through phone calls to customers and residential mailings, and also encouraged the taxi companies involved to separately advertise these services. Even with outreach and marketing to target riders, some potential riders—particularly the elderly and low-income earners—may not be able to easily access some on-demand services. For example, the current ridesourcing model generally requires riders to have a smartphone and a bank card to request a ride, which could exclude some riders. According to recent reports, fewer than one-third of Americans over age 65 owned a smartphone, and only 4 percent had used a ridesourcing service as of 2016. However, according to the literature, older Americans will be a key demographic for transit providers to target in coming years, since their numbers are projected to grow significantly and some will stop driving their own vehicles in the near future. Reflecting similar concerns, officials from several local transit agencies (4 of 16) told us that it can be challenging for older residents in their communities to learn to use the smartphone apps that are needed to access some on-demand services. According to a 2016 Pew Research Center report, of those surveyed with household incomes greater than $75,000, 86 percent had heard of ridesourcing services and 26 percent had used them. For those surveyed with incomes less than $30,000, however, only 51 percent had heard of these services and 10 percent had used them. According to a recent report, those with lower incomes could particularly benefit from more on-demand services, especially since reliable access to transportation can help people acquire and keep better jobs. Several selected transit and private mobility stakeholders had efforts underway to address such access issues. For example, two local transit agencies had done targeted outreach to senior communities to educate them about using the new services, including instructions for using the smartphone apps. According to the transit officials involved, these efforts had increased the use of these on-demand services by elderly riders. In addition, one ridesourcing company offers gift certificates as an option for those without bank cards; the certificates can be purchased with cash and redeemed for rides. In another example, staff at a bike-share company said that their company already offers some options for those without bank cards. Staff at a technology company told us they have plans to offer more such payment options in the future. 
Conclusions As the sharing economy continues to grow, local transit agencies may increasingly look for opportunities to leverage emerging technologies to extend their services, address first- and last-mile and other issues, and provide additional options for riders by partnering with private mobility companies. Since the sharing economy is a relatively recent phenomenon, FTA has an opportunity to proactively facilitate and share information about ongoing transit partnership projects, including how projects are meeting federal requirements related to accessibility and equity. In addition, FTA could improve the quality of NTD data by advising transit agencies on which on-demand services qualify for NTD entry and how to accurately report about qualifying services. Without clearer instructions on whether and how data from new on-demand services should be reported into NTD, local transit agencies may remain confused, potentially resulting in inconsistent reporting. Further, without more consistent and complete data on partnership activities, including projects that were not funded through the MOD Sandbox program, FTA may lack key information needed to track progress in achieving its goals of promoting more integrated and efficient transit systems. In the absence of a clear statement from FTA about the minimum data needed from private partners for entry into NTD, some local transit agencies will likely continue to encounter challenges in obtaining needed data from partners. Finally, absent more sharing of information on partnerships by FTA, including how such partners are addressing data sharing issues, local transit agencies will be poorly positioned to navigate ongoing changes in the transit industry. Recommendations for Executive Action: We recommend that FTA take the following three actions: Gather and publicly share information on transit partnerships, including those that did not receive funding through the MOD Sandbox program, to include examples regarding how various local transit agencies complied with federal requirements—such as procurement, drug and alcohol testing, ADA, and Title VI requirements—while offering new on-demand services in partnerships. (Recommendation 1) Determine which on-demand services qualify as “public transportation” based on the statutory definition and disseminate information to clarify whether and how to report data from such services into NTD. (Recommendation 2) Gather and publicly share information on transit partnerships, including those that were not part of the MOD Sandbox program, to include: information on how the local transit agencies and their private mobility company partners are facilitating data sharing, and the minimum data needed from a private partner to facilitate NTD reporting. (Recommendation 3) Agency Comments: We provided a draft of this report to DOT for review and comment. We received written comments from DOT, which are reprinted in appendix II. DOT concurred with our three recommendations. The department stated that, in line with these recommendations, it will continue its proactive efforts related to the Mobility on Demand program, and continue to share information about public transit partnerships. DOT also provided technical comments, which we incorporated in the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact Mark Goldstein at (202) 512-2834 or GoldsteinM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Descriptions of Selected Transit Partnership Projects That GAO Reviewed From January 2017 through June 2018, LAVTA’s GoDublin Project provided first- and last-mile service for trips that begin and end within the city limits of Dublin, CA. LAVTA paid half of the fare of a ridesourcing ride, up to a maximum of $5.00. The service included a service option for paratransit-eligible riders. The overall goal of the pilot project was to assess whether rideshare programs reduce congestion and parking issues in Dublin. GoCentennial, a demonstration project involving Lyft Line, Via Mobility Services, the Denver South Transportation Management Association, and Conduent (formerly Xerox), operated from August 17, 2016 through February 17, 2017, and was intended to increase rail ridership by providing first- and last-mile Lyft Line rides (microtransit) and accessible transportation service to and from the Denver Regional Transportation District rail station located in Centennial, CO. The service included a transportation option for paratransit-eligible riders. Beginning in September 2017, WMATA’s Abilities Ride program provides riders who are eligible for WMATA’s Metro Access paratransit program the option to use on-demand taxi service for trips that originate and end in WMATA’s Maryland service area at a discounted rate. The Metro Access customer pays the first $5 of the fare; then WMATA pays up to the next $15. For a trip requiring a Wheelchair Accessible Vehicle, WMATA pays an extra $10 to the vendor for that trip. Beginning in December 2017, PSTA’s Mobility on Demand Sandbox project provides same-day, on-demand, door-to-door service to a small subset of paratransit-eligible customers in Pinellas County, FL. Beginning in February 2016, PSTA’s Direct Connect program provided first- and last-mile service, initially within two pilot zones. PSTA expanded the program to eight zones in Pinellas County in January 2017. As of April 2018, users can travel to or from 24 locations throughout Pinellas County. PSTA pays the first $5 of the ride and the customer pays the rest. The service includes a transportation option for paratransit-eligible riders. Beginning in August 2016, PSTA’s TD Late Shift program has provided service between home and work for lower-income riders from 10:00 pm through 6:00 am, when PSTA’s regular service does not operate. The service includes a transportation option for paratransit-eligible riders. MARTA partnered with Uber for a promotional partnership to provide first- and last-mile transportation in 2015, then again after a bridge collapsed on Interstate 85 on March 30, 2017. MARTA currently has informal partnerships with both Uber and Lyft in which they advertise one another’s services. Through a Mobility on Demand Sandbox project, CTA is partnering with the Chicago Department of Transportation and Divvy bike-share to integrate Divvy rentals into Ventra, CTA’s central fare payment system, which is accessible by smartphone application. CTA expects to launch the updated Ventra app in summer 2018. 
From October 2016 through June 2018, MBTA operated a pilot program with Uber and Lyft to offer on-demand paratransit service to customers who are eligible for The Ride, MBTA’s regular paratransit service. Beginning in summer 2018, MBTA will partner with local taxis on Curb’s platform to provide on-demand paratransit service to customers who are eligible for The Ride, MBTA’s regular paratransit service. From March 2016 through April 2017, KCATA partnered with Bridj, a company offering microtransit services, to offer riders services within and between two zones around downtown Kansas City, MO during weekday rush hours. The service included a transportation option for paratransit-eligible riders. From May 2017 through April 2018, KCATA partnered with local taxi companies owned by TransDev to provide subsidized on-demand service for paratransit-eligible customers. Customers who are not eligible for paratransit could also use the service, but KCATA did not subsidize the cost of the ride. Rabbit Transit has used demand-responsive service from Uber and Lyft to fill gaps during peak travel periods when the agency’s regular services are running late. King County Metro and Sound Transit will be partnering with Via to provide rides for customers traveling to and from bus and rail stations in the Seattle, WA area as a sub-recipient of the Los Angeles County Metropolitan Transportation Authority’s Mobility on Demand Sandbox partnership. Expected launch of the service is late 2018. King County Metro has dedicated four parking spaces at its Northgate Transit Center Park & Ride to free-floating car-share vehicles to increase the number of options for customers to connect to transit, including customers who do not own a personal vehicle. The car-share spaces are also intended to enable more customers to ride transit by increasing parking turnover at this overcrowded lot. King County Metro will be operating a pilot program to provide on-demand first- and last-mile service to customers within a 2-mile radius of the Eastgate, Northgate, and South Renton park & ride lots. The service also will include a transportation option for paratransit-eligible riders. Expected launch of the service is August 2018. Through its Mobility on Demand Sandbox project, LA Metro will be partnering with Via to provide first- and last-mile rides to and from locations where customers can board an LA Metro bus or train, in an effort to increase transit ridership. LA Metro will provide vehicles that can accommodate customers who need additional assistance or customers in wheelchairs, as well as a call center for customers without smartphones. LA Metro aims to launch the service in September 2018. LA Metro partnered with Uber for two weeks in May 2016 to provide rides to and from Metro Expo Line stations. Customers received a $10 discount on these Uber rides. Through the Adaptive Mobility with Reliability and Efficiency (AMORE) Mobility on Demand Sandbox project, the Regional Transportation Authority (RTA) of Pima County, AZ will offer riders the ability to request services from Ruby Ride, a ridesourcing company, via a phone app, for first- and last-mile transportation. According to an RTA official, this project seeks to provide more services to outlying areas, which previously had either infrequent fixed routes or no service. 
The RTA and Metropia—a technology company involved in the project—also plan to offer riders incentives, such as discounted services, to encourage them to change their travel behavior, for example by shifting their travel times to when roads are less busy. The phone app will also include a carpool matching service that will dynamically recommend potential driver/rider combinations to customers. RTA plans to launch this service in fall 2018. The service will include a transportation option for paratransit-eligible riders. From January through June 2018, GoTriangle partnered with TransLoc, a technology company, to provide first- and last-mile Go OnDemand shuttle service (microtransit) in Research Triangle Park and surrounding areas. Riders were able to hail GoTriangle’s shuttle service from their phone or online using the TransLoc Rider app. From June 2017 through June 2019, the Greater Dayton Regional Transit Authority (Greater Dayton RTA) is partnering with Lyft and two other providers to provide on-demand rides from designated RTA Connect stops in underserved areas of the Greater Dayton service area to a transfer point where riders can access fixed-route bus service. The on-demand service has replaced fixed-route bus service that was eliminated due to low ridership. Capital Metro partnered with Via Transportation, Inc. (Via) to provide first- and last-mile on-demand microtransit service from June 2017 through June 2018 to an area of Austin with few fixed-route options. Riders were able to book rides with Via, whose service has no fixed routes or fixed schedules. The buses used for the project were able to accommodate two wheelchair riders and up to nine seated occupants. Appendix II: Comments from the Department of Transportation Appendix III: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact above, Heather MacLeod (Assistant Director); Jessica Bryant-Bertail (Analyst-in-Charge); Lacey Coppage; Delwen Jones; Terence Lam; Bonnie Pignatiello Leer; Josh Ormond; Oliver Richard; and Kelly Rubin made key contributions to this report.
Why GAO Did This Study The public transit landscape is changing, as advances in technology have enabled more on-demand mobility services, such as ridesourcing and bike-share services. In response, some transit agencies have started to partner with private mobility companies with the aim of offering public transit riders more efficient and convenient options through on-demand services. FTA supports public transportation systems through a variety of federal grant programs. GAO was asked to review various issues related to such partnerships. This report examines, among other things: (1) the types of partnership projects that selected transit agencies have initiated with private mobility companies and (2) how DOT’s efforts, funding, and federal requirements may impact such partnerships. GAO interviewed DOT officials and reviewed DOT documents; interviewed 16 local transit agencies and 13 private mobility companies involved in transit partnerships; and reviewed 22 projects initiated by the selected partners, including 5 funded by the Mobility on Demand Sandbox grant program. GAO selected these partners to represent a range of service types and geographic locations; the results are non-generalizable. What GAO Found Some local transit agencies are pursuing partnerships with private mobility companies—including car-share and “ridesourcing” companies such as Lyft and Uber, which provide access to a shared vehicle “on demand”—with the aim of offering public transit riders more efficient and convenient service options. Most of the transit partnership projects that GAO selected (14 of 22) involved private partners providing on-demand transportation for the “first- and last-mile” connections to or from public transit stations (see figure). Local transit agencies use first- and last-mile connections to increase their public transit ridership. Other services provided through selected projects included filling transit service gaps in under-served areas. Most selected projects have not yet been evaluated to determine whether they achieved intended outcomes. The Department of Transportation’s (DOT) efforts, especially the Federal Transit Administration’s (FTA) initiation of the Mobility on Demand Sandbox program, have facilitated partnerships, but confusion about how to meet some requirements and how to report data poses challenges to implementing projects. In October 2016, FTA announced the selection of 11 projects to receive grants and has since provided assistance to the grantees. FTA also issued clarifications about how certain federal requirements—such as those related to the Americans with Disabilities Act of 1990 (ADA)—apply to transit partnerships. However, most selected local transit agencies (14 of 16) said that additional information beyond what FTA has already disseminated, including how agencies have successfully structured partnerships and met federal requirements, would be helpful. Collecting and disseminating such information could help FTA be better positioned to respond to changes in the transit industry that could impact its own efforts and goals, such as planning for future Mobility on Demand grants. In addition, most selected local transit agencies reported confusion related to reporting information about their on-demand projects into FTA’s National Transit Database, including confusion about which on-demand project data would qualify for entry. This confusion has led to possible reporting inconsistencies by some local transit agencies. 
Ensuring that data contained in the National Transit Database are complete and accurate is important, since, according to FTA officials, FTA uses these data (1) to apportion certain grant funds to local transit agencies based on factors such as passenger miles traveled, and (2) to track its progress in achieving goals such as promoting efficient transportation systems, among other things. What GAO Recommends GAO is making three recommendations, including that FTA disseminate information about how partnership projects met federal requirements and how data on partnerships should be entered into the National Transit Database. DOT concurred with the recommendations.
Background Federal agencies and our nation’s critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on computerized (cyber) information systems and electronic data to carry out operations and to process, maintain, and report essential information. The information systems and networks that support federal operations are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the myriad operating systems, applications, and devices comprising the systems and networks. A resilient, well-trained, and dedicated cybersecurity workforce is essential to protecting federal IT systems. Nevertheless, OMB and our prior reports have pointed out that the federal government and private industry face a persistent shortage of cybersecurity and IT professionals to implement and oversee information security protections to combat cyber threats. As we noted in our prior report, the RAND Corporation and the Partnership for Public Service have reported on a nationwide shortage of cybersecurity experts in the federal government. According to these reports, the existing shortage of cybersecurity professionals makes securing the nation’s networks more challenging and may leave federal IT systems vulnerable to malicious attacks. The persistent shortage of cyber-related workers has prompted efforts to identify and assess the federal cybersecurity workforce across agencies so that initiatives to increase the number of those workers can be targeted efficiently and accurately. The NICE Framework and OPM Coding Structure Describe Federal Cybersecurity Work Roles NIST coordinates the National Initiative for Cybersecurity Education (NICE) partnership among government, academia, and the private sector. The initiative’s goal is to improve cybersecurity education, awareness, training, and workforce development and thereby increase the number of skilled cybersecurity professionals. In August 2017, NIST revised and published the NICE Cybersecurity Workforce Framework (framework). The framework’s purpose is to help the federal government better understand the breadth of cybersecurity work by describing IT, cybersecurity, and cyber-related work roles associated with the categories and specialty areas that make up cybersecurity work. The framework organizes IT, cybersecurity, and cyber-related job functions into categories, representing high-level groupings of cybersecurity functions, and into specialty areas, representing areas of concentrated work or functions. Figure 1 identifies the seven categories and the 33 specialty areas in the NICE framework. In addition to categories and specialty areas, the NICE framework introduced the concept of work roles. Work roles provide a more detailed description of the roles and responsibilities of IT, cybersecurity, and cyber-related job functions than do the category and specialty area components of the framework. The framework defines one or more work roles within each specialty area. For example, as depicted in figure 2, the framework defines 11 work roles within the seven specialty areas of the “Securely Provision” category. In total, the framework defines 52 work roles across the 33 specialty areas. The NICE framework work roles include, among others, the Technical Support Specialist, IT Project Manager, and Software Developer. 
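The category, specialty area, and work role tiers described above form a simple three-level hierarchy. The minimal Python sketch below models a fragment of it using the “Securely Provision” category named in figure 2; entries beyond the work roles named in this report are illustrative placeholders, not an authoritative copy of the framework.

```python
# A minimal nested representation of the NICE framework hierarchy:
# category -> specialty areas -> work roles. Only a fragment is shown;
# entries not named in the surrounding text are illustrative placeholders.
nice_framework = {
    "Securely Provision": {
        "Software Development": ["Software Developer"],  # named in this report
        "Risk Management": ["Authorizing Official"],     # placeholder example
        # ...five more specialty areas, for seven in total
    },
    # ...six more categories, for seven in total
}

def work_roles(framework):
    """Flatten the hierarchy into (category, specialty area, work role) tuples."""
    for category, areas in framework.items():
        for area, roles in areas.items():
            for role in roles:
                yield (category, area, role)

for cat, area, role in work_roles(nice_framework):
    print(f"{cat} > {area} > {role}")
```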
The framework identifies such IT, cybersecurity, and cyber-related work roles as essential functions. For example, a Technical Support Specialist may have a role in identifying the occurrence of a cybersecurity event, an IT Project Manager may need to manage cybersecurity risk to systems, and a Software Developer may need to implement appropriate cybersecurity safeguards. In October 2017, OPM updated the federal cybersecurity coding structure to incorporate the work roles identified in the NICE framework. The coding structure assigned a unique 3-digit cybersecurity code to each work role, which supplanted the prior coding structure’s 2-digit codes. According to OPM, the coding of federal positions with these specific 3-digit work role codes is intended to enhance agencies’ ability to identify critical IT, cybersecurity, and cyber-related workforce needs, recruit and hire employees with needed skills, and provide appropriate training and development opportunities to cybersecurity employees. Appendix II provides a summary of the IT, cybersecurity, and cyber-related work roles and corresponding OPM codes. Federal Cybersecurity Workforce Assessment Act of 2015 Establishes Workforce Planning Requirements In 2015, Congress and the President enacted the Federal Cybersecurity Workforce Assessment Act, which required OPM, NIST, and other federal agencies to undertake a number of cybersecurity workforce-planning activities. The act required these agencies to complete the activities within specified time frames. We addressed the first six activities in our prior report, issued in June 2018, and address activities 7 through 10 in this report. Among the required cybersecurity workforce-planning activities are the following 10 that we selected for our review. 1. OPM, in coordination with NIST, was to develop a cybersecurity coding structure that aligns with the work roles identified in the NICE Cybersecurity Workforce Framework. (Due June 2016) 2. OPM was to establish procedures to implement a cybersecurity coding structure to identify all federal civilian positions that require the performance of IT, cybersecurity, or other cyber-related functions. (Due September 2016) 3. OPM was to submit a report to Congress on the progress that agencies made in identifying and assigning codes to their positions that perform IT, cybersecurity, or cyber-related functions. (Due June 2016) 4. Each federal agency was to submit a report to Congress on its baseline assessment and on the extent to which its employees who perform IT, cybersecurity, or cyber-related functions held certifications. (Due December 2016) 5. Each federal agency was to establish procedures to identify all filled and vacant IT, cybersecurity, or cyber-related positions and assign the appropriate code to each position. (Due April 2017 for civilian positions) 6. The Department of Defense (DOD) was to establish procedures to implement the cybersecurity coding structure to identify all federal noncivilian (i.e., military) positions. (Due June 2017) 7. Each agency was to complete the assignment of work role codes to its filled and vacant positions that perform IT, cybersecurity, or cyber-related functions. (Due April 2018 for civilian positions) 8. OPM was to identify critical needs across federal agencies and submit a progress report to Congress on the identification of critical needs. (Due December 2017) 9. 
OPM was to provide federal agencies with timely guidance for identifying IT, cybersecurity, or cyber-related work roles of critical need, including work roles with acute and emerging skill shortages. (The act did not specify a due date for this requirement). 10. Federal agencies were to identify their IT, cybersecurity, or cyber- related work roles of critical need in the workforce and submit a report describing these needs to OPM. (Due April 2019) Prior GAO Report Examined Agencies’ Implementation of the Initial Activities Required by the Federal Cybersecurity Workforce Assessment Act of 2015 In June 2018, we reported on federal agencies’ implementation of the first six of the 10 selected activities required by the Federal Cybersecurity Workforce Assessment Act. Specifically, we reported that, in November 2016, OPM, in coordination with NIST, had issued a cybersecurity coding structure that aligned with the NICE framework work roles (activity 1). Also, these two agencies developed procedures for assigning codes to federal civilian IT, cybersecurity, or cyber-related positions in January 2017 (activity 2). We noted that OPM had issued the cybersecurity coding structure and procedures later than the act’s deadlines because it was working with NIST to align the structure and procedures with the draft version of the NICE Cybersecurity Workforce Framework, which NIST issued later than planned. Regarding activity 3, we noted that OPM had submitted a report to Congress in July 2016 on the agencies’ progress in implementing the act’s required activities, as well as OPM’s efforts to develop a coding structure and government-wide coding procedures. We also reported that 21 of the 24 agencies had submitted baseline assessment reports identifying the extent to which their IT, cybersecurity, or cyber-related employees held professional certifications (activity 4). However, the three other agencies had not submitted such reports. In addition, four agencies did not include all reportable information in their reports, such as the extent to which personnel without certifications were ready to obtain them, or strategies for mitigating any gaps, as required by the act. We made 10 recommendations to these seven agencies to develop and submit baseline assessment reports, including all reportable information, to the congressional committees. As of February 2019, none of the seven agencies had implemented any of the 10 recommendations relating to the baseline assessment reports. Further, we reported that 23 of the 24 agencies had established procedures for assigning the appropriate work role codes to civilian positions that perform IT, cybersecurity, or cyber-related functions (activities 5 and 6 above), as required by the act. One agency had not established such procedures. Further, of the 23 agencies that had established procedures, 6 agencies did not address one or more of seven activities required by OPM in their procedures. For example, the agencies’ procedures did not include activities to review all filled and vacant positions and annotate reviewed position descriptions with the appropriate work role code. In addition, DOD had not established procedures for identifying and assigning work role codes to noncivilian (i.e., military) positions. Our June 2018 report included 20 recommendations to eight agencies to establish or update their procedures to fully address the required activities in OPM’s guidance. 
Subsequent to the report, the eight agencies implemented the 20 recommendations related to establishing or improving agencies’ coding procedures to address the required OPM activities. Specifically: The Department of Energy (Energy) established coding procedures that addressed the seven OPM required activities. The Department of Education (Education), Department of Labor (Labor), NASA, National Science Foundation (NSF), Nuclear Regulatory Commission (NRC), and United States Agency for International Development (USAID) revised their procedures to ensure that the procedures addressed OPM’s required activities. DOD established a consolidated government-wide and internal procedure for identifying and assigning work role codes to noncivilian (i.e., military) positions. Table 1 summarizes the status of agencies’ implementation of the first six selected activities required by the act as of October 2018. We initially reported on the status of these activities in our June 2018 report. Agencies Generally Categorized Positions, but Did Not Ensure the Reliability of Their Efforts Regarding the selected activity for agencies to complete the assignment of work role codes to filled and vacant positions that perform IT, cybersecurity, or cyber-related functions (activity 7), as set forth in the Federal Cybersecurity Workforce Assessment Act of 2015, the 24 agencies had generally assigned work role codes to their positions. However, several agencies had not completed assigning codes to their vacant positions. In addition, most agencies had likely miscategorized the work roles of many positions. For example, agencies had assigned a code designated for positions that do not perform IT, cybersecurity, or cyber-related functions to positions that most likely do perform these functions. As indicated in table 2, federal agencies’ efforts to assign work role codes to filled and vacant positions that performed IT, cybersecurity, or cyber-related functions were ongoing as of October 2018. Agencies Had Generally Assigned Work Role Codes to Positions, but Six Had Not Completely Coded Vacant Positions To assist agencies with meeting their requirements under the Federal Cybersecurity Workforce Assessment Act of 2015, OPM issued guidance that directed agencies to identify filled and vacant positions with IT, cybersecurity, or cyber-related functions and assign work role codes to those positions using the Federal Cybersecurity Coding Structure by April 2018. As previously mentioned, this coding structure designates a unique 3-digit code for each work role defined in the NICE framework. According to OPM’s guidance, agencies could assign up to three work role codes to each position, and should assign the code of “000” only to positions that did not perform IT, cybersecurity, or cyber-related functions. The 24 agencies generally had assigned work role codes to their filled workforce positions that performed IT, cybersecurity, or cyber-related functions. Specifically, 22 of the agencies responded to our questionnaire that, as of April 2018, they had completed assigning work role codes to those filled positions. In addition, data from the OPM Enterprise Human Resources Integration system showed that, as of May 2018, the 24 agencies had collectively assigned work role codes or a “000” code to over 99 percent of the filled positions in their entire workforce. 
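The OPM coding rules just described (at most three work role codes per position, each a 3-digit value, with “000” reserved for positions performing no IT, cybersecurity, or cyber-related functions) can be expressed as simple validation checks. The Python sketch below illustrates one way to do so; the record fields are hypothetical rather than OPM’s actual data layout.

```python
# A minimal sketch of the coding rules described above. The Position
# fields are hypothetical, not OPM's actual data layout.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Position:
    title: str
    occupational_series: str              # e.g., "2210" for IT management
    work_role_codes: List[str] = field(default_factory=list)

def coding_issues(pos: Position) -> List[str]:
    """Flag apparent departures from the OPM coding guidance."""
    issues = []
    if not pos.work_role_codes:
        issues.append("no work role code assigned")
    if len(pos.work_role_codes) > 3:
        issues.append("more than three work role codes assigned")
    if any(len(c) != 3 or not c.isdigit() for c in pos.work_role_codes):
        issues.append("code is not a 3-digit value")
    if "000" in pos.work_role_codes and len(pos.work_role_codes) > 1:
        # "000" marks a position with no cyber functions, so pairing it
        # with a substantive work role code is contradictory.
        issues.append('"000" combined with other work role codes')
    return issues

# Example: a position coded both "000" and with a substantive work role.
print(coding_issues(Position("IT Specialist", "2210", ["000", "722"])))
```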
Regarding vacant positions, 18 of the 24 agencies reported that they had identified and assigned codes to their vacant IT, cybersecurity, or cyber-related positions by April 2018. However, the remaining six agencies reported that they were not able to identify or assign codes to all of their vacant positions. For example, four agencies—DOD, EPA, GSA, and NASA—responded to our questionnaire that they did not identify and assign codes to vacant IT, cybersecurity, or cyber-related positions. DOD reported that, while some components assigned codes to vacant positions, the department did not have an enterprise-wide capability to assign codes to vacant positions and had not modified its systems to enable the use of the 3-digit work role codes for vacant positions due to time and funding constraints. EPA reported that it had assigned codes to vacant positions in April 2018, but it did not have a process for assigning codes to newly created vacant positions. GSA human resources officials said that they assigned codes to vacant positions that had been authorized and funded. However, they did not code unfunded vacant positions because they did not anticipate filling them. Agency officials noted that they instead tracked unfunded vacant positions through staffing plans. NASA human resources and Office of the Chief Information Officer officials said the agency did not identify and code vacant positions because they did not track vacant positions. Further, the remaining two agencies—Energy and Justice—stated that they could not provide data regarding the number of vacant IT, cybersecurity, or cyber-related positions that had been identified and coded. For example, Justice said that information on vacant positions was not available through its human resources system, and that it would need to send a data call to components to obtain information on the number of vacancies with an assigned work role code. However, according to management division officials, the department would need additional time to collect this information. OPM stated that it plans to issue additional guidance for tracking IT, cybersecurity, and cyber-related vacancies by January 2019. OPM officials said that agencies have focused on the assignment of codes to filled positions and that tracking vacancies is challenging because agencies vary in the way they track vacancies. By not completing their efforts to identify and code their vacant IT, cybersecurity, and cyber-related positions, the six agencies lack important information about the state of their workforces. As a result, these agencies may be limited in their ability to identify work roles of critical need and improve workforce planning. Most Agencies Had Likely Miscategorized the Work Roles of Many Positions The Federal Cybersecurity Workforce Assessment Act of 2015 required agencies to assign the appropriate work role codes to each position with cybersecurity, cyber-related, and IT functions, as defined in the NICE framework. In addition, OPM guidance required agencies to assign work role codes using the Federal Cybersecurity Coding Structure. As previously mentioned, according to OPM’s guidance, agencies could assign up to three work role codes to each position. Agencies were to assign a code of “000” only to positions that did not perform IT, cybersecurity, or cyber-related functions. Further, the Standards for Internal Control in the Federal Government states that agencies should obtain relevant data from reliable sources that are complete and consistent. 
However, the 24 agencies had likely miscategorized the work roles of many positions. For example, the 24 agencies routinely assigned work role codes to positions that were likely inconsistent with the positions’ functions. Specifically, at least 22 of the 24 agencies assigned the code “000,” which is designated for positions not performing IT, cybersecurity, or cyber-related functions, to many positions that most likely performed these functions. For example, OPM’s Enterprise Human Resources Integration data from May 2018 showed that 22 of the 24 agencies had assigned the “000” code to between 5 and 86 percent of their positions in the 2210 IT management occupational series. These positions are most likely to perform IT, cybersecurity, or cyber-related functions, as defined by the NICE framework. OPM and agency officials told us that they would expect agencies to assign a NICE work role code to these positions, with a few exceptions, such as in cases where a position’s duties did not align with a NICE work role. Table 3 identifies the number and percentage of the 2210 IT management positions that were assigned a “000” code by each of the 24 agencies, according to OPM’s Enterprise Human Resources Integration data, as of May 2018. Collectively, the agencies assigned a “000” code to about 15,779 positions, or about 19 percent of the agencies’ 2210 IT management positions. Agencies identified varying reasons why they assigned the “000” code to positions that most likely performed IT, cybersecurity, or cyber-related functions. For example, agency human resources and IT officials from 10 agencies said that they may have assigned the “000” code in error (DOD, Education, Energy, Justice, State, Department of Veterans Affairs (VA), NRC, OPM, Small Business Administration (SBA), Social Security Administration (SSA)). Agency human resources and IT officials from 13 agencies said they had not completed the process to validate the accuracy of their codes (Department of Agriculture (Agriculture), Education, Department of Health and Human Services (HHS), DHS, Department of Housing and Urban Development (HUD), Justice, Treasury, VA, EPA, GSA, NRC, SBA, SSA). Agency human resources and IT officials from seven agencies said that they assigned the “000” code to positions that did not perform cybersecurity duties for a certain percentage of their time (Commerce, Justice, Labor, Transportation, Treasury, GSA, and NASA). Agency human resources and IT officials from 12 agencies said that OPM’s guidance was not clear on whether the 2210 IT management positions should be assigned a work role code and not be assigned the “000” code (Agriculture, Energy, DHS, HUD, Interior, Labor, State, VA, EPA, GSA, NASA, and SSA). Agency human resources and IT officials from three agencies stated that they assigned the “000” code to IT positions when the positions did not align with any of the work roles described in the NICE framework (Interior, Treasury, and NRC). However, the work roles and duties described in the agencies’ position descriptions for the 2210 IT management positions that we reviewed aligned with the work roles defined in the NICE framework. For example, in examining the position descriptions that NRC officials said did not align to work roles in the NICE framework, we were able to match duties described in the position descriptions to work role tasks in the framework and identify potential work role codes for those positions. 
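The table 3 figures cited above reduce to a per-agency tally: count each agency’s 2210-series positions and the subset assigned “000.” The Python sketch below shows that aggregation on toy records; the record layout is hypothetical, not the actual Enterprise Human Resources Integration export format.

```python
# Tally, per agency, the share of 2210 IT management positions assigned
# the "000" code -- the aggregation behind the figures cited above.
# The record layout is hypothetical, not the actual EHRI export format.
from collections import defaultdict

# (agency, occupational_series, primary_work_role_code) -- toy records
records = [
    ("Agency A", "2210", "722"),
    ("Agency A", "2210", "000"),
    ("Agency B", "2210", "000"),
    ("Agency B", "0301", "000"),  # non-IT series; "000" is expected here
]

totals = defaultdict(int)
zero_coded = defaultdict(int)
for agency, series, code in records:
    if series == "2210":
        totals[agency] += 1
        if code == "000":
            zero_coded[agency] += 1

for agency in sorted(totals):
    pct = 100.0 * zero_coded[agency] / totals[agency]
    print(f'{agency}: {zero_coded[agency]} of {totals[agency]} '
          f'2210 positions coded "000" ({pct:.0f}%)')
```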
Treasury officials, similarly, said that positions in the area of cryptographic key management did not align with the NICE framework; however, these positions would likely align with the Communications Security Manager (i.e., NICE code 723) work role, which covers cryptographic key management. By assigning work role codes that are inconsistent with the IT, cybersecurity, and cyber-related functions performed by positions, the agencies in our review are diminishing the reliability of the information they will need to identify their workforce roles of critical need. Agencies Assigned Work Role Codes to Sample Positions That Were Inconsistent with Duties Described in Corresponding Position Descriptions Similar to the work role data reported in OPM’s Enterprise Human Resources Integration system, the six agencies that we selected for additional review had assigned work role codes to positions in their human resources systems that were not consistent with the duties described in their corresponding position descriptions. Of 120 randomly selected 2210 IT management positions that we reviewed at the six agencies, 63 were assigned work role codes that were inconsistent with the duties described in their position descriptions. For example, DHS assigned a Network Operations Specialist code (NICE code 441) to a position with duties associated with a Cyber Instructional Curriculum Developer (NICE code 751). State assigned a Cyber Legal Advisor code (NICE code 731) to a position with duties associated with a Program Manager (NICE code 801). Table 4 summarizes the consistency of work role coding in comparison to corresponding position description text for the random sample of positions for the six selected agencies. In addition, for 46 of 72 positions that we reviewed, the six agencies had assigned different work role codes to positions that had identical position titles and similar functions described in their corresponding position descriptions. For example, State had two positions associated with a position description that described duties associated with the IT Program Auditor (NICE code 805). Although State assigned the “805” work role code to one position, it assigned the “000” code to the other position. DOD had two positions associated with a position description that described duties associated with the Information Systems Security Manager work role (NICE code 722). However, DOD assigned the “000” code to one position and an invalid 2-digit code to the other position. The six agencies provided multiple reasons why they had assigned codes that were not consistent with the work roles and duties described in their corresponding position descriptions: DOD officials from the Office of the Chief Information Officer cited the large number of positions that perform IT, cybersecurity, or cyber-related functions and the lack of one-to-one mapping of the NICE framework work roles to positions as impediments. DHS human resources officials said that position descriptions may not have been consistent with coding because the assignment of the work role codes could be based on specific tasks that are described in separate documents (e.g., job analyses or employee performance plans) outside of the position descriptions. Information Resource Management officials at State said that their system did not require all IT positions to have a work role code. 
However, according to the officials, they had plans to create and release a business rule in September 2018 to reduce data errors and require positions in the 2210 IT management occupational series to have a work role code. EPA officials in the Office of Environmental Information and the Office of Human Resources stated that the first-line supervisor made the final determination of each position’s work role code. Officials stated that first-line supervisors may have assigned different codes for similar positions because they interpreted OPM guidance and work roles differently. GSA human resources officials said they assigned “000” to IT positions because they needed clarification and further interpretive guidance from OPM. According to the officials, once GSA received the guidance, the agency planned to conduct a review of IT positions coded “000.” In addition, GSA had assigned the code “000” if cybersecurity functions made up less than 25 percent of the duties in the position description. According to NASA officials from the Offices of the Chief Human Capital Officer and Chief Information Officer, the agency miscoded a few positions due to an administrative error that has since been corrected. In addition, NASA officials said that they assigned the “000” code to positions that did not perform cybersecurity duties for a certain percentage of time (e.g., 25 percent or more of the time). Agencies did not provide further evidence that the positions we evaluated as inconsistently coded were, in fact, coded accurately. Moreover, in reviewing 87 position descriptions provided by the six agencies—DOD, DHS, State, EPA, GSA, and NASA—in no case did we find the assignment of the “000” work role code to be consistent with the duties described. By assigning work role codes that are inconsistent with the IT, cybersecurity, and cyber-related functions performed by positions, the agencies in our review are diminishing the reliability of the information they will need to identify their workforce roles of critical need. OPM and Agencies Had Taken Steps to Identify IT, Cybersecurity, and Cyber-related Work Roles of Critical Need As of November 2018, OPM and the 24 agencies had taken steps to address the three selected activities that the Federal Cybersecurity Workforce Assessment Act of 2015 required to identify IT, cybersecurity, and cyber-related work roles of critical need. Specifically, OPM had reported on agencies’ progress in identifying critical needs (activity 8) and had provided agencies with guidance for identifying IT, cybersecurity, and cyber-related work roles of critical need (activity 9). In addition, the 24 agencies had submitted preliminary reports of their identified critical needs to OPM, but their efforts to identify critical needs were ongoing (activity 10). Table 5 presents the status of the agencies’ efforts to identify work roles of critical need, as of November 2018. Further, appendix III summarizes the status of implementation of each of the 10 selected activities required by the act. OPM Reported on Progress of Efforts and Provided Guidance for Agencies to Identify Cybersecurity Work Roles of Critical Need The Federal Cybersecurity Workforce Assessment Act of 2015 required OPM, in consultation with DHS, to identify critical needs for the IT, cybersecurity, or cyber-related workforce across federal agencies and submit a progress report to Congress on the identification of IT, cybersecurity, or cyber-related work roles of critical need by December 2017. 
The act also required OPM to provide timely guidance for identifying IT, cybersecurity, or cyber-related work roles of critical need, including those with acute and emerging skill shortages. In December 2017, OPM, in consultation with DHS, reported to Congress on the progress of federal agencies’ identification of IT, cybersecurity, and cyber-related work roles of critical need. In the report, OPM stated that it could not yet identify critical needs across all federal agencies because agencies were still in the process of assigning work role codes and identifying their critical needs. As such, OPM reported that agencies were working toward accurately completing their coding efforts by April 2018, as a foundation for assessing the workforce and identifying needed cybersecurity skills. OPM stated in the report that it would begin to identify and report IT, cybersecurity, and cyber-related work roles of critical need following the agencies’ completion of their assessments and coding of the workforce. Further, in April 2018, OPM issued a memorandum to federal agencies’ chief human capital officers that provided guidance on identifying IT, cybersecurity, and cyber-related work roles. Specifically, this guidance required agencies to report their greatest skill shortages, analyze the root cause of the shortages, and provide action plans with targets and measures for mitigating the critical skill shortages. In addition, in June 2018, to ensure that agencies were on track to meet the requirement outlined in the act to submit their critical needs by April 2019, OPM required agencies to provide a preliminary report on work roles of critical need and root causes by August 31, 2018. OPM provided agencies with a template to collect key information, such as critical needs and root causes. OPM guidance stated that these data would provide the Congress with a government-wide perspective of critical needs and insight into how to allocate future resources. Agencies Have Begun to Identify Cybersecurity Work Roles of Critical Need The act required agencies to identify IT, cybersecurity, or cyber-related work roles of critical need and submit a report to OPM substantiating these critical need designations by April 2019. OPM also required agencies to submit a preliminary report, which included agencies’ identified work roles of critical need and the associated root causes, by August 31, 2018. The 24 agencies have begun to identify critical needs and have submitted preliminary reports of critical needs to OPM. Seventeen agencies submitted their reports by the August 31, 2018 deadline, and seven submitted their reports after the deadline, in September 2018. Most agencies’ reports included the required critical needs and root causes. Specifically, all 24 agencies’ reports documented work roles of critical need, and 22 agencies’ reports included the root causes of the identified critical needs. Table 6 shows the status of the 24 agencies’ submissions of preliminary reports on cybersecurity work roles of critical need as of November 2018. The preliminary reports of critical needs for the 24 agencies showed that, as of November 2018, IT project managers, information systems security managers, and systems security analysts were among the top identified work roles of critical need at these agencies. Twelve agencies reported each of these work roles as a critical need. 
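One way to surface the most commonly cited roles across the 24 preliminary reports is a simple cross-agency tally, sketched below in Python. The report contents shown are illustrative stand-ins, not the agencies’ actual submissions.

```python
# A sketch of how work roles of critical need might be tallied across
# agencies' preliminary reports to surface the most commonly cited roles.
# The report contents are illustrative, not the agencies' actual data.
from collections import Counter

# agency -> work roles of critical need cited in its preliminary report
preliminary_reports = {
    "Agency A": ["IT Project Manager", "Systems Security Analyst"],
    "Agency B": ["IT Project Manager", "Information Systems Security Manager"],
    "Agency C": ["Information Systems Security Manager"],
}

tally = Counter(role for roles in preliminary_reports.values() for role in roles)
for role, count in tally.most_common():
    print(f"{role}: cited by {count} agencies")
```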
Agencies’ preliminary reports should provide a basis for agencies to develop strategies to address shortages and skill gaps in their IT, cybersecurity, and cyber-related workforces. For additional information on the top 12 reported work roles of critical need, see appendix IV. Conclusions As required by the Federal Cybersecurity Workforce Assessment Act of 2015, the 24 agencies had generally categorized their workforce positions that have IT, cybersecurity, or cyber-related functions; however, agencies did not ensure the work role coding was reliable. For example, six of the 24 agencies had not completed assigning codes to their vacant positions. In addition, 22 of the agencies had assigned a code designated for positions not performing IT, cybersecurity, or cyber-related functions to about 19 percent of filled IT management positions. Further, six selected agencies—DOD, DHS, State, EPA, GSA, and NASA—had assigned work role codes to positions in their human resources systems that were not consistent with the duties described in the corresponding position descriptions. Until agencies accurately categorize their positions, the agencies may not have reliable information to form a basis for effectively examining their cybersecurity workforce, improving workforce planning, and identifying their workforce roles of critical need. Although OPM met its deadlines for reporting to congressional committees on agencies’ progress in identifying critical needs, the progress report did not identify critical needs across all federal agencies because agencies were still in the process of assigning work role codes and identifying their critical needs. In addition, OPM has since provided agencies with guidance that should assist them in their efforts to identify critical needs by April 2019. Further, all of the 24 agencies have submitted preliminary reports identifying work roles of critical need to OPM. These efforts should assist these agencies in moving forward to develop strategies to address shortages and skill gaps in their IT, cybersecurity, and cyber-related workforces. Recommendations for Executive Action We are making a total of 28 recommendations to 22 agencies to take steps to complete the appropriate assignment of codes to their positions performing IT, cybersecurity, or cyber-related functions, in accordance with the requirements of the Federal Cybersecurity Workforce Assessment Act of 2015. Specifically: The Secretary of Agriculture should take steps to review the assignment of the “000” code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 1) The Secretary of Commerce should take steps to review the assignment of the “000” code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 2) The Secretary of Defense should complete the identification and coding of vacant positions in the department performing IT, cybersecurity, or cyber-related functions. (Recommendation 3) The Secretary of Defense should take steps to review the assignment of the “000” code to any positions in the department in the 2210 IT management occupational series, assign the appropriate NICE framework work role codes, and assess the accuracy of position descriptions. 
(Recommendation 4) The Secretary of Education should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 5) The Secretary of Energy should complete the identification and coding of vacant positions in the department performing IT, cybersecurity, or cyber-related functions. (Recommendation 6) The Secretary of Energy should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 7) The Secretary of Health and Human Services should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 8) The Secretary of Homeland Security should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series, assign the appropriate NICE framework work role codes, and assess the accuracy of position descriptions. (Recommendation 9) The Secretary of Housing and Urban Development should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 10) The Secretary of the Interior should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 11) The Attorney General should complete the identification and coding of vacant positions in the Department of Justice performing IT, cybersecurity, or cyber-related functions. (Recommendation 12) The Attorney General should take steps to review the assignment of the "000" code to any positions in the Department of Justice in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 13) The Secretary of Labor should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 14) The Secretary of State should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series, assign the appropriate NICE framework work role codes, and assess the accuracy of position descriptions. (Recommendation 15) The Secretary of Transportation should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 16) The Secretary of the Treasury should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 17) The Secretary of Veterans Affairs should take steps to review the assignment of the "000" code to any positions in the department in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes.
(Recommendation 18) The Administrator of the Environmental Protection Agency should complete the identification and coding of vacant positions in the agency performing IT, cybersecurity, or cyber-related functions. (Recommendation 19) The Administrator of the Environmental Protection Agency should take steps to review the assignment of the "000" code to any positions in the agency in the 2210 IT management occupational series, assign the appropriate NICE framework work role codes, and assess the accuracy of position descriptions. (Recommendation 20) The Administrator of the General Services Administration should complete the identification and coding of vacant positions at GSA performing IT, cybersecurity, or cyber-related functions. (Recommendation 21) The Administrator of the General Services Administration should take steps to review the assignment of the "000" code to any positions at GSA in the 2210 IT management occupational series, assign the appropriate NICE framework work role codes, and assess the accuracy of position descriptions. (Recommendation 22) The Administrator of the National Aeronautics and Space Administration should complete the identification and coding of vacant positions at NASA performing IT, cybersecurity, or cyber-related functions. (Recommendation 23) The Administrator of the National Aeronautics and Space Administration should take steps to review the assignment of the "000" code to any positions at NASA in the 2210 IT management occupational series, assign the appropriate NICE framework work role codes, and assess the accuracy of position descriptions. (Recommendation 24) The Chairman of the Nuclear Regulatory Commission should take steps to review the assignment of the "000" code to any positions at NRC in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 25) The Director of the Office of Personnel Management should take steps to review the assignment of the "000" code to any positions at OPM in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 26) The Administrator of the Small Business Administration should take steps to review the assignment of the "000" code to any positions at SBA in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 27) The Commissioner of the Social Security Administration should take steps to review the assignment of the "000" code to any positions at SSA in the 2210 IT management occupational series and assign the appropriate NICE framework work role codes. (Recommendation 28) Agency Comments and Our Evaluation We provided a draft of this report to the 24 CFO Act agencies and OMB for their review and comment. Of the 22 agencies to which we made recommendations, 20 agencies stated that they agreed with the recommendations directed to them; one agency partially agreed with its recommendation; and one agency agreed with one of its two recommendations but did not agree with the other. In addition, of the two agencies to which we did not make recommendations, one agency acknowledged its review of the report but did not otherwise provide comments; the other agency provided technical comments, which we incorporated into the report as appropriate. We also received technical comments from three of the agencies to which we made recommendations, and incorporated them into the report as appropriate.
Further, OMB responded that it had no comments on the report. The following 20 agencies agreed with the recommendations in our report: In comments provided via email on February 19, 2019, the Director of Strategic Planning, Policy, E-government and Audits in Agriculture's Office of the Chief Information Officer stated that the department concurred with the recommendation in our report. In written comments (reprinted in appendix V), Commerce agreed with our recommendation and stated that it would ensure the proper coding of 2210 IT management occupational series positions with the appropriate NICE framework work role codes. In written comments (reprinted in appendix VI), DOD concurred with our two recommendations. With regard to our recommendation that it complete the identification and coding of vacant positions performing IT, cybersecurity, or cyber-related functions, the department stated that its longer-term initiative is to code positions, including vacant positions, in DOD's manpower requirements systems to provide true gap analysis capabilities. Regarding our recommendation that it review the assignment of "000" codes, the department stated that it would continue efforts to remediate erroneously coded positions. In written comments (reprinted in appendix VII), Education concurred with our recommendation. The department stated that its Office of Human Resources would continue to review the 2210 IT positions and ensure the assignment of appropriate work role codes. In written comments (reprinted in appendix VIII), Energy concurred with our two recommendations. Regarding our recommendation that it complete the identification and coding of vacant IT, cybersecurity, and cyber-related positions, the department stated that it had instituted procedures to review and code vacant positions. Regarding our recommendation that it review the assignment of "000" codes, the department said that it had ensured that all 2210 IT management positions were assigned the appropriate work role codes by April 2018. However, our review of the May 2018 data from OPM's Enterprise Human Resources Integration System found that Energy had assigned the "000" code to about 16 percent of its 2210 IT management positions. Further, along with its comments on the draft report, in January 2019, the department provided a report indicating that Energy had not assigned the "000" work role code to its positions in the 2210 IT management occupational series. We plan to take follow-up steps to verify the completeness of the department's actions. In addition to the aforementioned comments, Energy provided technical comments, which we have incorporated into this report, as appropriate. In written comments (reprinted in appendix IX), HHS concurred with our recommendation and outlined steps to identify, review, and make necessary corrections to its 2210 IT management positions that were coded as "000." In written comments (reprinted in appendix X), DHS concurred with our recommendation. The department stated that personnel in its Office of the Chief Human Capital Officer had established processes for periodically reviewing cybersecurity workforce coding data and for collaborating with components to ensure positions with significant responsibilities associated with the NICE framework—including 2210 positions—were properly coded. Nevertheless, DHS expressed concern with our finding that it had miscategorized the work roles for some positions.
The department stated that its position descriptions are often written in a generalized format, and are static, baseline, point-in-time documents. The department added that several positions may align with the same position description, yet have specific duties and content captured in other human capital documents such as employee performance plans. Thus, some positions may have the same position description yet require different cybersecurity codes. While we agree that position descriptions do not detail every possible activity, according to OPM, the position descriptions should document the major duties and responsibilities of a position. However, we found that DHS did not always assign codes consistent with major duties and responsibilities described in the position descriptions. For example, the department assigned a Network Operations Specialist code to a position with major duties associated with a Cyber Instructional Curriculum Developer. The department did not provide evidence that the positions we evaluated as inconsistently coded were accurately coded. If work role codes are not consistent with position descriptions, DHS may not have reliable information to form a basis for effectively examining its cybersecurity workforce, improving workforce planning, and identifying its workforce roles of critical need. The department also provided technical comments, which we have incorporated into this report as appropriate. In comments provided via email on February 14, 2019, an audit liaison officer in HUD's Office of the Chief Human Capital Officer stated that the department agreed with our recommendation. In written comments (reprinted in appendix XI), Interior concurred with our recommendation and stated that it had taken steps to change the designation of the "000" code for the remaining personnel in the 2210 IT management occupational series. In comments provided via email on February 4, 2019, an audit liaison specialist in Justice's Management Division stated that the department concurred with the two recommendations. In written comments (reprinted in appendix XII), Labor concurred with our recommendation and stated that it had taken steps to review and code the department's 2210 IT positions using the NICE framework. In written comments (reprinted in appendix XIII), State concurred with our recommendation. The department said that it will conduct a comprehensive review of its 2210 positions and include instructions to change the coding of any such positions that have been assigned a "000" code. In addition, the department stated that it had created a new business rule in its human resources system to ensure that 2210 positions are assigned a primary work role code. In comments provided via email on December 20, 2018, an audit relations analyst in Transportation's Office of the Secretary stated that the department concurred with our findings and recommendation. In written comments (reprinted in appendix XIV), VA concurred with our recommendation and stated that the department had begun conducting a review of its cyber coding. In written comments (reprinted in appendix XV), EPA concurred with our two recommendations to the agency. With regard to our recommendation that it complete the identification and coding of vacant positions performing IT, cybersecurity, or cyber-related functions, EPA stated that it would update its standard operating procedures to include the requirement to code vacant positions during the position classification process.
Nevertheless, while including this requirement in the procedures is an important step, it is imperative that the agency implement the procedures to ensure that its vacant positions are assigned appropriate work role codes. With regard to our recommendation that the agency review the assignment of the "000" code to positions in its 2210 IT management occupational series, EPA stated that it would review all such positions and assign the appropriate NICE framework codes to any positions that were erroneously coded with the non-IT work role code. In comments provided via email on January 31, 2019, the Director of the Human Capital Policy and Programs Division stated that GSA agreed with our two recommendations. Also, in written comments (reprinted in appendix XVI), GSA stated that, once it completes the ongoing transition to a position-based human resources system, it will explore options to include vacant positions in its new system. In addition, GSA stated that it had completed an initial review of cyber codes and indicated that it would update all coding by March 2019. In written comments (reprinted in appendix XVII), NRC agreed with the findings in our draft report and said it had taken actions to address our recommendation by assigning appropriate work role codes to IT management positions previously assigned a "000" code. In written comments (reprinted in appendix XVIII), OPM concurred with our recommendation to the agency. OPM stated that its human resources and subject matter experts plan to assess the assignment of "000" codes to personnel in the 2210 IT management occupational series to help ensure accurate coding and appropriate application of the NICE framework work role codes. In written comments (reprinted in appendix XIX), SBA concurred with our recommendation. The agency stated that its Office of the Chief Information Officer, Office of Human Resources Solutions, and appropriate program offices would review the assignment of the "000" code to any 2210 IT management occupational series positions and assign the appropriate NICE framework work role codes. The agency also provided technical comments, which we have incorporated into this report as appropriate. In written comments (reprinted in appendix XX), SSA agreed with our recommendation and stated that it had taken steps to complete the assignment of codes to the remaining 2210 IT management positions. In addition, one agency partially agreed with a recommendation in our report. In comments provided via email on February 15, 2019, the Acting Director for Treasury's Office of Human Capital Strategic Management stated that the department partially concurred with our recommendation that it review the assignment of "000" codes. According to the Acting Director, the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer had issued guidance to all Treasury Bureaus to validate the coding of 2210 IT management positions. However, Treasury did not agree with our finding that positions in the area of cryptographic key management could be aligned to the NICE framework work role code for the Communications Security Manager. The official stated that the cryptographic key management functions did not completely align with any of the NICE framework work roles. We acknowledge that there may be positions that do not completely align with work roles described in the NICE framework. However, according to OPM, the framework currently covers a broad array of functions that describe the majority of IT, cybersecurity, and cyber-related work.
As noted in our report, OPM officials told us that they would expect agencies to assign a NICE work role code to 2210 IT management positions, with a few exceptions, such as in cases where a position’s duties did not align with a NICE work role code. As such, we maintain that Treasury likely miscategorized over 1,300 IT management positions by assigning a “000” code to them, designating those positions as not performing IT, cybersecurity, or cyber-related work and, thus, should review these positions and assign the appropriate work role codes. Further, one agency did not agree with one of the two recommendations directed to it. Specifically, in written comments (reproduced in appendix XXI) NASA stated that it concurred with our recommendation to review the assignment of “000” codes to 2210 IT management positions. In this regard, the agency stated that it would complete a review of the assignment of “000” codes to 2210 IT management positions and assign the appropriate NICE framework work role codes. NASA did not concur with our other recommendation to complete the identification and coding of vacant positions performing IT, cybersecurity, or cyber-related functions. The agency stated that it had met the intention of the recommendation with existing NASA processes that assign a code at the time a vacancy is identified. However, the agency’s workforce planning process is decentralized and the agency previously noted that it did not track vacancies. We maintain that the Federal Cybersecurity Workforce Assessment Act requires agencies to identify and code vacant positions and that NASA could compile necessary information from components to identify and code vacant IT, cybersecurity, and cyber-related positions. These efforts would provide important information about vacant IT, cybersecurity, and cyber-related positions across the agency to enhance NASA’s workforce planning. Thus, we continue to believe that our recommendation is warranted. In addition, of the two agencies to which we did not make recommendations, one agency—USAID—provided a letter (reprinted in appendix XXII) acknowledging its review of the report and the other agency—NSF—provided technical comments, which we have incorporated into the report as appropriate. We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the secretaries and agency heads of the departments and agencies addressed in this report, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XXIII. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) determine the extent to which federal agencies have assigned work role codes to positions performing information technology (IT), cybersecurity, or cyber-related functions, and (2) describe the steps federal agencies took to identify work roles of critical need. The scope of our review included the 24 major departments and agencies covered by the Chief Financial Officers (CFO) Act of 1990. 
To address our objectives, we reviewed the provisions of the Federal Cybersecurity Workforce Assessment Act of 2015 and assessed the workforce planning actions taken by the Office of Personnel Management (OPM) and the other 23 CFO Act agencies against the four selected activities required by the act. To evaluate the four selected activities of the act relevant to objectives 1 and 2, we reviewed the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework and OPM's cybersecurity coding structure and guidance. The guidance provided information on how agencies should identify and assign work role codes to IT, cybersecurity, and cyber-related positions. We also designed and administered a questionnaire to each of the 24 agencies regarding their efforts to identify and assign work role codes to IT, cybersecurity, or cyber-related positions, and identify work roles of critical need. In developing the questionnaire, we took steps to ensure the accuracy and reliability of responses. We pre-tested the questionnaire with OPM and Department of Homeland Security (DHS) officials to ensure that the questions were clear, comprehensive, and unbiased, and to minimize the burden the questionnaire placed on respondents. We also asked the chief information officer and the chief human capital officer of each agency to certify that they reviewed and validated the responses to the questionnaires. We administered the questionnaire between June and October 2018. We received completed questionnaires from each of the 24 agencies, for a response rate of 100 percent. We examined the questionnaire results and performed computer analyses to identify missing data, inconsistencies, and other indications of error, and addressed such issues as necessary, including through follow-up communications with the 24 agencies. We reviewed and analyzed the agencies' responses to the questionnaire in comparison to the act's requirements and OPM's and NICE's guidance. We also obtained, reviewed, and analyzed supporting documentation of questionnaire responses, such as reports of cybersecurity employment code data, to assess whether agencies assigned work role codes in accordance with the activities in OPM's coding guidance, by April 2018. Further, to analyze how federal agencies assigned work role codes to positions performing IT, cybersecurity, or cyber-related functions, we obtained IT, cybersecurity, or cyber-related workforce coding data for the 24 agencies from OPM's Enterprise Human Resources Integration system. To assess the reliability of coding data from OPM's system, we reviewed these data to determine their completeness, and asked officials responsible for entering and reviewing the work role coding data a series of questions about the accuracy and reliability of the data. In addition, we examined the Enterprise Human Resources Integration IT, cybersecurity, or cyber-related coding data to determine the number of positions in the 2210 IT management occupational series to which the 24 agencies had assigned the "000" code as of May 2018. We reviewed positions from the 2210 IT management occupational series because those positions are likely to perform IT, cybersecurity, or cyber-related functions. In the report, we note some challenges with the reliability of these data and are careful to present our data in line with these limitations. We then identified a subset of the 24 agencies and performed an additional review of these agencies' work role coding efforts.
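To illustrate the kind of "000"-code tally described above, the following is a minimal sketch in Python of how the share of 2210 IT management positions coded "000" could be computed from a coding-data extract. It is not GAO's actual analysis code; the file name and the column names (agency, occ_series, cyber_code) are assumptions for illustration.

```python
import pandas as pd

# Hypothetical extract of EHRI coding data; the file and column names
# are illustrative, not the actual system's schema.
ehri = pd.read_csv("ehri_coding_extract_may2018.csv",
                   dtype={"occ_series": str, "cyber_code": str})

# Restrict to the 2210 IT management occupational series.
it_mgmt = ehri[ehri["occ_series"] == "2210"]

# For each agency, count 2210 positions and those assigned the "000"
# (non-IT) work role code, then compute the share coded "000".
summary = (it_mgmt.assign(is_zero=it_mgmt["cyber_code"].eq("000"))
                  .groupby("agency")["is_zero"]
                  .agg(total="count", zero_coded="sum"))
summary["pct_zero"] = 100 * summary["zero_coded"] / summary["total"]
print(summary.sort_values("pct_zero", ascending=False))
```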
We selected this subset of agencies based on their total cybersecurity spending for fiscal year 2016, as reported by the Office of Management and Budget (OMB) in its Federal Information Security Modernization Act annual report. We sorted the 24 agencies' IT cybersecurity spending from highest to lowest and then divided them into three equal groups of high, medium, and low. We then selected the top two agencies from each group. Based on these factors, we selected six agencies: (1) the Department of Defense (DOD), (2) DHS, (3) the Department of State (State), (4) the National Aeronautics and Space Administration (NASA), (5) the Environmental Protection Agency (EPA), and (6) the General Services Administration (GSA). We performed this additional review by evaluating the six selected agencies' coding processes against their established procedures and OPM requirements. We also obtained and reviewed coding data that included the assigned work role codes for civilian employees from each agency's human resources system. To assess the reliability of coding data from the six selected agencies' systems, we reviewed related documentation such as the agencies' coding procedures, processing guides, personnel bulletins, and system screen shots. We also conducted electronic testing for missing data, duplicate data, or obvious errors. In addition, we asked officials responsible for entering and reviewing the work role coding data a series of questions about the accuracy and reliability of the data. For any anomalies in the data, we followed up with the six selected agencies' offices of the chief information officer and chief human capital officer to either understand or correct those anomalies. Further, we assessed the reliability of the data in terms of the extent to which codes were completely assigned and reasonably accurate. In the report, we note some challenges with the reliability of these data and are careful to present our data in line with these limitations. We randomly selected a sample of 20 positions from each of the six selected agencies (120 total positions) within the 2210 IT management occupational series. We reviewed positions from the 2210 IT management series because those positions are likely to perform IT, cybersecurity, or cyber-related functions. For the selected positions, we requested position descriptions and reviewed whether the position work role codes in the coding data were consistent with the corresponding position description text. We also selected a second nonstatistical sample of 12 positions for each of the six agencies (72 total positions) from the 2210 IT management occupational series, based on pairs of positions that had identical position titles, occupational series, and sub-agencies, but for which the agencies had assigned different work role codes. An analyst reviewed the work role coding data and compared them with the duties described in the position descriptions to determine whether the assigned codes were consistent with those duties. A second analyst verified whether the position's work role code was consistent with the position description. A third analyst adjudicated cases in which the first and second analysts' evaluations did not match.
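A sketch, under the same caveat, of the agency-selection and sampling steps described above. The input tables and column names are hypothetical; the spending figures would come from OMB's Federal Information Security Modernization Act annual report.

```python
import pandas as pd

# Hypothetical FY2016 cybersecurity spending by agency; columns: agency, spending.
spending = pd.read_csv("fy2016_cyber_spending.csv")

# Sort from highest to lowest spending and split into three equal groups.
ranked = spending.sort_values("spending", ascending=False).reset_index(drop=True)
ranked["group"] = pd.cut(ranked.index, bins=3, labels=["high", "medium", "low"])

# Take the top two agencies from each spending group (six agencies in total).
selected = ranked.groupby("group", observed=True).head(2)

# From each selected agency, randomly sample 20 positions in the 2210 series.
positions = pd.read_csv("positions_2210.csv")  # columns: agency, position_id, ...
sample = (positions[positions["agency"].isin(selected["agency"])]
          .groupby("agency", group_keys=False)
          .apply(lambda g: g.sample(n=20, random_state=1)))
```

Because the frame is pre-sorted by spending, taking the first two rows of each group yields the top two spenders in that group.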
Lastly, to evaluate agencies' actions to address the last three activities of the act related to the identification of cybersecurity work roles of critical need, we obtained, reviewed, and analyzed OPM's guidance for identifying critical needs and its progress report to Congress by comparing them to the act's requirements. We reviewed agencies' responses to our questionnaire regarding whether they had developed methodologies or project plans for identifying critical needs. We also reviewed any available documentation on the 24 agencies' progress in identifying critical needs, such as project plans, timelines, and preliminary reports. In addition, OPM required agencies to submit a preliminary report on work roles of critical need by August 31, 2018. We obtained copies of the preliminary reports from the 24 agencies. We evaluated agencies' efforts to meet the deadline, as well as their efforts to meet OPM's requirements for documenting work roles of critical need and determining the root causes of those needs. To supplement our analysis, we interviewed agency officials from human resources and chief information officer offices at the 24 agencies regarding their progress in coding and identifying cybersecurity work roles of critical need. We conducted this performance audit from February 2018 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Office of Personnel Management Information Technology, Cybersecurity, and Cyber-related Work Role Codes
Appendix III: Summary of 24 Chief Financial Officers Act Agencies' Implementation of the Federal Cybersecurity Workforce Assessment Act of 2015, as of November 2018
Appendix IV: Top 12 Work Roles of Critical Need as Identified by the 24 Chief Financial Officers (CFO) Act Agencies in Their Preliminary Reports of Critical Need
Appendix V: Comments from the Department of Commerce
Appendix VI: Comments from the Department of Defense
Appendix VII: Comments from the Department of Education
Appendix VIII: Comments from the Department of Energy
Appendix IX: Comments from the Department of Health and Human Services
Appendix X: Comments from the Department of Homeland Security
Appendix XI: Comments from the Department of the Interior
Appendix XII: Comments from the Department of Labor
Appendix XIII: Comments from the Department of State
Appendix XIV: Comments from the Department of Veterans Affairs
Appendix XV: Comments from the Environmental Protection Agency
Appendix XVI: Comments from the General Services Administration
Appendix XVII: Comments from the Nuclear Regulatory Commission
Appendix XVIII: Comments from the Office of Personnel Management
Appendix XIX: Comments from the Small Business Administration
Appendix XX: Comments from the Social Security Administration
Appendix XXI: Comments from the National Aeronautics and Space Administration
Appendix XXII: Comments from the United States Agency for International Development
Appendix XXIII: GAO Contact and Staff Acknowledgments
GAO Contact Staff Acknowledgments In addition to the individual named above, Tammi Kalugdan (Assistant Director), Merry Woo (Analyst-in-Charge), Carlos (Steven) Aguilar, Alexander Anderegg, Christina Bixby, Carl Barden, Chris Businsky, Virginia Chanley, Cynthia Grant, Paris Hawkins, Lee Hinga, James (Andrew) Howard, Assia Khadri, David Plocher, Steven Putansu, and Priscilla Smith made significant contributions to this report.
Why GAO Did This Study A key component of mitigating and responding to cyber threats is having a qualified, well-trained cybersecurity workforce. The Federal Cybersecurity Workforce Assessment Act of 2015 (the act) requires OPM and federal agencies to take several actions related to cybersecurity workforce planning. These actions include categorizing all IT, cybersecurity, and cyber-related positions using OPM personnel codes for specific work roles, and identifying critical staffing needs. The act contains a provision for GAO to analyze and monitor agencies' workforce planning. GAO's objectives were to (1) determine the extent to which federal agencies have assigned work roles for positions performing IT, cybersecurity, or cyber-related functions and (2) describe the steps federal agencies took to identify work roles of critical need. GAO administered a questionnaire to 24 agencies, analyzed coding data from personnel systems, and examined preliminary reports on critical needs. GAO selected six of the 24 agencies based on cybersecurity spending levels to determine the accuracy of codes assigned to a random sample of IT positions. GAO also interviewed relevant OPM and agency officials. What GAO Found The 24 reviewed federal agencies generally assigned work roles to filled and vacant positions that performed information technology (IT), cybersecurity, or cyber-related functions as required by the act. However, six of the 24 agencies reported that they had not completed assigning the associated work role codes to their vacant positions, although they were required to do so by April 2018. In addition, most agencies had likely miscategorized the work roles of many positions. Specifically, 22 of the 24 agencies assigned a "non-IT" work role code to 15,779 (about 19 percent) of their IT positions within the 2210 occupational series. Further, the six agencies that GAO selected for additional review had assigned work role codes that were not consistent with the work roles and duties described in corresponding position descriptions for 63 of 120 positions within the 2210 occupational series that GAO examined (see figure). Human resource and IT officials from the 24 agencies generally reported that they had not completely or accurately categorized work roles for IT positions within the 2210 occupational series, in part, because they may have assigned the associated codes in error or had not completed validating the accuracy of the assigned codes. By assigning work roles that are inconsistent with the IT, cybersecurity, and cyber-related functions of positions, the agencies are diminishing the reliability of the information they need to improve workforce planning. The act also required agencies to identify work roles of critical need by April 2019. To aid agencies with identifying their critical needs, the Office of Personnel Management (OPM) developed guidance and required agencies to provide a preliminary report by August 2018. The 24 agencies have begun to identify critical needs and submitted a preliminary report to OPM that identified information systems security manager, IT project manager, and systems security analyst as the top three work roles of critical need. Nevertheless, until agencies accurately categorize their positions, their ability to effectively identify critical staffing needs will be impaired. What GAO Recommends GAO is making 28 recommendations to 22 agencies to review and assign the appropriate codes to their IT, cybersecurity, and cyber-related positions.
Of the 22 agencies to which GAO made recommendations, 20 agreed with the recommendations, one partially agreed, and one did not agree with one of two recommendations. GAO continues to believe that all of the recommendations are warranted.
Background The Defense Laboratories The National Defense Authorization Act (NDAA) for Fiscal Year 1995 authorized the Secretary of Defense to conduct personnel demonstration projects at the department's laboratories designated as Science and Technology Reinvention Laboratories. The demonstration projects were established to give laboratory managers more authority and flexibility in managing their civilian personnel. These projects function as the vehicles through which the department can determine whether changes in personnel management concepts, policies, or procedures, such as flexible pay or hiring authorities, would result in improved performance and would contribute to improved DOD or federal personnel management. Table 1 presents a list of the 15 defense laboratories included in the scope of our review. The Defense Laboratories Office—within the Office of the Under Secretary of Defense for Research and Engineering (Research and Engineering)—carries out a range of core functions related to the defense labs, including the aggregation of data, analysis of capabilities, and alignment of activities, as well as advocacy for the defense labs. The National Defense Authorization Act for Fiscal Year 2017 gave authority to conduct and evaluate defense laboratory personnel demonstration projects to the Under Secretary of Defense for Research and Engineering and, accordingly, the Defense Laboratories Office. The Defense Laboratories Office supports the Research and Engineering mission by helping to ensure comprehensive department-level insight into the activities and capabilities of the defense laboratories. The Laboratory Quality Enhancement Program (LQEP) was chartered on April 15, 1994, to improve productivity and effectiveness of the defense laboratories through changes in, among other things, personnel management and contracting processes. The NDAA for Fiscal Year 2017 established a new organizational structure for the program, adding two new panels while also specifying that two previously existing subpanels on personnel and infrastructure would continue to meet. The NDAA for Fiscal Year 2017 requires the department to maintain an LQEP Panel on Personnel, Workforce Development, and Talent Management—one of the four panels established by a February 14, 2018 charter signed by the Under Secretary of Defense for Research and Engineering. The purpose of the panel is to help the LQEP achieve the following goals: (1) review and make recommendations to the Secretary of Defense on current policies and new initiatives affecting the defense laboratories; (2) support implementation of quality enhancement initiatives; and (3) conduct assessments and data analysis. The LQEP Panel on Personnel, Workforce Development, and Talent Management includes representatives from each of the defense laboratories, as well as from the Army, Navy, Air Force, appropriate defense agencies, and the Office of the Under Secretary of Defense for Research and Engineering. Hiring Authorities A hiring authority is the law, executive order, or regulation that allows an agency to hire a person into the federal civil service. Among other roles, hiring authorities determine the rules (or a subset of rules within a broader set) that agencies must follow throughout the hiring process. These rules may include whether a vacancy must be announced, who is eligible to apply, how the applicant will be assessed, whether veterans preference applies, and how long the employee may stay in federal service. Hiring authorities may be government-wide or granted to specific agencies.
Government-wide (Title 5) Hiring Authorities Competitive (Delegated) Examining. This is the traditional method for making appointments to competitive service positions, and it requires adherence to Title 5 competitive examining requirements. The competitive examining process requires agencies to notify the public that the government will accept applications for a job, screen applications against minimum qualification standards, apply selection priorities such as veterans preference, and assess applicants' relative competencies or knowledge, skills, and abilities against job-related criteria to identify the most qualified applicants. Federal agencies typically assess applicants by rating and ranking them based on their experience, training, and education. Figure 1 depicts the Office of Personnel Management's (OPM) 80-day standard roadmap for hiring under the competitive process. Government-wide (Title 5) Direct Hire Authority. This authority allows agencies to appoint candidates to positions without regard to certain requirements in Title 5 of the United States Code, with OPM approval. A direct hire authority expedites hiring by eliminating specific hiring rules. In order for an agency to use direct hire, OPM must determine that there is either a severe shortage of candidates or a critical hiring need for a position or group of positions. When using the direct hire authority, agencies must adhere to certain public notice requirements. The Pathways Programs. These programs were created to ensure that the federal government continues to compete effectively for students and recent graduates. The current Pathways Programs consist of the Internship Program, the Recent Graduates Program, and the Presidential Management Fellows Program. Initial hiring is made in the excepted service, but it may lead to conversion to permanent positions in the competitive service. Veterans-Related Hiring Authorities. These include both the Veterans Recruitment Appointment Authority and the Veterans Employment Opportunities Act authority. The Veterans Recruitment Appointment authority allows for certain exceptions from the competitive examining process. Specifically, agencies may appoint eligible veterans without competition under limited circumstances or otherwise through excepted service hiring procedures. The Veterans Employment Opportunities Act authority is a competitive service appointment authority that allows eligible veterans to apply for positions announced under merit promotion procedures when an agency accepts applications from outside of its own workforce. DOD-specific Hiring Authorities The Defense Laboratory Direct Hire Authorities. These include the following four types of direct hire authorities granted to the defense laboratories by Congress for hiring STEM personnel: (1) direct hire authority for candidates with advanced degrees; (2) direct hire authority for candidates with bachelor's degrees; (3) direct hire authority for veterans; and (4) direct hire authority for students currently enrolled in a graduate or undergraduate STEM program. The purpose of these direct hire authorities is to provide a streamlined and accelerated hiring process to allow the labs to successfully compete with private industry and academia for high-quality scientific, engineering, and technician talent. The Expedited Hiring Authority for Acquisition Personnel.
This authority permits the Secretary of Defense to designate any category of positions in the acquisition workforce as positions for which there exists a shortage of candidates or there is a critical hiring need, and to utilize specific authorities to recruit and appoint qualified persons directly to positions so designated. The Science, Mathematics, and Research for Transformation (SMART) Scholarship-for-Service Program. This program was established pursuant to 10 U.S.C. § 2192a, as amended, and is funded through the National Defense Education Program. The SMART scholarship-for-service program provides academic funding in exchange for completing a period of full-time employment with DOD upon graduation. The Defense Laboratories Have Used Direct Hire Authorities and Other Incentives to Help Hiring Efforts, but Officials Reported Challenges in Hiring Highly Qualified Candidates The labs have used the defense laboratory-specific direct hire authorities more than any other category of agency-specific or government-wide hiring authority. Defense laboratory officials we surveyed reported that these direct hire authorities had been the most helpful to the labs' efforts to hire highly qualified candidates for STEM positions, and also reported that the use of certain incentives had been helpful in this effort. However, even with access to the authorities, these defense laboratory officials identified challenges associated with the hiring process that affected their ability to hire highly qualified candidates. Defense Laboratories Used the Direct Hire Authorities Most Frequently for Hiring STEM Candidates, and the Use of These Authorities Has Increased since 2015 For fiscal years 2015 through 2017, the defense laboratories used laboratory-specific direct hire authorities more often than any other category of hiring authorities when hiring STEM personnel. Moreover, the defense laboratories' use of these direct hire authorities increased each year from fiscal year 2015 through fiscal year 2017. Of the 11,562 STEM hiring actions in fiscal years 2015 through 2017, approximately 46 percent were completed using one of the defense laboratory direct hire authorities. The second and third most used hiring authorities were internal hiring actions and the expedited hiring authority for acquisition personnel, each of which comprised approximately 12 percent of the hiring actions during the time period. Table 2 provides information on the overall number of hiring actions by hiring authority for fiscal years 2015 through 2017. The laboratory-specific direct hire authorities include the direct hire authorities for candidates with advanced degrees, candidates with bachelor's degrees, and candidates who are veterans—authorities that were granted by Congress in prior legislation. Among the defense laboratory direct hire authorities, the direct hire authority for candidates with bachelor's degrees was used for 55 percent of all direct hires, for a total of 2,920 hiring actions for fiscal years 2015 through 2017. During the same time frame, the labs used the direct hire authority for candidates with advanced degrees for approximately 36 percent (1,919 hiring actions) of all direct hires, and the direct hire authority for veteran candidates for approximately 9 percent (455 hiring actions).
In addition, for less than 1 percent of the direct hires, either the labs used another category of laboratory-specific direct hire authority or we were unable to determine which type of direct hire authority was used during those same three fiscal years. See table 3 for information on the defense labs' use of the defense laboratory-specific direct hire authorities for fiscal years 2015 through 2017. In fiscal year 2017, the defense labs used the defense laboratory direct hire authorities for 54 percent of STEM hiring actions completed, representing an increase of approximately 16 percentage points relative to fiscal year 2015, when 38 percent were hired under defense lab direct hire authorities. For additional information on the labs' use of hiring authorities in fiscal years 2015 through 2017, as well as hiring authority data by laboratory, see appendix IV. One laboratory official explained that the increased use of the direct hire authorities could be a result of the NDAA for Fiscal Year 2016, which increased the laboratories' allowable use of the direct hire authority for candidates with bachelor's degrees from 3 percent to 6 percent, and use of the direct hire authority for veterans from 1 percent to 3 percent, of the total number of scientific and engineering positions at each laboratory at the end of the preceding fiscal year. The direct hire authority for candidates with bachelor's degrees was used most often—for 1,151 out of 1,835 hiring actions—as compared with the other direct hire authorities in fiscal year 2017. See table 4 for more information on the laboratories' use of all hiring authorities in fiscal year 2017. In addition, table 5 provides more information on the labs' use of the direct hire authorities in fiscal year 2017.
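To make the cap change concrete, here is a small, hypothetical calculation of how the NDAA for Fiscal Year 2016 percentages translate into allowable direct hires. The 1,500-position laboratory is invented for illustration, and the framing follows the description above, with the caps applied to the number of scientific and engineering positions at the end of the preceding fiscal year.

```python
# Hypothetical lab with 1,500 scientific and engineering (S&E) positions
# at the end of the preceding fiscal year.
se_positions = 1500

# Caps before and after the NDAA for Fiscal Year 2016, as a share of the
# lab's S&E positions at the end of the preceding fiscal year.
caps = {
    "bachelor's-degree candidates": (0.03, 0.06),  # raised from 3% to 6%
    "veteran candidates":           (0.01, 0.03),  # raised from 1% to 3%
}

for authority, (before, after) in caps.items():
    print(f"{authority}: cap rises from {int(se_positions * before)} "
          f"to {int(se_positions * after)} direct hires")
```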
Three of 16 survey respondents stated that the delegated examining unit authority had helped them to hire quickly, while 12 of 16 stated that the use of this authority had hindered their ability to hire quickly. During our interviews with laboratory officials, hiring officials and supervisors described the defense laboratory direct hire authorities as being helpful in their hiring efforts. For example, hiring officials from one lab stated that the direct hire authorities were the easiest authorities to use, and that since their lab had started using them, job offer acceptance rates had increased and their workload related to hiring had decreased. A hiring official from another laboratory stated that the use of direct hire authorities had allowed their lab to be more competitive with the private sector in hiring, which is useful due to the high demand for employees in research fields. A supervisor from one lab stated that the use of direct hire authorities was not only faster than the competitive hiring process, but also allowed supervisors a greater ability to get to know candidates early in the process to determine whether they met the needs of a position. In comparison, hiring managers we interviewed at one laboratory stated that the Pathways Program is not an effective means of hiring students because the program requires a competitive announcement. Supervisors also stated that the application process for Pathways can be cumbersome and confusing for applicants and may cause quality applicants to be screened out early. Defense laboratory officials who responded to our survey also stated that the Pathways process takes too long and that quality applicants may drop out due to its length. Defense laboratory hiring data also indicated that use of the defense laboratory direct hire authorities resulted in faster than median hiring times. As shown in table 6, the median time to hire for STEM positions at the defense laboratories in fiscal year 2017 was 88 days. The median time to hire when using the defense laboratories' direct hire authorities, Pathways, or the SMART program authority was faster than the median for all categories combined. The median time to hire when using the competitive hiring process was approximately twice as long as when using the labs' direct hire authorities. Our full analysis of defense laboratory hiring data, including the time to hire by hiring authority category, for fiscal years 2015 through 2017 can be found in appendix V. Defense laboratory officials also cited the use of incentives as helpful in hiring highly qualified candidates, as shown in figure 4. According to our survey results, the defense laboratories' flexibility in pay setting under their demonstration project authority was generally considered to be the most helpful incentive, with 13 of 16 survey respondents stating that this incentive had very much helped them to hire highly qualified candidates. During interviews, laboratory officials described the use of these incentives as being particularly helpful if a candidate is considering multiple job offers because the incentives can help make the lab's offer more competitive with offers from other employers. Multiple hiring officials stated that they would generally not include such incentives in an initial offer, but that if the candidate did not accept that offer, they would consider increasing the salary or offering a bonus.
A hiring official from one lab stated that his lab had not offered many recruitment bonuses in recent years because its acceptance rate had been sufficiently high without the use of that incentive. Many of the recently hired lab employees whom we interviewed also cited incentives, including bonuses and student loan repayment, as factoring into their decisions to accept the employment offers for their current positions. For example, one recently hired employee stated that the lab's student loan repayment program was a significant factor in his decision to accept employment at the lab rather than with private industry. Recently hired employees also cited less tangible benefits of working at the labs, including the work environment, job stability, and type of work performed, as key factors in their decisions to accept their current positions. One newly hired employee stated that, while she could earn more money in a private-sector job, the defense laboratory position would afford her the freedom to pursue the type of work she is currently doing, and that this was a major consideration in her decision to accept it. Another newly hired employee similarly stated that he was interested in the type of research conducted at the lab where he now works, and that he was attracted to the opportunity to contribute to the national defense, while also taking advantage of benefits that support the pursuit of higher education. Defense Laboratory Officials We Surveyed Identified Challenges That Affect Their Ability to Hire Highly Qualified Candidates Defense laboratory officials we surveyed reported that, although the available hiring authorities and incentives are helpful, they experience a range of challenges to their ability to hire highly qualified candidates, as shown in figure 5, listed in order from the most to the least frequently cited. In addition, figure 6 shows the extent to which officials reported selected top challenges that hindered their respective labs' abilities to hire highly qualified candidates. Defense laboratory officials described how the hiring challenges identified in our survey affect their ability to hire high-quality candidates. Specifically, these challenges are as follows: Losing quality candidates to the private sector: Fifteen of 16 survey respondents stated that this was a challenge, and 12 of the 15 stated that this challenge had somewhat or very much hindered their lab's ability to hire highly qualified candidates for STEM positions since October 2015. Hiring officials and supervisors we interviewed stated that private-sector employers can make on-the-spot job offers to candidates at college career fairs or other recruiting events, whereas the labs are unable to make a firm job offer until later in the hiring process. Government-wide hiring freeze: Fifteen of 16 survey respondents identified this as a challenge, with 13 of those reporting that it had either somewhat or very much hindered their lab's ability to hire highly qualified candidates for STEM positions since October 2015. Multiple hiring officials and supervisors we interviewed stated that they had lost candidates whom they were in the process of hiring because the candidates had accepted other offers due to the delays created by the hiring freeze. In addition, some officials stated that, although the freeze had been lifted, their labs' hiring efforts were still affected by backlogs created by the freeze, or were adapting to new processes that were implemented as a result of the freeze.
Delays with the processing of security clearances: Fifteen of 16 survey respondents cited this as a challenge; 12 of the 15 stated that this challenge had somewhat or very much hindered their lab's ability to hire highly qualified candidates for STEM positions since October 2015. A supervisor from one lab stated that he was in the process of trying to hire two employees whose hiring actions had been delayed due to the security clearance process. The supervisor stated that he had been told it could potentially take an additional 6 months to 1 year to complete the process, and that he believed this might cause the candidates to seek other employment opportunities. In other cases, hiring officials stated that employees may be able to begin work prior to obtaining a clearance, but that they may be limited in the job duties they can perform while waiting for their clearance to be granted. The government-wide personnel security clearance process was added to GAO's High Risk List in 2018, based on our prior work that identified, among other issues, a significant backlog of background investigations and delays in the timely processing of security clearances.

Inability to extend a firm job offer until a final transcript is received: Fourteen of 16 survey respondents stated that this was a challenge, with 10 of the officials responding that it had somewhat or very much hindered their lab's ability to hire highly qualified candidates. One hiring official stated that top candidates will often receive 5 to 10 job offers prior to graduation, and that his lab's offer may be the only one of those characterized as tentative. Multiple officials noted that career fairs often occur several months prior to graduation, so the lab may have to wait several months before extending a firm offer to a candidate it has identified.

Delays with processing personnel actions by the external human resources office: Thirteen of 16 survey respondents stated that this presented a challenge, and 9 of the 13 stated that this challenge had somewhat or very much hindered their lab's ability to hire highly qualified candidates for STEM positions since October 2015. Multiple hiring officials stated that employees at their human resource offices may not understand either the technical nature of the positions being filled at the lab or the lab's unique hiring authorities, and that this lack of knowledge could create delays. Other officials noted that their servicing human resource offices seemed to be inflexible regarding certain paperwork requirements. For example, officials at one lab stated that their human resource office requires candidates' resumes to be formatted in a particular way, and that they have been required to ask candidates to make formatting changes to their resumes. An official at another lab stated that the lab has faced similar challenges with regard to the formatting of transcripts and has had to request clarifying documentation from the university. In both cases, the officials described these requirements as embarrassing and as a source of delay in the hiring process. Further, both a supervisor and a newly hired employee we interviewed noted that it is difficult to learn the status of an application when it is being processed by the human resource office.
Overall length of the hiring process: Twelve of 16 survey respondents cited this as a challenge; 11 of the 12 stated that this challenge had somewhat or very much hindered their lab's ability to hire highly qualified candidates for STEM positions since October 2015. Hiring officials and supervisors we interviewed stated that their lab had lost candidates due to the length of the hiring process. One supervisor we interviewed stated that he has encountered candidates who very much wanted to work at his lab but had had to pursue other opportunities because they could not afford to wait to be hired by the lab. Multiple newly hired employees we interviewed described the process as slow or lengthy, but described reasons why they were willing to wait. For example, some employees were already working at their lab in a contractor or post-doctoral fellowship position, and accordingly they were able to continue in these positions while completing the hiring process for the permanent positions they now hold. One employee stated that if the process had gone on any longer, he likely would have accepted another offer he had received, while another employee stated that he knew of at least two post-doctoral fellows at his lab who chose not to continue in the hiring process for a permanent position at the lab due to the length of the hiring process.

DOD and the Defense Labs Track Hiring Data, but the Defense Laboratories Office Has Not Obtained and Monitored These Data or Evaluated the Effectiveness of Hiring at the Laboratories

The department and the defense laboratories track hiring data that can be used to evaluate some aspects of the individual labs' hiring efforts, but the Defense Laboratories Office has not routinely obtained or monitored these data or evaluated the effectiveness of hiring, including the use of hiring authorities, across the defense laboratories as a whole. Laboratory hiring data are captured at the department level in the Defense Civilian Personnel Data System (DCPDS)—the department's system of record for personnel data. In addition, the individual defense laboratories track hiring data, including the type of hiring authority used and certain milestone dates that can be used to measure the length of the hiring process, known as time to hire. According to OPM guidance and our prior work, time to hire is a measure that may provide insight into the effectiveness of the hiring process, and federal agencies are required to report time to hire for certain types of hiring actions to OPM.

Defense laboratory officials stated that, from their perspectives, the time-to-hire metric does not sufficiently reflect the effectiveness of the use of specific authorities, particularly when using the most commonly tracked milestones—from the initiation of a request for personnel action to an employee's entrance-on-duty date. For example, officials stated that when a direct hire authority is used to hire a candidate who is completing the final year of his or her educational program, the lab may identify and provide a tentative offer to this candidate several months prior to graduation, consistent with private-sector recruitment methods. In this case, officials stated that the length of time between the initiation of the request for personnel action and the candidate's entrance-on-duty date, following his or her graduation, could span a period of several months.
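To illustrate the measurement issue officials describe, the following minimal sketch uses hypothetical milestone dates, not actual DCPDS records, to show how a single hiring action can yield very different time-to-hire values depending on the end-point chosen.

```python
# Illustrative only: hypothetical milestone dates for a student hired under a
# direct hire authority several months before graduation. These are not
# actual DCPDS fields or records.
from datetime import date

def days_between(start: date, end: date) -> int:
    """Number of days between two hiring-process milestones."""
    return (end - start).days

rpa_initiated = date(2016, 11, 1)     # request for personnel action initiated
tentative_offer = date(2016, 11, 15)  # tentative offer made at a career fair
entrance_on_duty = date(2017, 6, 12)  # candidate reports after graduating

# Commonly tracked measure: RPA initiation to entrance on duty.
print(days_between(rpa_initiated, entrance_on_duty))  # 223 days

# Alternative end-point some labs use: RPA initiation to tentative offer.
print(days_between(rpa_initiated, tentative_offer))   # 14 days
```

Under the commonly tracked milestones, the action above appears to take months, even though in this hypothetical the lab extended its offer within 2 weeks.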
According to defense laboratory officials, the total number of days for this hiring action gives the appearance that the use of the hiring authority was not efficient in this case; however, officials stated that it was nonetheless effective from the supervisor's perspective, because the authority enabled the lab to recruit a highly qualified candidate in a manner more competitive with the private sector.

Further, time-to-hire data, as reflected by the milestone dates that are currently tracked across the defense laboratories, may not reflect a candidate's perception of the length of the hiring process. More specifically, a candidate may consider the hiring process to be completed upon receiving a job offer (either tentative or final), which could occur weeks or months before the candidate's entrance-on-duty date, the commonly used end-point for measuring time to hire. According to officials, the length of time from when the offer is extended to entrance on duty can be affected by a candidate's individual situation and preferences, such as the need to complete an educational program or fulfill family or professional responsibilities prior to beginning work in the new position. In other cases, certain steps of the hiring process, such as completing the initial paperwork or obtaining management approval, may occur after a candidate has been engaged but prior to the initiation of a request for personnel action—the commonly used start-point for measuring time to hire. In this situation, the candidate's perception of the length of the hiring process may be longer than what is reflected by the time-to-hire data.

For the reasons described above, some defense laboratories measure time to hire using milestones that they have determined more appropriately reflect the effectiveness of their hiring efforts. For example, officials from one lab stated that they have sought to measure the length of the hiring process that occurs prior to the request for personnel action, while officials from some labs stated that they measure time to hire using the tentative offer date as an end-point. In addition, some laboratories informally collect other types of data that they use in an effort to evaluate their hiring efforts, such as the reasons why candidates decline a job offer or feedback on the hiring process from newly hired employees.

However, officials from the Defense Laboratories Office stated that their office has not conducted any review of the effectiveness of defense laboratory hiring, including the use of hiring authorities, across the labs. The National Defense Authorization Act for Fiscal Year 2017 gave authority to conduct and evaluate defense laboratory personnel demonstration projects to the Office of the Under Secretary of Defense for Research and Engineering, under which the Defense Laboratories Office resides. Defense Laboratories Office officials stated that the office has not evaluated the effectiveness of defense laboratory hiring because it does not have access to defense laboratory hiring data, has not routinely requested these data from the labs or at the department level in order to monitor them, and has not developed performance measures to evaluate the labs' hiring. As noted, laboratory hiring data are captured at the department level in DCPDS and in a variety of service- and laboratory-specific systems and tools.
However, the Defense Laboratories Office does not have access to these data and, according to one official, the office would not have access to defense laboratory hiring data unless officials specifically requested them from the labs or from the Defense Manpower Data Center, which maintains DCPDS. According to the official, the Defense Laboratories Office has not routinely requested such data in the past, in part because its role did not require evaluation of such data. In addition, the Defense Laboratories Office has not developed performance measures to evaluate the effectiveness of hiring across the defense laboratories or the labs' use of hiring authorities. An official from the Defense Laboratories Office stated that the office may begin to oversee the effectiveness of the defense laboratories' hiring efforts and, in doing so, may consider establishing performance measures to be used consistently across the labs, which could include time-to-hire or other measures. However, as of March 2018, the office had not established such measures for use across the defense laboratories nor provided documentation of any planned efforts.

Standards for Internal Control in the Federal Government states that management should design appropriate types of control activities to achieve the entity's objectives, including top-level reviews of actual performance and the comparison of actual performance with planned or expected results. Further, consistent with the principles embodied in the GPRA Modernization Act of 2010, establishing a cohesive strategy that includes measurable outcomes can provide agencies with a clear direction for implementation of activities in multi-agency cross-cutting efforts. We have previously reported that agencies are better equipped to address management and performance challenges when managers effectively use performance information for decision making.

Without routinely obtaining and monitoring defense laboratory hiring data and developing performance measures, the Defense Laboratories Office cannot provide effective oversight of hiring, including the use of hiring authorities, at the defense laboratories. Specifically, without performance measures for evaluating the effectiveness of the defense laboratories' hiring, and more specifically the use of hiring authorities, the department lacks reasonable assurance that these authorities—in particular, those granted by Congress to the defense laboratories—are resulting in improved hiring outcomes. In addition, without evaluating the effectiveness of the defense laboratories' hiring efforts, the department cannot understand any challenges experienced by the labs or determine appropriate strategies for mitigating these challenges. As a result, the department and defense laboratories may be unable to demonstrate that they are using their authorities and flexibilities effectively, or that such authorities and flexibilities should be maintained or expanded for future use.

DOD Does Not Have Clear Time Frames for Approving and Implementing New Hiring Authorities for the Defense Laboratories

DOD does not have clear time frames for its process for approving and implementing new hiring authorities for the defense laboratories.
Section 1105 of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act (NDAA) for Fiscal Year 2015 established a direct hire authority for students enrolled in a scientific, technical, engineering, or mathematics course of study at institutions of higher education on a temporary or term basis. Officials from the Defense Laboratories Office stated that the labs were unable to use the authority because the department's process for allowing the laboratories to use the hiring authority—the publication of a Federal Register notice—took longer than anticipated. On June 28, 2017—2½ years after the authority was granted in the NDAA for Fiscal Year 2015—the department published a Federal Register notice allowing the defense laboratories to use the direct hire authority for students. DOD officials stated that the department has typically published a Federal Register notice whenever the defense laboratories are granted a new hiring authority in legislation—for example, when an NDAA is issued, or when certain modifications to the demonstration projects are made. At the time, the Defense Civilian Personnel Advisory Service—through its personnel policymaking role for the department—required that the Federal Register notice process be used to implement any hiring authorities granted to the defense labs by Congress in legislation. These procedures were published in DOD Instruction 1400.37. DOD officials identified coordination issues that occurred across the relevant offices during the approval process for the Federal Register notice as the cause of the delay.

Changes to DOD organizational structures further complicated the process of implementing new hiring authorities for the defense laboratories. Specifically, in late 2016 a provision in the NDAA for Fiscal Year 2017 shifted the authority to conduct and evaluate defense laboratory personnel demonstration projects from the Office of the Under Secretary of Defense for Personnel and Readiness to the Office of the Under Secretary of Defense for Research and Engineering. Within the Office of the Under Secretary of Defense for Research and Engineering, the Defense Laboratories Office has been tasked with the responsibility for matters related to the defense laboratories. According to the Director of the Defense Laboratories Office, informal discussions about the transition began shortly after the NDAA for Fiscal Year 2017 was passed in late 2016. According to that official, despite the shift in oversight responsibility, coordination between the offices of the Under Secretaries for Research and Engineering and for Personnel and Readiness is required on issues related to civilian personnel, including defense laboratory Federal Register notices. Although a formal process for coordination did not exist at the start of our review, officials from the Defense Laboratories Office stated that representatives from the offices had met approximately five times since December 2016 and were taking steps to establish a coordination process for implementing new authorities. According to officials from the Defense Laboratories Office, during those meetings as well as during other, less formal interactions, officials have taken steps to formalize the roles and responsibilities of the relevant offices.
According to officials from the Defense Laboratories Office, as of May 2018 the office was drafting a memorandum to formalize the roles and responsibilities of the Defense Laboratories Office and the Office of the Under Secretary of Defense for Personnel and Readiness with respect to the Federal Register notice approval process; however, officials did not provide a completion date. The Defense Laboratories Office established and documented its own Federal Register approval process in spring 2017 and updated it in early 2018. The memorandum would further describe the roles and responsibilities of the Office of the Under Secretary of Defense for Research and Engineering and the Deputy Assistant Secretary of Defense for Civilian Personnel Policy in carrying out the updated process. According to officials, this is the process the office will use moving forward for coordination and approval of any future Federal Register notices. On March 6, 2018, the Defense Laboratories Office published a Federal Register notice that rescinds the earlier instruction published by the Defense Civilian Personnel Advisory Service of the Office of the Under Secretary of Defense for Personnel and Readiness. By rescinding that instruction—including the earlier process for approving requests from the labs and Federal Register notices—the Defense Laboratories Office can, according to officials, publish its own process and guidance. In a 2016 presentation to the Joint Acquisition/Human Resources Summit on the defense laboratories, the Chair of the Laboratory Quality Enhancement Program Personnel Subpanel stated that a renewed and streamlined approval process would be beneficial to the creation of new authorities, among other things.

Although Defense Laboratories Office officials provided a flowchart of the office's updated Federal Register approval process, this process did not include time frames for the specific stages of coordination. Officials stated that they cannot arbitrarily assign time frames or deadlines for a review process because any time frames would be contingent on each office's competing priorities, and other tasks could push review of a Federal Register notice lower in the queue. Our prior work has found that other federal agencies identify milestones, significant events, or stages in the agency-specific rulemaking process, and track data associated with these milestones. That work also found that, despite variability across federal agencies in the length of time taken by the federal rulemaking process, scheduling and budgeting for rulemaking are useful tools for officials to manage regulation development and control the resources needed to complete a rule.

Standards for Internal Control in the Federal Government further establishes that management should design control activities to achieve objectives and respond to risks. Further, management should also establish an organizational structure, assign responsibility, and delegate authority to achieve the entity's objectives. Moreover, documentation is a necessary part of an effective internal control system. The level and nature of documentation may vary based on the size and complexity of the organization and its processes. The standards also underscore that specific terms should be fully and clearly set forth so that they can be easily understood.
Our prior work on interagency collaboration has also found that overarching plans can help agencies overcome differences in missions, cultures, and ways of doing business, and can help agencies better align their activities, processes, and resources to collaborate effectively to accomplish a commonly defined outcome. Without establishing and documenting clear time frames for its departmental coordination process related to the approval and implementation of new hiring authorities, the department cannot be certain that it is acting in the most efficient or effective manner possible. Moreover, the defense laboratories may not promptly benefit from the use of congressionally granted hiring authorities, relying instead on other existing authorities. Doing so could, according to officials, have the unintended consequence of complicating the hiring process, increasing hiring times, or resulting in the loss of highly qualified candidates.

Conclusions

The future of the department's technological capabilities depends, in large part, on its investment in its people—the scientists and engineers who perform research, development, and engineering. To that end, Congress has granted the defense laboratories specific hiring authorities meant to encourage experimentation and innovation in their approaches to building and strengthening their workforces. The defense laboratories have used most of these authorities as a part of their overall hiring efforts. However, without obtaining and monitoring hiring data and developing performance measures, the Defense Laboratories Office may not be in a position to provide effective oversight of the defense laboratories' hiring, including the use of hiring authorities, or to evaluate the effectiveness of specific hiring authorities. Moreover, the absence of clear time frames to facilitate timely decision-making and implementation of any new hiring authorities may impede the laboratories' ability to make use of future authorities when authorized by Congress. Until the department addresses these issues, it lacks reasonable assurance that the defense laboratories are taking the most effective approach toward hiring a workforce that is critical to the military's technological superiority and ability to address existing and emerging threats.

Recommendations for Executive Action

We are making three recommendations to DOD.

The Secretary of Defense should ensure that the Defense Laboratories Office routinely obtain and monitor defense laboratory hiring data to improve the oversight of the defense laboratories' use of hiring authorities. (Recommendation 1)

The Secretary of Defense should ensure that the Defense Laboratories Office develop performance measures to evaluate the effectiveness of the defense laboratories' use of hiring authorities as part of the labs' overall hiring to better inform future decision making about hiring efforts and policies. (Recommendation 2)

The Secretary of Defense should ensure that the Defense Laboratories Office, in collaboration with the Under Secretary of Defense for Personnel and Readiness and the Laboratory Quality Enhancement Panel's Personnel Subpanel, establish and document time frames for its coordination process to direct efforts across the relevant offices and help ensure the timely approval and implementation of hiring authorities. (Recommendation 3)

Agency Comments

We provided a draft of this report to DOD for review and comment.
In its written comments, reproduced in appendix VI, DOD concurred with our recommendations, citing steps the department has begun and plans to take to improve oversight and coordination of the defense laboratories' hiring efforts. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties, including the Defense Laboratories Office and defense laboratories. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact Brenda Farrell at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

Appendix I: Department of Defense Science, Technology, Engineering, and Mathematics (STEM) Occupations

The term "STEM" refers to the fields of science, technology, engineering, and mathematics. The following figure identifies the Department of Defense's broad categories of STEM occupations, as well as the specific occupational series within each category.

Appendix II: Copy of GAO Questionnaire Administered to the Defense Laboratory Officials

Appendix III: Objectives, Scope, and Methodology

This report examines (1) the defense laboratories' use of existing hiring authorities and officials' views on the benefits of authorities and incentives and the challenges in hiring; (2) the extent to which the Department of Defense (DOD) evaluates the effectiveness of hiring, including hiring authorities, at the defense laboratories; and (3) the extent to which DOD has time frames for approving and implementing new hiring authorities.

To address these objectives, we included in the scope of our review science, technology, engineering, and mathematics (STEM) hiring at the 15 defense laboratories designated as Science and Technology Reinvention Laboratories (STRL) within the Army, Navy, and Air Force that were implemented at the time of our review. We included 9 Army laboratories: Armament Research, Development, and Engineering Center; Army Research Laboratory; Aviation and Missile Research, Development, and Engineering Center; Communications-Electronics Research, Development, and Engineering Center; Edgewood Chemical and Biological Center; Engineer Research and Development Center; Medical Research and Materiel Command; Natick Soldier Research, Development, and Engineering Center; and Tank Automotive Research, Development, and Engineering Center. We included 5 Navy laboratories: Naval Air Systems Command Warfare Centers, Weapons Division and Aircraft Division; Naval Research Laboratory; Naval Sea Systems Command Warfare Centers, Naval Surface and Undersea Warfare Centers; Office of Naval Research; and Space and Naval Warfare Systems Command, Space and Naval Warfare Systems Center, Atlantic and Pacific. We included 1 Air Force laboratory: Air Force Research Laboratory. We excluded 2 additional defense laboratories within the Army—the Army Research Institute and the Space and Missile Defense Command—because these defense laboratories were in the process of being implemented at the time of our review.
For our first objective, we obtained and analyzed documentation, including past National Defense Authorization Acts (fiscal years 1995 through 2017), guidance related to government-wide hiring authorities, and Federal Register notices on existing hiring authorities used by the defense laboratories to hire STEM personnel. We obtained data that were coordinated by the Defense Manpower Data Center and prepared by the Defense Civilian Personnel Advisory Service's Planning and Accountability Directorate. These data included, among other things, hiring process milestone dates and the type of hiring authority used for each civilian hire at the defense laboratories for fiscal years 2015 through 2017. We selected these years because they were the three most recent years for which hiring data were available, and because doing so would allow us to identify any trends in the use of hiring authorities or the length of time taken to hire. The data we obtained were extracted from DCPDS using the Corporate Management Information System.

We refined the data to include only those hiring actions that were made by the 15 defense laboratories included within the scope of our review. In addition, we excluded hiring actions that used a 700-series nature of action code, which denotes actions that relate to position changes, extensions, and other changes that we determined should not be included in our analysis. We included actions that used nature of action codes in the 100-series (appointments) and 500-series (conversions to appointments). For the purpose of calculating time to hire, we also excluded records with missing dates and those for which the time-to-hire calculation resulted in a negative number (that is, the record's request for personnel action initiation date occurred after the enter-on-duty date). Specifically, we excluded 92 actions for which no request for personnel action initiation date was recorded and 205 actions for which the date occurred after the enter-on-duty date, for a total of 2.57 percent of all hiring actions. We included in our calculation 7 actions for which the request for personnel action initiation date was the same date as the enter-on-duty date, resulting in a time to hire of zero days.

To determine the extent to which the defense laboratories use existing hiring authorities, based on the department's data, we analyzed the current appointment authority codes identified for individual hiring actions. Current appointment authority codes are designated by the Office of Personnel Management and are used to identify the law, executive order, rule, regulation, or other basis that authorizes an employee's most recent conversion or accession action. Based on our initial review of the data, we determined that, in some cases, more than one distinct current appointment authority code could be used to indicate the use of a certain hiring authority. Alternatively, a single current appointment authority code could in some cases indicate more than one type of authority. In these cases, the details of the specific type of hiring authority used for the hiring action can be recorded in the description field associated with the current appointment authority code field. For this reason, in order to determine the type of hiring authority used, it was necessary to analyze the description fields for the current appointment authority code when certain codes were used.
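A minimal sketch of how the refinement and categorization rules above could be applied follows. It is illustrative only: the column names are hypothetical stand-ins rather than the actual DCPDS field names, and the appointment authority codes shown are placeholders.

```python
# Illustrative sketch only: column names and codes are hypothetical
# stand-ins, not actual DCPDS fields. Nature of action codes are assumed
# to be stored as strings (e.g., "100", "570").
import pandas as pd

def refine_actions(actions: pd.DataFrame) -> pd.DataFrame:
    """Apply the inclusion and exclusion rules described above."""
    # Keep appointments (100-series) and conversions to appointments
    # (500-series); this drops 700-series position changes and extensions.
    df = actions[actions["nature_of_action_code"].str[0].isin(["1", "5"])].copy()

    # Time to hire: request for personnel action initiation to entry on duty.
    df["time_to_hire"] = (df["entrance_on_duty_date"]
                          - df["rpa_initiation_date"]).dt.days

    # Drop records with a missing initiation date or a negative duration
    # (initiation recorded after entry on duty); zero-day actions are kept.
    return df[df["time_to_hire"].notna() & (df["time_to_hire"] >= 0)]

def authority_category(code: str, description: str) -> str:
    """Map an appointment authority code to a hiring authority category,
    falling back to the description field when a code is ambiguous."""
    unambiguous = {"AAA": "competitive", "BBB": "lab direct hire"}  # placeholders
    if code in unambiguous:
        return unambiguous[code]
    if "direct hire" in description.lower():
        return "lab direct hire"
    return "unknown"  # ambiguous records were reviewed by two analysts
```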
Two analysts independently reviewed each description and identified the appropriate hiring authority. Following this process, the two analysts compared their work and resolved any instances in which the results of their analyses differed. A data analyst used the results to produce counts of the number of times various categories of hiring authorities were used, as well as the average time to hire for each hiring authority category. For those instances where the analysts could not identify a hiring authority on the basis of the three-digit codes or the description fields, the hiring actions were assigned to an "unknown" category. We note that the "unknown" category included 591 hiring actions, or approximately 5 percent of the total data for fiscal years 2015 through 2017. In addition, within the laboratory-specific direct hire authority category, if a determination could not be made about the specific type of laboratory-specific direct hire authority used, the hiring action was captured in the "direct hire authority, unspecified" category because the action was clearly marked as one of the laboratory-specific direct hire authorities but the type of authority (for example, direct hire for veterans) was unclear. Of the 5,303 hiring actions identified as a laboratory-specific direct hire authority, 0.1 percent fell into the unspecified category.

Based on these steps, along with discussions with officials from the Defense Civilian Personnel Advisory Service and the Defense Manpower Data Center, reviews of additional documentation provided to support the data file, and interviews with officials from 13 of the laboratories about their data entry and tracking, we determined that these data were sufficiently reliable for the purposes of reporting the frequency with which the labs used specific hiring authorities and calculating the time it takes the labs to hire, or time to hire, for fiscal years 2015 through 2017.

To describe officials' views of hiring authorities and other incentives, we conducted a survey of officials at each of the defense laboratories on (1) their perceptions of the various hiring authorities and incentives, (2) whether those authorities and incentives have helped or hindered hiring efforts, (3) the extent to which they experienced barriers to using hiring authorities, and (4) any challenges during the hiring process, among other things. We administered the survey to the official at each defense laboratory who was identified as the Laboratory Quality Enhancement Program Personnel, Workforce Development, and Talent Management Panel point of contact, because we determined that this individual would be the most knowledgeable about his or her lab's hiring process and use of hiring authorities. One laboratory—the Space and Naval Warfare Systems Command Centers—had two designated points of contact, one for each of its command centers (Atlantic and Pacific). Because each contact would be knowledgeable about the hiring processes of his or her respective command center, we chose to include both command centers in our survey. As a result, we included a total of 16 laboratory officials in our survey. We drafted our questionnaire based on the information obtained from our initial interviews with department, service, and laboratory personnel.
We conducted pretests to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We conducted five pretests that included representatives from each of the three services, as well as from corporate research laboratories and from research, development, and engineering centers. We conducted the pretests—with the assistance of a GAO survey specialist—by telephone and made changes to the content and format of the questionnaire after each pretest, based on the feedback we received. Key questions from the questionnaire used for this study are presented in appendix II.

We sent a survey notification email to each laboratory's identified point of contact on July 6, 2017. On July 10, 2017, we sent the questionnaire by email as a Microsoft Word attachment that respondents could return electronically after marking checkboxes or entering responses into open answer boxes. One week later, we sent a reminder email, attaching an additional copy of the questionnaire, to everyone who had not responded. We sent a second reminder email and copy of the questionnaire to those who had not responded 2 weeks following the initial distribution of the questionnaire. We received questionnaires from all 16 participants by August 4, 2017, for a 100 percent response rate. Between July 26 and October 5, 2017, we conducted additional follow-up with 11 of the respondents via email to resolve missing or problematic responses.

Because we collected data from every lab, there was no sampling error. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses were processed and analyzed, or the types of people who do not respond can influence the accuracy of the survey results. We took steps in the development of the survey, the data collection, and the data analysis to minimize these non-sampling errors and help ensure the accuracy of the answers that were obtained. For example, a survey specialist designed the questionnaire, in collaboration with analysts having subject matter expertise. Then, as noted earlier, the draft questionnaire was pretested to ensure that questions were relevant, clearly stated, and easy to comprehend. The questionnaire was also reviewed by internal subject matter experts and an additional survey specialist. Data were electronically extracted from the Microsoft Word questionnaires into a comma-delimited file that was then imported into a statistical program for quantitative analyses and into Excel for qualitative analyses. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error, and we addressed such issues as necessary. Quantitative data analyses were conducted by a survey specialist using statistical software. An independent data analyst checked the statistical computer programs for accuracy.

To obtain information on department- and service-level involvement in and perspectives on defense laboratory hiring, we interviewed officials at the Defense Civilian Personnel Advisory Service, Defense Laboratories Office, Army Office of the Assistant G-1 for Civilian Personnel, and Navy Office of Civilian Human Resources.
In addition, we interviewed hiring officials, first-line supervisors, and newly hired employees from a nongeneralizable sample of six defense laboratories or subordinate-level entities within a laboratory (for example, a division or directorate) to obtain their perspectives on the hiring process. We selected the six laboratories based on the following two criteria: (1) two laboratories from each of the three services, and (2) a mix of both corporate research laboratories and research and engineering centers. In addition, because some hiring activities can occur at subordinate levels within a laboratory—such as a division or directorate—we included at least one subordinate-level entity for each service. In total, we selected: Army Research Laboratory Sensors and Electron Devices directorate; Aviation and Missile Research, Development, and Engineering Center (Army); Naval Research Laboratory; Naval Air Warfare Center Weapons Division; Air Force Research Laboratory Information directorate; and Air Force Research Laboratory Space Vehicles directorate.

For each lab, we requested to interview the official(s) most knowledgeable about the lab's hiring process, supervisors who had recently hired, and newly hired employees. We initially requested to interview one group each of supervisors and newly hired employees. Following our first round of interviews at one laboratory, we requested to interview two groups each of supervisors and newly hired employees. Subsequent to this request, due to scheduling constraints, we were able to conduct only one supervisor interview at one lab and only one newly hired employee interview at a second lab. The views obtained from these officials, supervisors, and recent hires are not generalizable and are presented solely for illustrative purposes.

For our second and third objectives, we reviewed guidance and policies for collecting and analyzing laboratory personnel data related to the implementation and use of hiring authorities by these labs. We interviewed DOD, military service, and defense laboratory officials to discuss and review their hiring processes and procedures for STEM personnel, the use of existing hiring authorities, and efforts to document and evaluate time-to-hire metrics. We also met with DOD officials from the Office of the Under Secretary of Defense for Personnel and Readiness and the Office of the Under Secretary of Defense for Research and Engineering to discuss processes and procedures for implementing new hiring authorities granted by Congress. We evaluated their efforts to determine whether they met federal internal control standards, including that management should design appropriate types of control activities to achieve the entity's objectives, including top-level reviews of actual performance, and should establish an organizational structure, assign responsibilities, and delegate authority to achieve an organization's objectives.

We conducted this performance audit from November 2016 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix IV: The Department of Defense Laboratories' Use of Hiring Authorities for Fiscal Years 2015, 2016, and 2017

We analyzed three years of Department of Defense hiring data obtained from the Defense Civilian Personnel Data System to identify the defense laboratories' use of hiring authorities. We found that the defense laboratories completed a total of 11,562 STEM hiring actions in fiscal years 2015 through 2017 and used the defense laboratory direct hire authorities most often when hiring STEM personnel. Table 7 provides information on the laboratories' use of hiring actions by hiring authority for fiscal years 2015, 2016, and 2017. Table 8 provides a breakdown of the individual labs' use of hiring authorities in fiscal years 2015 through 2017.

Appendix V: Defense Laboratory Time to Hire Data by Hiring Authority Category for Fiscal Years 2015, 2016, and 2017

We analyzed three years of DOD hiring data to identify the time to hire associated with various types of hiring authorities used for science, technology, engineering, and mathematics (STEM) occupations at the defense laboratories. Tables 9, 10, 11, and 12 below show the frequency of actions for each hiring authority category and the average, minimum, maximum, median, 25th percentile, and 75th percentile of the number of days to hire for each category in fiscal years 2015 through 2017 and for all three years combined.

Appendix VI: Comments from the Department of Defense

Appendix VII: Contact and Staff Acknowledgments

GAO Contact:

Staff Acknowledgments: In addition to the contact named above, Vincent Balloon (Assistant Director), Isabel Band, Vincent Buquicchio, Joseph Cook, Charles Culverwell, Serena Epstein, Christopher Falcone, Robert Goldenkoff, Cynthia Grant, Chelsa Gurkin, Amie Lesser, Oliver Richard, Michael Silver, John Van Schaik, Jennifer Weber, and Cheryl Weissman made key contributions to this report.
Why GAO Did This Study

DOD's defense labs help sustain, among other things, U.S. technological superiority and the delivery of technical capabilities to the warfighter. Over time Congress has granted unique flexibilities—such as the ability to hire qualified candidates who meet certain criteria using direct hire authorities—to the defense labs to expedite the hiring process and facilitate efforts to compete with the private sector. Senate Report 114-255 included a provision for GAO to examine the labs' hiring structures and effective use of hiring authorities. This report examines (1) the defense labs' use of existing hiring authorities and officials' views on the benefits of authorities and challenges of hiring; (2) the extent to which DOD evaluates the effectiveness of hiring, including hiring authorities, at the defense labs; and (3) the extent to which DOD has time frames for approving and implementing new hiring authorities. GAO analyzed DOD hiring policies and data; conducted a survey of 16 defense lab officials involved in policy-making; interviewed DOD and service officials; and conducted nongeneralizable interviews with groups of officials, supervisors, and new hires from 6 labs—2 from each of the 3 military services, selected based on the labs' mission.

What GAO Found

The Department of Defense's (DOD) laboratories (defense labs) have used the laboratory-specific direct hire authorities more than any other category of agency-specific or government-wide hiring authority for science, technology, engineering, and mathematics personnel. As shown in the figure below, in fiscal years 2015 through 2017 the labs made 5,303 of 11,562 total hires, or 46 percent, using these direct hire authorities. Lab officials, however, identified challenges to hiring highly qualified candidates, such as delays in processing security clearances, despite the use of hiring authorities such as direct hire.

[Figure: defense labs' hires by hiring authority category, fiscal years 2015 through 2017. Source: GAO analysis of Department of Defense data. | GAO-18-417. Notes: "Other" includes all other defense laboratory-specific direct hire authorities used; "all other" includes the remaining five categories of hiring authorities; percentages may not sum to total due to rounding.]

DOD and the defense labs track hiring data, but the Defense Laboratories Office (DLO) has not obtained or monitored these data or evaluated the effectiveness of the labs' hiring, including the use of hiring authorities. While existing lab data can be used to show the length of the hiring process, effectiveness is not currently evaluated. According to lab officials, timeliness data do not sufficiently reflect the effectiveness of the authorities and may not reflect a candidate's perception of the length of the hiring process. Further, the DLO has not developed performance measures to evaluate the effectiveness of hiring across the defense laboratories. Without routinely obtaining and monitoring hiring data and developing performance measures, DOD lacks reasonable assurance that the labs' hiring and use of hiring authorities—in particular, those granted by Congress to the labs—result in improved hiring outcomes.

DOD does not have clear time frames for approving and implementing new hiring authorities. The defense labs were unable to use a direct hire authority granted by Congress in fiscal year 2015 because it took DOD 2½ years to publish a Federal Register notice—the process used to implement new hiring authorities for the labs—for that authority.
DOD officials identified coordination issues associated with the process as the cause of the delay and stated that DOD is taking steps to improve coordination between the offices responsible for oversight of the labs and for personnel policy, including meeting to formalize roles and responsibilities and developing a new approval process. However, DLO's new Federal Register approval process does not include time frames for specific stages of coordination. Without clear time frames for its departmental coordination efforts related to the approval and implementation of new hiring authorities, officials cannot be certain they are taking action in a timely manner.

What GAO Recommends

GAO recommends that DOD (1) routinely obtain and monitor defense lab hiring data to improve oversight; (2) develop performance measures for evaluating the effectiveness of hiring; and (3) establish time frames to guide hiring authority approval and implementation. DOD concurred with the recommendations.
Background

This section describes (1) the National Nuclear Security Administration's (NNSA) weapons design and production sites; (2) the framework for managing life extension programs (LEPs), known as the Phase 6.X process, and NNSA's program execution instruction; and (3) NNSA's technology development and assessment process.

NNSA Weapons Design and Production Sites

NNSA oversees three national security laboratories—Lawrence Livermore in California, Los Alamos in New Mexico, and Sandia in New Mexico and California. Lawrence Livermore and Los Alamos are the design laboratories for the nuclear components of a weapon, while Sandia works with both to design nonnuclear components and serves as the system integrator. Los Alamos led the original design of the W78 warhead, but Lawrence Livermore is leading current efforts to design the replacement warhead. NNSA also oversees four nuclear weapons production plants—the Pantex Plant in Texas, the Y-12 National Security Complex in Tennessee, the Kansas City National Security Campus in Missouri, and the Savannah River Site in South Carolina. In general, the Pantex Plant assembles, maintains, and dismantles nuclear weapons; the Y-12 National Security Complex produces the secondary and the radiation case; the Kansas City National Security Campus produces nonnuclear components; and the Savannah River Site replenishes a component known as a gas transfer system that transfers boost gas to the primary during detonation.

Phase 6.X Process for Managing LEPs and NNSA's Program Management Directive

DOD and NNSA have established a process, known as the Phase 6.X process, to manage life extension programs. According to a Nuclear Weapons Council document, NNSA's Office of Defense Programs will follow this process to manage a W78 replacement program. As shown in figure 1, this process includes key phases or milestones that a nuclear weapon LEP must complete before proceeding to subsequent steps. In January 2017, while the program was still suspended, NNSA issued a supplemental directive that defines additional activities that NNSA offices should conduct in support of the Phase 6.X process. For example, as discussed below, NNSA's supplemental directive established a new requirement during Phase 6.1 (Concept Assessment) that NNSA conduct a technology readiness assessment of technologies proposed for potential use in the warhead. In addition, NNSA's Office of Defense Programs issued a program execution instruction that defines enhanced program management functions for an LEP and other programs. This instruction also describes the level of program management rigor that the LEP must achieve as it advances through the Phase 6.X process.

NNSA's Technology Development and Assessment Process

According to NNSA's Fiscal Year 2018 Stockpile Stewardship Management Plan, NNSA extends the life of existing U.S. nuclear warheads by replacing aged nuclear and nonnuclear components with modern technologies. In replacing these components, NNSA seeks approaches that will increase safety, improve security, and address defects in the warhead. Several technologies are frequently developed concurrently before one approach is selected. According to the plan, this approach allows selection of the option that best meets warhead requirements and reduces the risks and costs associated with an LEP. NNSA conducts technology readiness assessments to provide a snapshot in time of the maturity of technologies and their readiness for insertion into a program's design and schedule, according to NNSA's guidance.
NNSA’s assessments also look at the ability to manufacture the technology. NNSA measures technological maturity using technology readiness levels (TRLs) on a scale from TRL 1 (basic principles developed) through TRL 9 (actual system operation). Similarly, NNSA measures manufacturing readiness using manufacturing readiness levels (MRL) on a scale from MRL 1 (basic manufacturing implications identified) through MRL 9 (capability in place to begin full rate production). According to NNSA’s guidance, NNSA recommends but does not require that an LEP’s critical technologies reach TRL 5 (technology components are integrated with realistic supporting elements) at the beginning of Phase 6.3 (Development Engineering). At the end of Phase 6.3, it recommends that a technology be judged to have achieved MRL 5 (capability to produce prototype components in a production relevant environment). However, according to NNSA officials, lower TRLs and MRLs may be accepted in circumstances where a technology is close to achieving the desired levels or the program team judges that the benefit of the technology is high and worth the increased risk that it may not be sufficiently mature when the program needs it. NNSA Has Taken Steps to Prepare to Restart a Program to Replace the W78 Nuclear Warhead Capability NNSA has taken steps to prepare to restart a program to replace the W78 nuclear warhead capability. According to NNSA officials, these steps are typically needed to conduct any LEP. Therefore, they can be undertaken despite the uncertainty about whether the final program will develop the warhead for the Air Force only or for both the Air Force and the Navy. Specifically, NNSA has (1) taken initial steps to establish the program management functions needed to execute the program and assemble personnel for a program management team; (2) assessed technologies that have been under development while the program was suspended that could potentially be used to support a W78 replacement; and (3) initiated plans for the facilities and capabilities needed to provide the nuclear and nonnuclear components for the warhead. At the time of our review, NNSA and DOD officials stated that, in response to the 2018 NPR, they planned to restart a program that would focus on replacing the capabilities of the W78 for the Air Force; however, the extent to which the program would focus on providing a nuclear explosive package for the Navy was uncertain. DOD officials said that the Navy plans to complete a study examining the feasibility of using the nuclear explosive package developed for the W78 replacement warhead in its SLBM system by the end of fiscal year 2019. According to DOD officials, the Nuclear Weapons Council will make a decision about developing an interoperable warhead for the Air Force and the Navy based on the results of the study but, as of August 2018, had not established time frames for making that decision. According to Air Force and NNSA officials, if the Nuclear Weapons Council decided that the Navy should participate in the program, then NNSA would not need to redo the work planned for fiscal year 2019. Program Management and Personnel NNSA has taken initial steps to establish the program management functions needed to execute the program and assemble personnel for a program management team, as follows: Program management. 
In fiscal year 2018, NNSA started to establish the program management functions needed to execute a W78 replacement program, as required in the Office of Defense Programs' program execution instruction. In preparation for the program restart, NNSA assigned a manager for a W78 replacement program who is taking or plans to take steps to implement these functions. For example, among other steps, the W78 replacement program manager told us that he had started developing the risk management plan to define the process for identifying and mitigating risks that may impact the program. The program manager also said NNSA had started to adapt a standardized work breakdown structure for life extension programs to define and organize the W78 replacement program's work scope for restart. An initial version of this work breakdown structure would be completed before the program restarts in fiscal year 2019, according to the program manager. Further, as NNSA refines the scope of work, the agency will refine and tailor the work breakdown structure. At the time of our review, this work was under development and therefore we were not able to review these plans and tools.

In addition, as of July 2018, NNSA had created a preliminary schedule for a W78 replacement program under the Phase 6.X process (see fig. 2). According to NNSA's preliminary schedule, the program will:

Restart in Phase 6.2 (Feasibility and Design Options) in the third quarter of fiscal year 2019. NNSA previously completed Phase 6.1 and was authorized by the Nuclear Weapons Council to start Phase 6.2 in June 2012. During Phase 6.2, NNSA plans to, among other things, select design options and develop cost estimates of the selected design options.

Conduct Phase 6.2A (Design Definition and Cost Study) for one year beginning in the fourth quarter of fiscal year 2021. During this phase, for example, NNSA plans to develop a preliminary cost estimate for the program, called a weapons design and cost report, and also produce an independent cost estimate.

Start Phase 6.3 (Development Engineering) in the fourth quarter of fiscal year 2022 and transition to Phase 6.4 (Production Engineering) in the mid-2020s. During these phases, NNSA will develop the final design as well as begin producing selected acquisition reports, which detail the total program cost, schedule, and performance, among other things. According to the W78 program manager, the military characteristics will be finalized in Phase 6.4, and before that point DOD will continue to update the requirements.

Achieve production of the first warhead—Phase 6.5—by the second quarter of fiscal year 2030 so that it can be fielded on the Air Force's planned Ground Based Strategic Deterrent that same year.

Start Phase 6.6 (Full Scale Production) by the second quarter of fiscal year 2031.

When the program restarts in fiscal year 2019, NNSA intends to develop or finalize initial versions of other plans and tools, such as a requirements management plan, according to the program manager. (See appendix I for a detailed description of the steps NNSA is taking or plans to take to establish the program management functions needed to execute a W78 replacement program, according to the manager for the W78 replacement program.)
The program manager also told us that as the program progresses through Phases 6.2 (Feasibility and Design Options), 6.2A (Design Definition and Cost Study), and 6.3 (Development Engineering), NNSA will increase the maturity of the program management processes and tools, consistent with the Office of Defense Programs’ program execution instruction. For example, in Phases 6.2 and 6.2A, NNSA intends to establish an earned value management system (EVM)—used to measure the performance of large, complex programs. In Phase 6.3, NNSA will further develop the system to be consistent with DOE and industry standards, as specified in the program execution instruction. NNSA officials said they will need to achieve sufficient program management rigor in Phase 6.3 to effectively report to Congress on the status and performance of the program as NNSA develops cost and schedule baselines. Personnel. At the time of our review, NNSA was reconstituting a program management team. Specifically, as mentioned above, NNSA assigned a new program manager in March 2017. In the spring of 2018, NNSA began assigning additional federal staff and contractor support to help ramp up the program in advance of the fiscal year 2019 restart date. According to the program manager, he expected to complete a plan in the late summer or early fall of 2018 that NNSA could use to hire additional federal staff needed to manage the program in fiscal year 2019. Developing and implementing staffing plans in advance of each phase of an LEP was a key lesson learned from an NNSA review of another LEP—the W76-1. Technology Development and Assessment While the program was suspended, NNSA supported other programs that developed weapons technologies—including materials and manufacturing processes—that could potentially be used by the W78 replacement program and potentially by other future life extension programs. Specifically, according to NNSA officials, NNSA supported the development of technologies through ongoing LEPs (such as the W80-4 LEP) and other technology maturation projects (such as the Joint Technology Demonstrator) that could support future LEPs. For example, the W80-4 program has supported Lawrence Livermore’s development of certain new materials as a risk mitigation strategy in case legacy materials used in the secondary are not available. According to NNSA officials, NNSA will likely continue to develop these new materials for use in future weapons, including the W78 replacement. In addition, contractors at Lawrence Livermore told us that test demonstrations conducted under the Joint Technology Demonstrator have helped to mature potential technologies for a W78 replacement. Examples they cited included additively manufactured mounts and cushions for securing and stabilizing the nuclear explosive package inside the Air Force’s aeroshell. In May 2018, in anticipation of the restart of a W78 replacement program and to retroactively address NNSA’s new supplemental requirement to conduct a technology readiness assessment in Phase 6.1, NNSA’s Office of Systems Engineering and Integration completed a technology readiness assessment that evaluated the maturity of technologies potentially available for the W78 replacement program. According to NNSA officials, the assessment identified and evaluated technologies that NNSA would have available for the next LEP, irrespective of whether the final program will replace the W78 warhead in ICBMs only or will also be used in the Navy’s SLBMs.
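Conceptually, a technology readiness assessment of this kind assigns each candidate technology a TRL and, where estimable, an MRL, and screens the result against the maturity gates described earlier (for example, the recommended TRL 5 at the start of Phase 6.3). The sketch below illustrates that screening logic; it is a hypothetical illustration, not NNSA's assessment methodology, and the record fields and names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Recommended gate described earlier: critical technologies should reach
# TRL 5 by the start of Phase 6.3. Hypothetical illustration only.
PHASE_6_3_TRL_GATE = 5

@dataclass
class Technology:
    name: str
    trl: int                   # highest TRL the technology has completed
    mrl: Optional[int] = None  # highest MRL completed, if estimable

def screen(tech: Technology) -> str:
    """Flag whether a technology meets the recommended TRL gate.

    Per NNSA officials, a lower level may still be accepted when the
    technology is close to the gate or its benefit justifies the risk,
    so failing the screen marks a risk decision, not automatic rejection.
    """
    if tech.trl >= PHASE_6_3_TRL_GATE:
        return "meets recommended TRL gate"
    return f"below gate at TRL {tech.trl}; needs maturation or a risk acceptance decision"

# Hypothetical entries for illustration:
for tech in [Technology("candidate A", trl=5, mrl=4), Technology("candidate B", trl=2)]:
    print(tech.name, "->", screen(tech))
```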
The assessment evaluated 126 technologies based on proposals from the laboratories and production sites. As shown in table 1 below, the proposals related to key functional areas of the warhead, including the nuclear explosive package and the arming, fuzing, and firing mechanism—which provides signaling that initiates the nuclear explosive chain. For the W78 warhead replacement, DOD divided the military characteristics into two categories: threshold or minimum requirements (or “needs”) and objective or optional requirements (or “wants”). NNSA’s assessment grouped each technology into one of three categories, as follows. Must do. A technology is deemed “must do” when it is the only technology available that can meet a minimum requirement (or “need”) for the warhead to function. The technology that previously fulfilled this requirement is generally obsolete or no longer produced, and there are no alternatives. Must do (trade space). “Must do (trade space)” technologies fulfill a minimum requirement (or “need”) for the warhead, but there are two or more technologies that could meet this need. NNSA must evaluate and select which technology it will use to fulfill the need. Trade space. “Trade space” technologies are those that can meet an optional requirement (or “want”) for the warhead. Among the nine “must do” technologies that NNSA evaluated, for example, was a new manufacturing process being developed at Sandia to produce a type of magnesium oxide—needed for use in the thermal batteries that power the warhead’s firing mechanism—that is no longer available from a vendor and for which NNSA’s existing supplies are limited. For this new process, the assessment team estimated that it had completed TRL 1 (basic principles developed) but had not yet reached MRL 1 (basic manufacturing implications identified). The technology readiness assessment noted that for technologies with a TRL of 3 or less, an MRL of 1 or less is expected. In addition, according to the report, Sandia estimated that it may cost about $7.1 million to develop the material and manufacturing process to TRL 5 and MRL 4 during fiscal years 2018 through 2023—when the program is slated to reach Phase 6.3—to achieve a level of readiness where it could potentially be included in the design of the W78 replacement warhead. Among the 59 “must do (trade space)” technologies that NNSA evaluated were, for example, two new gas transfer system technologies developed by Sandia that may offer advantages compared with the existing technology. A gas transfer system is a required capability (or “must do”) but, according to the technology readiness assessment report, NNSA needs to compare the costs, benefits, and risks of these new technologies with the traditional technology (i.e., evaluate the “trade space”) and make a selection among them. The first new technology was a gas transfer system bottle made out of aluminum that could be cheaper, weigh less, and last longer than the gas transfer system used in the W78. According to the technology readiness assessment report, the assessment team estimated the aluminum-based bottle had completed TRL 2 but did not have enough information to estimate an MRL. Sandia estimated that it would cost about $6.5 million to achieve TRL 5 and MRL 4 during fiscal years 2018 through 2023. The second Sandia technology involved an advanced gas transfer system technology. The assessment team estimated that this technology had completed TRL 3 but did not have enough information to estimate an MRL.
Sandia estimated that it would cost about $5.4 million to achieve TRL 5 and MRL 4 during fiscal years 2018 through 2023. According to the technology readiness assessment report, NNSA will need to further evaluate these approaches as well as the traditional technology to make a selection for a W78 replacement program. The 75 “trade space” technologies that the assessment team evaluated included, for example, several proposed by Lawrence Livermore, Los Alamos, and Sandia for providing an advanced safety feature to prevent unauthorized detonation of the warhead. As mentioned above, when NNSA extends the life of existing U.S. nuclear warheads it also seeks approaches that will improve the safety and security of the warhead. According to the report, the laboratories proposed similar concepts that varied in maturity levels and estimated costs for further development. Specifically, the assessment team estimated the Lawrence Livermore and Los Alamos technologies to have completed TRL 4 and Sandia’s proposal to have completed TRL 3. Regarding MRLs, the assessment team estimated Lawrence Livermore’s technology to have completed MRL 1 and Los Alamos’s technology to be at MRL 1, and did not have enough information to estimate the MRL for Sandia’s technology. In addition, according to the report, Lawrence Livermore estimated costs of about $31.2 million to $45.6 million to further mature its technology during fiscal years 2018 through 2023. Los Alamos estimated costs of about $72.1 million to $154.5 million to further mature its technology during the same period. Sandia estimated costs of about $8.2 million to further mature its technology during the same period. Because the feature is not a minimum requirement, NNSA officials told us that they are continuing to evaluate the costs, benefits, and risks of including the feature. According to NNSA’s manager for the W78 replacement program and key staff involved in preparing to restart the program, when the program restarts in fiscal year 2019 they will use the assessment to identify specific technologies or groups of technologies (i.e., trade spaces) to further evaluate for potential use in the warhead. These officials said they will continue evaluating technologies and make selections of preferred options at the same time that the warhead’s program requirements and priorities are refined during Phases 6.2 and 6.2A. According to the program manager, during Phases 6.2 and 6.2A NNSA will produce a technology development plan for the technologies selected for a W78 replacement; the plan will identify the current readiness levels of the technologies, key risks, and estimated costs to bring them to TRL 5 in Phase 6.3. In addition, the technology readiness assessment team made several recommendations to the NNSA Deputy Administrator for Defense Programs regarding the development of technologies that could provide benefits to the nuclear security enterprise overall. For example, the assessment team observed that 21 of the proposed technologies for a W78 replacement involved the use of additive manufacturing. The assessment noted that, if successful, these technologies could reduce component production costs and schedule risks for future LEPs compared to current methods. The team recommended that the Office of Defense Programs conduct an analysis to validate these capabilities and develop a nuclear enterprise-wide effort to address additive manufacturing for a W78 replacement, future LEPs, and other applications.
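The per-technology readiness and cost estimates above lend themselves to a simple roll-up by assessment category. The sketch below totals the estimated fiscal year 2018 through 2023 maturation costs for the specific examples reported above (a handful of the proposals discussed in this section); it is a hypothetical illustration of such a roll-up, not the assessment team's tooling, and cost ranges are carried as low and high bounds.

```python
# Hypothetical roll-up of the estimated FY2018-2023 maturation costs
# reported above (millions of dollars; ranges carried as (low, high)).
# Illustration only; not the assessment team's tooling.
proposals = [
    {"name": "magnesium oxide process (Sandia)",      "category": "must do",               "cost": (7.1, 7.1)},
    {"name": "aluminum gas transfer bottle (Sandia)", "category": "must do (trade space)", "cost": (6.5, 6.5)},
    {"name": "advanced gas transfer system (Sandia)", "category": "must do (trade space)", "cost": (5.4, 5.4)},
    {"name": "safety feature (Lawrence Livermore)",   "category": "trade space",           "cost": (31.2, 45.6)},
    {"name": "safety feature (Los Alamos)",           "category": "trade space",           "cost": (72.1, 154.5)},
    {"name": "safety feature (Sandia)",               "category": "trade space",           "cost": (8.2, 8.2)},
]

def cost_by_category(items):
    """Sum the low and high cost bounds within each assessment category."""
    totals = {}
    for item in items:
        low, high = item["cost"]
        low_sum, high_sum = totals.get(item["category"], (0.0, 0.0))
        totals[item["category"]] = (low_sum + low, high_sum + high)
    return totals

for category, (low, high) in cost_by_category(proposals).items():
    label = f"${low:.1f}M" if low == high else f"${low:.1f}M to ${high:.1f}M"
    print(f"{category}: {label}")
```

Even this small sample shows why the optional safety feature dominates the cost trade space, consistent with NNSA officials' statement that they are still weighing its costs, benefits, and risks.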
According to the NNSA official who led the assessment, at the time of our review, the assessment team was preparing to present its enterprise-wide recommendations to the Office of Defense Programs’ senior leadership; therefore, specific follow-on actions had not yet been decided. Coordination with Facilities and Capabilities The manager of the W78 replacement program said that he has begun to identify the facilities and capabilities at the laboratories and production sites that will be needed to provide the nuclear and nonnuclear components for a W78 replacement, and plans to draft formal agreements to help ensure coordination with them. According to the program manager, collecting the information that identifies facilities and capabilities—including a rough idea of key milestone dates for when the program will need to use them—is the first step in producing a major impact report, which is required upon completion of Phase 6.2 and accompanies the final Phase 6.2 study report delivered to the Nuclear Weapons Council. Among other things, a major impact report identifies aspects of the program—including facilities and capabilities to support it—that could affect the program’s schedule and technical risk, according to the Phase 6.X guidelines. According to an NNSA official and contractor representatives, many of the existing nuclear and nonnuclear components of the W78 are outdated or unusable and a W78 replacement will need all newly manufactured components. As a result, NNSA will need to exercise numerous manufacturing capabilities in support of this effort, and the facilities and capabilities must be ready to support the work. However, many of the facilities that may be needed to provide components for a W78 replacement program are outdated, and NNSA is undertaking modernization activities to either build new facilities or repair existing facilities and capabilities; progress on these activities represents a critical external risk to the program. According to NNSA’s Fiscal Year 2018 Stockpile Stewardship and Management Plan, these planned modernization activities will require sustained and predictable funding over many years to ensure they are available to support the weapons programs. Some examples of NNSA activities to build or repair facilities and capabilities that will provide nuclear or nonnuclear components for a W78 replacement warhead—and which may have schedule, cost, or capacity issues that could impact the program—include: Plutonium pit production facilities. NNSA does not currently have the capability to manufacture sufficient quantities of plutonium pits for a W78 replacement program. NNSA’s Fiscal Year 2018 Stockpile Stewardship and Management Plan stated that the agency will increase its capability to produce new pits over time, from 10 pits per year in fiscal year 2024 to 30 pits per year in fiscal year 2026, and as many as 50 to 80 pits per year by 2030. NNSA is refurbishing its pit production capabilities at Los Alamos to produce at least 30 pits per year. In addition, in May 2018, NNSA announced its intention to repurpose the Mixed Oxide Fuel Fabrication Facility at the Savannah River Site in South Carolina to produce at least an additional 50 pits per year by 2030. NNSA officials told us that they will need both the Los Alamos and Savannah River pit production capabilities to meet anticipated pit requirements for the W78 replacement program and for future warhead programs. Uranium processing facilities.
NNSA’s construction of the Uranium Processing Facility at the Y-12 National Security Complex will help ensure NNSA’s continued ability to produce uranium components for the W78 replacement program. NNSA plans to complete the facility for no more than $6.5 billion by the end of 2025—approximately 4 years before the scheduled delivery of the first production unit of a W78 replacement program warhead. This effort is part of a larger NNSA plan to relocate and modernize other enriched uranium capabilities performed in a legacy building at the Y-12 National Security Complex to other existing or newly constructed buildings. Lithium production facility. NNSA will require lithium for a W78 replacement warhead. The United States no longer maintains full lithium production capabilities and relies on recycling as the only source of lithium for nuclear weapon systems. According to the Fiscal Year 2018 Stockpile Stewardship and Management Plan, NNSA has analyzed options to construct a new lithium production facility, and a conceptual design effort is next, with an estimated completion date of fiscal year 2027 for the new facility. Until the facility is available, NNSA has developed a bridging strategy to fill the interim supply gaps. Radiation-hardened microelectronics facility. Nuclear warheads, such as a W78 replacement warhead, include electronics that must function reliably in a range of operational environments. NNSA has a facility at Sandia that produces custom, strategic radiation-hardened microelectronics for nuclear weapons. In August 2018, NNSA officials told us that this facility, known as Microsystems and Engineering Sciences Applications, can remain viable until 2040—but would need additional investment. The W78 replacement program manager told us that the need for newly manufactured components coupled with the scale of NNSA’s modernization activities means that a comprehensive coordination effort will be necessary to ensure that the facilities and capabilities are ready to provide components for the warhead by the end of the 2020s. Because these activities are separately managed and supported outside the W78 replacement program, NNSA considers progress on them to represent a critical external risk to the program. NNSA is taking or plans to take some action to mitigate this external risk at the program and agency level. One step that the program plans to take to address this risk is to draft formal agreements—called interface requirements agreements—with other NNSA program offices that oversee the deliverables and schedules for the design, production, and test facilities that are needed for the program. These agreements describe the work to be provided by these external programs, including milestone dates for completing the work; funding; and any risks to cost, schedule, or performance. The W78 program manager stated that the agreements are generally drafted toward the end of Phase 6.2 through Phase 6.2A and largely finalized in Phase 6.3—though small adjustments may be made into Phase 6.4 (Production Engineering). At the agency level, in response to a direction in the 2018 NPR, NNSA officials told us that the agency is also developing an agency-wide integrated master schedule that is intended to align NNSA’s enterprise-wide modernization schedule with milestone delivery dates for nuclear weapons components.
The W78 program manager and other NNSA officials told us that the information they provide on the facilities and capabilities needed, as well as milestone dates, will be integrated into this schedule and used to help ensure that the facilities and capabilities are ready to support the program. Agency Comments We provided a draft of this report to NNSA and DOD for comment. NNSA and DOD provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense and Energy, the Administrator of NNSA, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or bawdena@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to the report are listed in appendix II. Appendix I: NNSA’s Program Management Functions to Execute a W78 Replacement Program The table below identifies the steps NNSA is taking or plans to take to establish the program management functions needed to execute a W78 replacement program. NNSA was directed by the Nuclear Weapons Council to suspend the program in fiscal year 2014, and the 2018 Nuclear Posture Review directed NNSA to restart the program in fiscal year 2019. The NNSA Office of Defense Programs’ program execution instruction defines enhanced program management functions for a warhead life extension program (LEP) such as the W78 replacement program and other programs. The instruction also describes the level of program management rigor that the LEP must achieve as it advances through the Department of Defense and NNSA process for managing life extension programs, called the Phase 6.X process. This process includes key phases or milestones that a nuclear weapon life extension program must undertake before proceeding to subsequent steps. NNSA completed Phase 6.1 (Concept Assessment) and started Phase 6.2 (Feasibility and Design Options) activities before the program was suspended in fiscal year 2014. NNSA, therefore, plans to restart the program in Phase 6.2. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Allison B. Bawden, (202) 512-3841 or bawdena@gao.gov. Staff Acknowledgments In addition to the individual named above, William Hoehn (Assistant Director), Brian M. Friedman (Analyst in Charge), and Julia T. Coulter made significant contributions to this report. Also contributing to this report were Antoinette Capaccio, Pamela Davidson, Penney Harwell Caramia, Greg Marchand, Diana Moldafsky, Cynthia Norris, Katrina Pekar-Carpenter, and Sara Sullivan.
Why GAO Did This Study The Department of Defense and NNSA have sought for nearly a decade to replace the capabilities of the aging W78 nuclear warhead used by the U.S. Air Force. NNSA undertakes LEPs to refurbish or replace the capabilities of nuclear weapons components. In fiscal year 2014, NNSA was directed to suspend a program that was evaluating a capability that could replace the W78 and also be used by the U.S. Navy. NNSA's most recent estimate—reported in October 2018—was that the combined program would cost about $10 billion to $15 billion. NNSA has been directed by the 2018 Nuclear Posture Review to restart a program to replace the W78 for the Air Force in fiscal year 2019. The 2018 Nuclear Posture Review also directed NNSA and the Navy to further evaluate whether the Navy could also use the warhead. Senate report 115-125 included a provision for GAO to review NNSA's progress on the program to replace the W78. GAO's report describes NNSA's steps in key early planning areas—including program management, technology assessment, and coordination with facilities and capabilities—to prepare to restart a program to replace the W78. GAO reviewed documentation on areas such as program management, technologies, and facilities needed for the program, and interviewed NNSA and DOD officials. What GAO Found The Department of Energy's National Nuclear Security Administration (NNSA) has taken steps to prepare to restart a life extension program (LEP) to replace the capabilities of the Air Force's W78 nuclear warhead—a program which was previously suspended. According to NNSA officials, these steps are typically needed to conduct any LEP. Therefore, they can be undertaken despite the current uncertainty about whether the final program will develop the warhead for the Air Force only or for both the Air Force and the Navy. Specifically, NNSA has taken the steps described below: Program management. NNSA has begun to establish the program management functions needed to execute a W78 replacement program, as required by NNSA's program execution instruction. For example, NNSA has started to develop a risk management plan to define the process for identifying and mitigating risks. In addition, NNSA has created a preliminary schedule to restart the program in fiscal year 2019 in the feasibility and design options phase with the goal of producing the first unit in fiscal year 2030. (See figure) Technology assessment. In May 2018, NNSA completed an assessment of 126 technologies for potential use in a W78 replacement. These included nine technologies that are needed to replace obsolete or no longer available technologies or materials. These are considered “must-do” because they are the only technologies or materials available to meet minimum warhead requirements established by the Department of Defense and NNSA. NNSA officials said that in fiscal year 2019 they will use the assessment to further evaluate technologies for potential use in the warhead. Coordination with facilities and capabilities. NNSA's program manager is identifying the facilities and capabilities needed to provide components for the warhead. This information will be used to produce a report that identifies aspects of the program—including facilities and capabilities to support it—that could affect the program's schedule and technical risk. 
However, several of the needed facilities must be built or repaired, and these activities are separately managed and supported outside the W78 replacement program—representing a critical external risk to the program. As mitigation, the program intends to coordinate with the offices that oversee these facilities to draft agreements that describe the work to be performed and timeframes, among other things. What GAO Recommends GAO is not making recommendations. NNSA and DOD provided technical comments, which GAO incorporated as appropriate.
Background STEM Education The term “STEM education” includes educational activities across all grade levels—from preschool to graduate school. STEM education programs have a variety of primary objectives, which include preparing students for STEM coursework, providing postsecondary students with grants or fellowships in STEM fields, and improving STEM teacher training (see appendix I for our definition of STEM education programs). Federal STEM education programs have been created in two ways—either by law or by federal agencies under their statutory authorities. We previously reported that most federal STEM education programs overlapped to some degree with at least one other program, in that they offered similar services to similar groups in similar STEM fields to achieve similar objectives (see sidebar for definition of overlap). Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. Overlap occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve their goals, or aim to serve similar beneficiaries. Fragmentation refers to those circumstances in which more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need and opportunities exist to improve service delivery. Although those programs may not be duplicative, we reported that they were similar enough that they needed to be well coordinated and guided by a robust strategic plan. Through their strategic planning and other coordination efforts, the Office of Science and Technology Policy and the National Science and Technology Council implemented our recommendations to work with agencies to better align their activities with a government-wide strategy; develop a plan for sustained coordination; identify programs for potential consolidation or elimination; and assist agencies in determining how to better evaluate their programs. America COMPETES Reauthorization Act of 2010 Enacted in 2007, the America COMPETES Act authorized several programs to promote STEM education. The America COMPETES Reauthorization Act of 2010 (COMPETES Act) reauthorized the America COMPETES Act and addressed coordination and oversight issues, including those associated with the coordination and potential duplication of federal STEM education efforts. The COMPETES Act required the Director of the Office of Science and Technology Policy to establish, under the National Science and Technology Council, the Committee on STEM Education to serve as the interagency coordination body for STEM education in the federal government (see fig. 1). In May 2013, the Committee on STEM Education issued a 5-year Strategic Plan for federal STEM education efforts, as required by the COMPETES Act. To improve collaboration across the portfolio, the Strategic Plan identified five priority investment areas and two coordination objectives, specifying national goals for each (see fig. 2). The COMPETES Act also requires that the Committee create, and periodically update, an inventory of federal STEM education programs that includes documentation of program assessments and the participation rates of women, underrepresented minorities, and persons in rural areas. In addition, the COMPETES Act requires that the Office of Science and Technology Policy publish annual reports on coordinating federal STEM education efforts.
The law mandates that these reports include specific information, such as: a description of each federal agency’s STEM education programs funded in the previous and current fiscal years, as well as those proposed under the President’s budget request; the levels of funding for each participating federal agency’s programs described above; an evaluation of the levels of duplication and fragmentation of the programs described above; and a description of the progress made implementing the Strategic Plan, including a description of the outcome of any program assessments completed in the previous year, and any changes made to the Strategic Plan since the previous annual report. In January 2017, the President signed into law the American Innovation and Competitiveness Act, which, among other things, amended certain provisions of the COMPETES Act. The Act added some requirements for both the Office of Science and Technology Policy and the Committee on STEM Education. For example, it created new mandates for the Committee to: review the measures federal agencies use to evaluate their STEM education programs, and make recommendations for reforming, terminating, or consolidating the federal STEM portfolio. Any such recommendations for an upcoming fiscal year are to be included in the Office of Science and Technology Policy’s annual report. Cross-agency Priority Goal on STEM Education In 2014, the Office of Management and Budget, in consultation with the federal agencies that administer STEM education programs, established STEM education as a cross-agency priority goal. The Office of Science and Technology Policy and the National Science Foundation led the oversight and management of this goal, and as part of this work, goal leaders from these agencies identified milestones that aligned with the Strategic Plan’s priority investment areas and coordination objectives (see fig. 2). For example, goal leaders reported progress toward meeting key milestones associated with improving STEM instruction. In 2017, to ensure alignment with the current administration’s priorities, the Office of Management and Budget removed the priority status of all cross-agency priority goals, including STEM education; this ended the required public issuance of quarterly priority goal reports. The STEM Education goal’s final quarterly progress report was issued at the end of fiscal year 2016. Other Data and Transparency Requirements Other government-wide efforts are underway to improve the transparency around federal programs in general. These efforts are not directed at the STEM education programs specifically, but may assist in managing the STEM education portfolio. The GPRA Modernization Act of 2010 requires the Office of Management and Budget to present a coherent inventory of all federal programs by making information about each federal program available on a website. However, we previously reported that, because agencies used different approaches to define their programs, comparability of programs within and across agencies on this inventory was limited. We recently identified a potential framework for the development of a useful federal program inventory. The Office of Management and Budget decided to postpone further development of the inventory in order to coordinate with the implementation of related requirements of the Digital Accountability and Transparency Act of 2014. 
Once fully implemented, this act is expected to expand the types and transparency of public information on federal spending, making it easier to track spending to specific federal programs. The act requires government-wide reporting on a greater variety of data related to federal spending, such as budget and financial information, as well as tracking of these data at multiple points in the federal spending lifecycle. From 2010 to 2016, the Number of STEM Education Programs Decreased While Spending Remained Stable, and Most Programs Continued to Overlap Agencies Reported Fewer STEM Education Programs and Relatively Stable Levels of Spending in Fiscal Year 2016 Compared to Fiscal Year 2010 Program officials from the 13 federal agencies that administer STEM education programs reported a total of 163 STEM education programs in fiscal year 2016, compared to 209 programs in fiscal year 2010. Three agencies—the Department of Energy, the Department of Health and Human Services, and the National Science Foundation—administered more than half of all STEM education programs in fiscal years 2010 and 2016. Despite collectively reporting fewer STEM education programs, program officials responding to our questionnaire reported spending about the same amount in fiscal year 2016 as they did in fiscal year 2010. In fiscal year 2016, program officials reported spending about $2.9 billion on the 163 programs. Spending by individual programs ranged from about $14,000 annually to hundreds of millions of dollars. The National Science Foundation and the Department of Health and Human Services programs account for about 60 percent of this spending. Figure 3 provides an agency-level summary of the number of programs and their reported spending. Appendix II contains a complete list of the 163 STEM education programs and their reported spending for fiscal year 2016. While agencies reported many of the same STEM education programs in fiscal years 2010 and 2016, the federal portfolio evolved in various ways. About half of the 209 programs previously reported for fiscal year 2010 were reported again for fiscal year 2016—accounting for about two-thirds (109 programs) of the fiscal year 2016 portfolio. The remaining third (54 programs) were newly reported for fiscal year 2016. (See appendix I for more information on changes to the STEM portfolio between fiscal years 2010 and 2016.) The portfolio underwent various changes from fiscal years 2010 to 2016, including program consolidations, creations, and terminations. According to leadership of the Committee on STEM Education, these changes were due to many factors. One key factor is the STEM Education Strategic Plan, which, among other things, calls for greater efficiency and cohesion across federal STEM education programs. Other factors include agencies’ individual priorities, including their mission and budget, and congressional interest in specific programs. For example, agencies reported: Consolidations. Starting in 2014, for greater efficiency and cohesion, the National Science Foundation consolidated a number of related undergraduate STEM education programs, including STEM Talent Expansion Programs, Transforming Undergraduate Education in STEM, and Nanotechnology Undergraduate Education in Engineering. Creations. Department of Health and Human Services officials reported administering 28 new STEM education programs.
These programs are housed in the Department’s National Institutes of Health, which generally bases its funding decisions on scientific opportunities and its own peer review process. One new program is the Building Infrastructure Leading to Diversity Initiative. This program supports undergraduate institutions in implementing and studying approaches to engaging and retaining students from diverse backgrounds in biomedical research. Terminations. Department of Education officials reported that four STEM education programs funded in fiscal year 2010 were terminated before fiscal year 2016. One such program was the Women’s Educational Equity program. Congress last funded this program in fiscal year 2010. Significant Overlap Continued to Exist Among STEM Education Programs, Although Programs May Differ in Meaningful Ways Based on our analysis of questionnaire responses, nearly all STEM education programs in fiscal year 2016 overlapped with at least one other STEM education program, in that they offered at least one similar service to at least one similar group in at least one similar STEM field to achieve at least one similar objective (see text box). Similar levels of overlap occurred among programs funded in fiscal year 2010. Similarities Among Overlapping Federal Science, Technology, Engineering, and Mathematics (STEM) Education Programs Similar Services Many of the 163 STEM education programs provided similar services. To support students, most programs (143) provided research opportunities, internships, mentorships, or career guidance. In addition, 110 programs supported short-term experiential learning activities, and 99 programs supported long-term experiential learning activities. Short-term experiential learning activities include field trips, guest speakers, workshops, and summer camps. Long-term experiential learning activities last a semester or longer. To support teachers, 77 programs provided curriculum development and 45 programs supported teacher in-service training, professional development, or retention activities. Similar Groups Intended to be Served Many programs also provided services to similar groups, such as K-12 students, postsecondary students, K-12 teachers, and college faculty. A majority of STEM programs reported primarily benefiting postsecondary students; specifically, 103 programs intended to serve 4-year undergraduate students, 76 intended to serve Master’s degree students, and 83 intended to serve doctoral students. Most programs also intended to serve multiple groups; 137 of the 163 programs served two or more groups. Similar STEM Fields More than 75 percent of programs focused on specific STEM academic fields of study. The most common fields were biology (85 programs), technology (75 programs), engineering (72 programs), and computer science (71 programs). Of those programs that focused on specific STEM fields of study, about 55 percent (68 programs) focused on 5 or more different fields. Similar Objectives Many STEM education programs had similar objectives. An objective of a majority of programs (115) was to provide training opportunities for undergraduate or graduate students in STEM fields. Most programs (139) also reported having multiple primary STEM objectives. Despite these similarities, overlapping programs may differ in meaningful ways, such as their specific field of focus and those programs’ stated goals.
For example, a primary objective of the Department of Health and Human Services’ Cancer Education Grants program and the National Aeronautics and Space Administration’s National Space Grant College and Fellowship Project is to provide training opportunities for undergraduate or graduate students in biological sciences, among other fields. However, these programs have different program goals: The Cancer Education Grants program aims to develop innovative cancer education programs and cancer research dissemination projects. The National Space Grant College and Fellowship Project encourages interdisciplinary education, research, and public service programs related to aerospace. Although many STEM education programs are designed to provide similar services to similar groups, some programs serve distinct populations within those broader groups, such as minority, disadvantaged, or underrepresented groups. Within the broad group of middle and high school students, an individual program may focus on serving only minority, disadvantaged, or underrepresented students. For example, the Department of Transportation’s Garrett A. Morgan Technology and Transportation Education program focuses services on students who are girls and minorities, whereas the Department of Education’s Upward Bound Math-Science program aims to serve students who are economically disadvantaged. The Committee on STEM Education and the Office of Science and Technology Policy reported managing overlap in the portfolio by coordinating with other agencies through a: Cross-agency priority goal. Project management and oversight of this goal provided an additional mechanism to facilitate coordination. Goal leaders published quarterly progress reports describing their efforts to achieve each of the five priority investment areas and two coordination objectives. Federal coordination subcommittee. Creating a federal coordination subcommittee and various interagency working groups helped to advance goals identified in the Strategic Plan. Committee leadership structured working groups to connect agencies with similar programs (see fig. 4). Efforts to Assess Programs’ Performance and Participation Rates of Underrepresented Minorities Are Limited Performance Assessments of STEM Education Programs Are Not Reviewed or Documented The Committee on STEM Education and Office of Science and Technology Policy have not fully met their responsibilities to assess the STEM education portfolio. Specifically, the Committee on STEM Education has not reviewed performance assessments of STEM education programs to ensure effectiveness—a primary function of its authorizing charter. Committee leadership acknowledged that they have not conducted such reviews. Overall, the Committee made limited progress advancing its strategic goal of increasing the use of evidence-based approaches because, according to Committee leadership, they focused on achieving other strategic goals. By reviewing programs’ performance assessments, the Committee could leverage existing performance information to identify and share promising practices that agencies could use in designing or revising their programs. Moreover, in doing so, the Committee could further its strategic goal of increasing the use of evidence-based approaches across the portfolio of STEM education programs. We previously have reported that managers can use performance information to identify and increase the use of program approaches that are working well.
Additionally, such a review could help the Committee meet its new responsibilities under the 2017 American Innovation and Competitiveness Act, including reviewing the measures federal agencies use to evaluate their STEM education programs and making recommendations for terminating, consolidating, and reforming programs in the federal STEM education portfolio. Further, the Committee on STEM Education has not met the COMPETES Act requirement to document the performance assessments of STEM education programs in its federal STEM inventory (see sidebar). In 2011, the Committee on STEM Education reported summary information on programs’ performance assessments, including the total number of programs funded in fiscal year 2010 that had been evaluated since 2005. However, the information provided was not program-specific; therefore, it is unclear which programs were assessed for effectiveness. Further, that information is outdated, as the STEM education portfolio has changed considerably since 2010, as discussed in this report. Committee leadership said they do not have plans to update the summary information provided in 2011, noting that agency budget justifications include program performance assessments. However, we reviewed the budget justifications for 10 STEM education programs that program officials reported had been recently evaluated and found that 8 had no information on performance assessments. By periodically documenting in its federal STEM education inventory whether programs have been assessed for effectiveness, the Committee can enhance communication of performance information among agency officials and stakeholders. This could facilitate the use of performance information by agency managers and lead to greater public awareness regarding the effectiveness of many of the nation’s STEM education programs. The Office of Science and Technology Policy has not done everything required of it either. It has not described the outcomes of programs’ performance assessments completed in the previous year in its annual reports, as required by the COMPETES Act (see sidebar). Office of Science and Technology Policy officials said that they have not reported on recent program assessments, and added that many STEM education programs were not mature enough to provide sufficient data for a definitive assessment. However, many of the 2016 programs that we identified were at least 7 years old and had been assessed. Specifically, 67 percent (109) of the programs reported by program officials for fiscal year 2016 had also been reported for fiscal year 2010. Of the programs in existence since 2010, 49 percent (53) have been assessed, according to program officials’ questionnaire responses. By reporting information on the outcomes of performance assessments completed in the previous year, the Office of Science and Technology Policy could enhance awareness of promising practices in federal STEM education programs. Program Participation Rates of Underrepresented Groups Are Not Reported The Committee on STEM Education has not reported STEM education programs’ participation rates of groups historically underrepresented in STEM fields, although broadening participation of those groups is one of the Committee’s strategic goals. Moreover, the COMPETES Act requires that the Committee report the participation rates of women, underrepresented minorities, and persons in rural areas in its inventory of federal programs (see sidebar).
Committee leadership acknowledged they have not reported these data, and added that such participation data are not fully available across all STEM education programs. However, we found that such participation data were generally available. In response to our questionnaire, nearly three-quarters of STEM education programs (120 of 163) reported tracking participants in fiscal year 2016. Of those programs, many also tracked specific participant characteristics. For example, 61 percent (73) of programs that tracked participants also captured whether their participants were women and 54 percent (65) documented those who were African American. Programs primarily intended to serve minority, disadvantaged, or underrepresented groups tracked participant characteristics at higher rates than programs that intended to serve broader groups of beneficiaries (see fig. 5). In addition, 7 of the 13 administering agencies, such as the Department of Health and Human Services, reported that they tracked participation in fiscal year 2016 for at least two-thirds of their STEM education programs. Officials from the Department of Health and Human Services said that the department maintains data for many of its STEM education programs in a database that captures individual participants’ demographic data, including race and gender, and aggregates such information for internal reporting. Officials also said they use this information to evaluate whether individual programs are meeting their goals of serving particular groups. Although we found that many agencies reported collecting data on participants in their STEM education programs, the Committee on STEM Education has not reported such information in its inventory, as required. Reporting information on the participation rates of women, underrepresented minorities, and persons in rural areas could help the Committee assess whether STEM education programs have broadened participation to groups historically underrepresented in STEM fields—a key goal of the Strategic Plan. Committee leadership said they measured progress toward this goal with general performance indicators, such as the number of women who earned STEM degrees, regardless of participation in federal programs, because such data were readily available. However, those performance indicators are influenced by various factors, including some external to federal STEM education efforts. For example, the number of women earning STEM degrees could be affected by broader economic factors or college enrollment trends, rather than the activities of the agencies. Conclusions The federal government continues to invest billions of dollars annually in STEM education programs to enhance the nation’s economic and educational competitiveness. Since 2010, the federal portfolio of STEM education programs has evolved considerably. The Committee on STEM Education reported that, through its leadership and strategic planning efforts, it fostered coordination among agencies administering STEM education programs, which helped them implement the STEM Education Strategic Plan. Such efforts to encourage interagency coordination can help ensure efficient use of resources, particularly given the overlap of programs in the STEM education portfolio.
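The overlap criterion GAO applied, at least one similar service offered to at least one similar group in at least one similar STEM field toward at least one similar objective, is in effect a pairwise set-intersection test over programs' questionnaire responses. The sketch below illustrates that test with invented program records; it is a hypothetical illustration, not GAO's analysis code.

```python
# Hypothetical illustration of the pairwise overlap test described in this
# report: two programs overlap if they intersect on every one of the four
# dimensions. Program records are invented; this is not GAO's analysis code.
programs = {
    "Program A": {
        "services": {"research opportunities", "mentorships"},
        "groups": {"4-year undergraduate students", "doctoral students"},
        "fields": {"biology", "chemistry"},
        "objectives": {"provide training opportunities"},
    },
    "Program B": {
        "services": {"mentorships", "curriculum development"},
        "groups": {"doctoral students"},
        "fields": {"biology"},
        "objectives": {"provide training opportunities", "improve teacher education"},
    },
}

DIMENSIONS = ("services", "groups", "fields", "objectives")

def overlaps(program_1: dict, program_2: dict) -> bool:
    """True if the two programs share at least one item on every dimension."""
    return all(program_1[d] & program_2[d] for d in DIMENSIONS)

print(overlaps(programs["Program A"], programs["Program B"]))  # True
```

As the report notes, programs that satisfy this test may still differ in meaningful ways, so an overlap flag marks candidates for coordination rather than duplication.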
The Committee on STEM Education and the Office of Science and Technology Policy have not fulfilled their responsibilities to review, document, and report performance information on STEM education programs. Reviewing performance assessments of the many programs in the federal STEM education portfolio is a vital management responsibility that could, for example, improve the Committee’s ability to disseminate information on promising practices or make recommendations that agencies can use to make well-informed decisions about designing or revising their programs. Further, documenting programs’ performance assessments in the Committee’s federal STEM education inventory and reporting the outcomes of recent assessments in the Office of Science and Technology Policy’s annual reports could enhance the availability of performance information. In addition, the Committee falls short in reporting required information on programs’ participation rates of women, underrepresented minorities, and persons from rural areas. Without such information, it is unclear whether the federal investment in STEM education is ultimately supporting its strategic goal of broadening participation to groups historically underrepresented in STEM fields. Moreover, as the Committee on STEM Education begins to implement its new responsibilities prescribed by the American Innovation and Competitiveness Act, its efforts to review programs’ performance assessments could improve its capacity to make well-informed recommendations to further enhance the portfolio of STEM education programs. Recommendations for Executive Action We are making a total of four recommendations, including three to the Committee on STEM Education and one to the Office of Science and Technology Policy. Specifically: The leadership of the Committee on STEM Education should review performance assessments of federal STEM education programs and then take appropriate steps to enhance effectiveness of the portfolio, such as by sharing promising practices that agencies could use in designing or revising their programs. (Recommendation 1) The leadership of the Committee on STEM Education should improve public awareness of information on programs’ performance assessments by documenting program-level information on performance assessments in its federal STEM education inventory. (Recommendation 2) The leadership of the Committee on STEM Education should report required information on the participation rates of women, underrepresented minorities, and persons from rural areas in federal STEM education programs that collect this information. (Recommendation 3) The Director of the Office of Science and Technology Policy should report the outcomes of programs’ performance assessments completed in the previous year in its annual report. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to the National Science and Technology Council’s Committee on STEM Education and the Office of Science and Technology Policy for review and comment. These entities jointly provided written comments, which are reproduced in appendix IV, and technical comments, which we incorporated, as appropriate. They agreed with all four of our recommendations and noted initial strategies for how they would implement three of them. 
Regarding implementation of the fourth recommendation to report on participation rates of underrepresented groups in federal STEM education programs, they noted plans to examine confounding factors inhibiting the reporting of the information required under the COMPETES Act. Gaining insight into the challenges agencies face collecting this information is an important first step. However, to comply with the requirement of the COMPETES Act and help ensure programs reach populations historically underrepresented in STEM fields, we continue to believe that the Committee should report the participation rates of women, underrepresented minorities, and persons from rural areas in federal STEM education programs that collect this information. To do so, the Committee may also need to develop strategies to help agencies overcome some of these confounding factors. We are sending copies of this report to leadership of the Committee on STEM Education, the Assistant Director of STEM Education at the Office of Science and Technology Policy, and the appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: GAO’s Methodology for Program Identification and Data Collection How GAO Identified Federal Education Programs on Science, Technology, Engineering, and Mathematics To identify the programs that should receive our questionnaire, we sought input from the 13 agencies that administer federal science, technology, engineering, and mathematics (STEM) education programs. We provided each of the agencies with our definition of a STEM education program and asked agency officials to identify programs funded in fiscal year 2016 that met this definition (see text box). We also asked agency officials to provide information on the status of the 209 STEM education programs we included in our previous report on STEM education programs. Specifically, we asked whether the programs were funded in fiscal year 2016 and, if not, whether they were consolidated or terminated. Definition of Science, Technology, Engineering, and Mathematics (STEM) Education Program GAO defined “STEM education program” as a program funded by allocation or congressional appropriation. An organized set of activities was considered a single program even when its funds were also allocated to other programs.
A STEM education program that met the definition had one or more of the following as a primary objective: attract or prepare students to pursue classes or coursework in STEM areas through formal or informal education activities (informal education programs provide support for activities that offer students learning opportunities outside of formal schooling through contests, science fairs, summer programs, and other means; outreach programs aimed at the general public were not included); attract students to pursue degrees (2-year, 4-year, graduate, or doctoral degrees) in STEM fields through formal or informal education activities; provide training opportunities for undergraduate or graduate students in STEM fields (this can include grants, fellowships, internships, and traineeships that are intended for students; general research grants that involve hiring a student for lab work were not considered a STEM education program); attract graduates to pursue careers in STEM fields; improve teacher (preservice or in-service) education in STEM fields; improve or expand the capacity of K-12 schools or postsecondary institutions to promote or foster education in STEM fields; and conduct research to enhance the quality of STEM education programs provided to students. Programs designed to retain current employees in STEM fields were not included. Programs that fund retraining of workers to pursue a degree in a STEM field were included because these programs help increase the number of students and professionals in STEM fields by helping retrain non-STEM workers to work in STEM fields. Also included were health care programs that train students for careers that are primarily in scientific research, but not those that train students for careers that are primarily in patient care (e.g., those that trained nurses, doctors, dentists, psychologists, or veterinarians). Lastly, GAO considered STEM fields to include any of the following broad disciplines: agricultural sciences; astronomy; biological sciences; chemistry; computer science; earth, atmospheric, and ocean sciences; engineering; material science; mathematical sciences; physics; social sciences (e.g., psychology, sociology, anthropology, cognitive science, economics, behavioral sciences); and technology. GAO used this same definition of STEM education program in its 2012 report. However, in the current report, GAO explicitly specified astronomy and material science as STEM fields and also revised “mathematics” to be “mathematical sciences” based on feedback from agency officials. We reviewed the information agencies submitted and took steps to corroborate it, such as by reviewing program descriptions and budget documents. Based on our analysis of this information, we sent a web-based questionnaire to 198 programs (see table 1). To develop the questionnaire and collect the data, we used recognized survey design practices to enhance data quality. For instance, we ordered the questionnaire appropriately and ensured the questions were clearly stated and easy to understand. The questionnaire solicited information on federal STEM education programs, including programs’ objectives, intended groups served, services provided, STEM fields, and obligations. We did not conduct pretests because most of the questions were included in our prior questionnaire and had already been pretested.
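In effect, the definition above is a screening rule applied to each candidate program: include it if at least one primary objective qualifies and no exclusion applies. The sketch below illustrates that rule; it is a hypothetical illustration, not the instrument GAO used, and the objective labels and parameter names are shorthand assumptions.

```python
# Hypothetical sketch of the screening rule implied by GAO's definition.
# Objective labels are shorthand for the qualifying objectives listed above;
# this is an illustration, not the instrument GAO used.
QUALIFYING_OBJECTIVES = {
    "attract or prepare students for STEM coursework",
    "attract students to pursue STEM degrees",
    "provide STEM training for undergraduate or graduate students",
    "attract graduates to pursue STEM careers",
    "improve STEM teacher education",
    "improve institutional capacity for STEM education",
    "conduct research on STEM education quality",
}

def is_stem_education_program(primary_objectives: set,
                              retains_current_stem_employees: bool = False,
                              trains_primarily_for_patient_care: bool = False) -> bool:
    """Include a program only if an objective qualifies and no exclusion applies."""
    if retains_current_stem_employees or trains_primarily_for_patient_care:
        return False
    return bool(primary_objectives & QUALIFYING_OBJECTIVES)

# Example: a fellowship program whose primary objective is graduate training.
print(is_stem_education_program(
    {"provide STEM training for undergraduate or graduate students"}))  # True
```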
On May 8, 2017, we sent an email announcing the online questionnaire to the officials responsible for programs identified as STEM education and also notifying them that the questionnaire would be activated that week. On May 10, 2017, we sent a second message to officials informing them that the questionnaire was activated and providing them with unique usernames and passwords. As necessary, we followed up with program officials by telephone and email. We collected responses through August 31, 2017. Based on our analysis of the questionnaire responses and other information we received from program officials, we excluded 35 programs from our inventory. (See table 2 for a summary of those 35 programs and the reasons we excluded them.) Nine of the 35 excluded programs had been reported by agency officials as STEM education programs in our previous report. In most cases (8 of 9), we excluded these programs in this report because the programs did not include STEM education as a primary objective in fiscal year 2016. In the remaining case, we excluded the program because it was a component of another fiscal year 2016 STEM education program, and thus would be duplicative. We confirmed this information and the programs' exclusion with the administering agencies. After we completed our analysis, we identified 163 programs as STEM education for fiscal year 2016. Program officials responsible for all 163 of these programs completed our questionnaire. We used standard descriptive statistics to analyze responses to these completed questionnaires. We also used recognized survey design practices to process and analyze data collected via the questionnaire. For instance, we performed automated checks to review the data and identify inappropriate answers. We also reviewed the data for missing or ambiguous responses and followed up with program officials when necessary to clarify their responses. We did not verify all responses since we had applied recognized survey design practices and follow-up procedures, and had determined that the data used in this report were of sufficient quality for the purposes of our reporting objectives. Appendix II: Federal Science, Technology, Engineering, and Mathematics (STEM) Education Programs and Reported Fiscal Year 2016 Obligations Appendix III: Current Implementation Status of Selected COMPETES Act Provisions to Coordinate Federal Science, Technology, Engineering, and Mathematics Education Appendix IV: Comments from the Office of Science and Technology Policy and the Committee on Science, Technology, Engineering, and Mathematics Education Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Bill J. Keller (Assistant Director), Kathryn O'Dea Lamas (Analyst-in-Charge), Morgan Jones, and Karissa Robie made significant contributions. Also contributing to this report were James Bennett, Deborah Bland, Charles Culverwell, Jill Lacey, Sheila McCoy, James Rebbe, Kathleen van Gelder, and Sarah Veale.
Why GAO Did This Study Education programs in STEM fields are intended to enhance the nation's global competitiveness. GAO reported in 2012 that there were more than 200 federal STEM education programs in fiscal year 2010. Since then, this portfolio of programs has changed. GAO was asked to review the landscape of federal STEM education programs. This report examines (1) how the federal investment in STEM education programs changed from 2010 to 2016, and (2) the extent to which the STEM education portfolio has been assessed. To answer these questions, GAO administered a web-based questionnaire to all federal STEM education programs funded in fiscal year 2016 and analyzed the results. GAO also reviewed relevant federal laws and agency documents, examined the implementation of relevant assessment requirements, and interviewed officials from relevant federal agencies. What GAO Found The federal investment in science, technology, engineering, and mathematics (STEM) education programs remained relatively stable from fiscal years 2010 to 2016, although the number of programs declined from 209 to 163 (see figure). While agencies reported that many of the same STEM education programs existed during this time period, the portfolio underwent various changes, including program consolidations, creations, and terminations. Nearly all STEM education programs in fiscal year 2016 overlapped to some degree with at least one other program in that they offered similar services to similar groups in similar STEM fields to achieve similar objectives. The Committee on STEM Education, an interagency body responsible for implementing the federal STEM education strategic plan, reported it managed this overlap through coordination with agencies administering these programs. The Committee on STEM Education has not fully met its responsibilities to assess the federal STEM education portfolio. Specifically, the Committee has not reviewed programs' performance assessments, as required by its authorizing charter, nor has it documented those assessments in its inventory, as required by law. Such efforts could encourage the use of evidence-based practices across the portfolio—a key national goal of the STEM education strategic plan. These efforts could also enhance public awareness of the administering agencies' efforts to assess programs' performance. In addition, the Committee has not reported the participation rates of underrepresented groups in federal STEM education programs, as required by law. By reporting this information, the Committee could better assess whether programs are broadening access to groups historically underrepresented in STEM fields—another key goal of the strategic plan. What GAO Recommends GAO is making four recommendations, including three to the Committee on STEM Education to review performance assessments of STEM education programs, document those assessments, and report programs' participation rates of underrepresented groups. The Committee on STEM Education agreed with GAO's recommendations.
Background ISO 55000 defines asset management as “the coordinated activity of an organization to realize value from assets.” This approach includes, for example: developing an understanding of how each of an organization’s assets contributes to its success; managing and investing in those assets in such a way as to maximize that success; and fostering a culture of effective decision making through leadership support, policy development, and staff training. While ISO defines an asset as any item, thing, or entity that has potential or actual value to an organization, in this report we focus on real property assets. Asset management can help federal agencies optimize limited funding and make decisions to better target their policy goals and objectives. See fig. 1 for an example of an asset management framework. Asset management as a distinct concept developed in the 1980s, and since that time, organizations around the world have published a number of standards and leading practices. These include: Publicly Available Specification (PAS) 55: The British Standards Institution published this standard in its final form in 2008. This standard focuses on the management of physical assets such as real property and describes leading asset management practices in areas such as life cycle planning, risk management, cost avoidance, and collaborative decision-making. Additionally, the standard provides a checklist for organizations to assess the maturity of their asset management framework. Some public services, utilities, and oil and gas sectors in the United Kingdom and other countries have adopted this standard. The British Standards Institution formally withdrew this standard in 2015 after the publication of ISO 55000, but it remains in use as a reference for many organizations. ISO 55000: This standard, published in 2014, is a series of three documents, collectively referred to as “ISO 55000.” It is based on the earlier PAS 55 standard but with stated applicability to all types of assets as opposed to just the physical assets covered by PAS 55. Committees with members from more than 30 countries identified common asset management practices and developed this international consensus standard that, according to ISO, applies to the broadest possible range of assets, organizations, and cultures. Some public and private sector organizations from around the world including utilities, infrastructure management firms, cities, federal agencies, and others have adopted the standard for their real property assets. See appendix III for a summary of the key elements of the ISO 55000 standards. International Infrastructure Management Manual: Initially published in 2000, this manual became one of the first sets of internationally accepted asset management leading practices. The Institute of Public Works Engineering Australasia published the most recent edition in 2015. The current manual complements the ISO 55000 standards and includes case studies of how organizations in different sectors have approached asset management. It provides detailed information on how to create and implement an effective asset management framework, such as how to incorporate estimates of future demand for services. Various organizations, particularly in sectors that manage physical assets, have adopted the manual as a reference. In the United States, within the federal government’s executive branch, OMB and GSA are responsible for providing leadership in managing federal real property—one of the government’s major assets. 
OMB is tasked with overseeing how federal agencies devise, implement, manage, and evaluate programs and policies. OMB has provided direction to federal agencies by issuing various government-wide policies, guidance, and memorandums related to asset management. For example: OMB's 2017 Capital Programming Guide outlines a capital-programming process, including how agencies should effectively and collectively manage a portfolio of capital assets and requirements for agencies' strategic asset management plans; OMB's Circular A-123 directs agencies to conduct enterprise risk management assessments to identify significant risks to agency goals and operations; OMB's Memorandum 18-21 expands the responsibilities of federal agencies' senior real property officers in leading and directing their agencies' real property programs. GSA's Office of Government-wide Policy is generally responsible for identifying, evaluating, and promoting best practices to improve the efficiency of real property management processes. This office has provided guidance for federal agencies and published performance measures. In 2004, the President issued Executive Order 13327 directing Chief Financial Officers Act (CFO Act) agencies to designate a senior real property officer responsible for establishing an asset management-planning process and developing a plan to carry out this process. Among other things, this plan was to describe the agency's process for: identifying and categorizing all real property managed by the agency, prioritizing actions needed to improve the operational and financial management of the agency's real property inventory, using life-cycle cost estimations for those actions, and identifying asset management goals and measuring progress towards those goals. The order also required agencies to manage their real property assets in a manner that supports the agency's asset management plan, goals, and strategic objectives. In addition, Executive Order 13327 tasked GSA with providing policy oversight and guidance to inform federal agencies' real property management efforts and required that OMB review agencies' efforts in implementing their asset management plans and completing the other requirements specified in the executive order. The executive order also established the Federal Real Property Council (FRPC)—chaired by OMB and composed of senior management officials from CFO Act agencies—and called for the FRPC to develop guidance, collect best practices, and help federal agencies improve the management of real property assets. In response to this executive order, in 2004 the FRPC developed guidance describing guiding principles that agencies' asset management practices should align with, requirements for what agencies should include in their asset management plans, and a template for agencies to follow when compiling these plans. Specifically, the guidance stated that each agency's real property asset management plan should link the asset management framework to the agency's strategic goals and objectives, describe a process for periodically evaluating assets, and describe a process for continuously monitoring the agency's framework. More recent federal asset management initiatives have focused on efficiently managing and reducing federal agencies' real property holdings. For example, in 2012 OMB directed the 24 CFO Act agencies to maintain their civilian real-estate inventory at or below their then-current levels, a policy known as Freeze the Footprint.
In 2015, OMB issued its National Strategy for the Efficient Use of Real Property and its accompanying Reduce the Footprint policy requiring the CFO Act agencies to set annual targets for reducing their portfolio of domestic office and warehouse space. Subsequently, the Federal Assets Sale and Transfer Act of 2016 established the Public Buildings Reform Board to identify opportunities for the federal government to reduce its inventory of civilian real property and reduce its costs. The act also requires the head of each executive agency to provide annually to GSA information describing the nature, use, and extent of the agency’s real property assets. In addition, the Federal Property Management Reform Act of 2016 codified the Federal Real Property Council to, among other things, ensure efficient and effective real-property management while reducing costs to the federal government. The act requires executive branch agencies to annually submit to the Federal Real Property Council a report on all excess and underutilized real property in their inventory. Effective Asset Management Frameworks Include Six Key Characteristics Reflected in Selected Agencies’ Practices Based on our review of the ISO 55000 standards, asset management literature, and interviews with experts, we identified six key characteristics of an effective asset management framework: (1) establishing formal policies and plans, (2) maximizing an asset portfolio’s value, (3) maintaining leadership support, (4) using quality data, (5) promoting a collaborative organizational culture, and (6) evaluating and improving asset management practices (see fig. 2). See appendix II for a more detailed explanation of how we identified these key characteristics. Each of the six federal agencies we reviewed had a real property asset management framework that included some of these key characteristics. However, agencies varied in how they performed activities in these areas. In addition, the scope and maturity level of the agencies’ asset management frameworks varied. For example, while some agencies’ asset management policies applied to large portions of their portfolios, other agencies’ policies applied to only certain portions of their portfolios. In addition, two agencies—the Corps and Coast Guard—told us they were using the ISO 55000 standards. For example, according to Corps officials, the Corps is in the process of incorporating elements of the ISO 55000 standards into its frameworks. Coast Guard officials told us they were using the ISO 55000 standards as a benchmark to compare against their existing framework. According to OMB and GSA officials, some of the differences in agencies’ asset management frameworks can be attributed to differences such as agency mission needs and the types of assets that each manages. For example, the real property asset portfolios of the six agencies we reviewed differed substantially in the types, numbers, and total replacement values of the assets. See table 1 for more information on the agencies’ asset portfolios and fig. 3 for examples of agency assets and their primary uses. Below we discuss the six key characteristics of an effective asset management framework and how the six selected agencies performed asset management activities in these areas. Establishing Formal Policies and Plans Formal policies and plans can help agencies utilize their assets to support their missions and strategic objectives. 
According to literature we reviewed, developing a formal asset management plan can help agencies take a more strategic approach to their asset management decision making and identify key roles and responsibilities, the resources required to implement their plans, potential implementation obstacles, and strategies for overcoming those obstacles. In addition, several experts we interviewed stated that having an asset management plan that describes the overarching goals of the organization and how the organization's assets relate to those goals is an important element of an asset management framework. Each of the six agencies we reviewed had some documentation such as asset management plans, investment strategies, or technical orders that lay out how the agency conducts asset management activities. This documentation covered important areas such as collecting data, prioritizing assets, and making investment decisions, along with documentation detailing the roles and responsibilities of key officials, for example: In 2014, the Corps published a Program Management Plan for Civil Works Asset Management that laid out a vision, tenets, and objectives for asset management along with the roles and responsibilities of key officials. Corps officials told us that this document functions as a strategic asset management plan for the Corps' Civil Works asset portfolio, and the plan contains foundational principles such as how the Corps will assess risk and measure the performance of its framework. Since 2006, the Coast Guard Civil Engineering program has been developing a series of manuals, process guides, and technical orders that provide detailed procedures to support implementation of an overarching asset management model. Coast Guard officials told us this model will cover all of the Coast Guard's real property assets and reflect the agency's mission and objectives. In addition, each of the six agencies we reviewed had developed a formal asset management plan in response to Executive Order 13327 from 2004. One agency had a plan that officials said reflected their current practices. Officials from the remaining five agencies told us that the practices contained within their original asset management plans had been superseded by later policy documents. For example: NASA officials told us the agency's 2008 Real Property Asset Management Plan no longer reflects NASA's overarching asset management framework. Officials said that NASA instead uses a series of policy documents, procedural requirements, and annual data calls to set out its framework. Park Service officials told us the agency's 2009 Asset Management Plan is still in place, though some of the practices in that document have been superseded by more recent policy documents including the Capital Investment Strategy. Further, five of the agencies linked their asset management goals and objectives to their agency mission and strategic objectives in their asset management plans. For example, GSA's 2012 plan states that it supports GSA's overall mission and goals, as well as the mission of the Public Buildings Service, by organizing real property decision making and supporting the Public Buildings Service's objectives for owned assets. Maximizing an Asset Portfolio's Value Prioritizing investments can help agencies better target resources toward assets that will provide the greatest value to the agency in meeting its missions and strategic objectives.
Each of the six agencies we reviewed has documentation describing a process for prioritizing asset investments. For example, each agency has documentation describing a scoring process for prioritizing projects based on specific criteria, such as the risks an asset poses to agency operations, asset condition, project cost, and project impact. Some agency officials told us that scoring projects in this manner provides an objective foundation for decision making that can lead to more consistent investment decisions and improved transparency. In addition, each of the six agencies has implemented, or is in the process of implementing, a centralized decision-making process for prioritizing high-value projects and delegating approval for lower-cost projects to local or regional offices. The agencies vary, however, in the types of projects for which they use centralized decision-making and the degree to which they use the project scores, for example: NASA field centers are authorized to independently prioritize and approve certain projects with total costs under $1 million. For larger projects, however, NASA field centers develop project scores based on a mission dependency index measuring the relative risk an asset poses to NASA's missions. To prioritize and approve these larger projects, NASA headquarters staff consider projects submitted by centers using the mission dependency scores, asset conditions, and other factors such as flooding risk, and make funding decisions using NASA's available budget. GSA categorizes each of its assets into tiers based on the asset's financial performance and capital investment needs. Additionally, since 2017 GSA has been using an Asset Repositioning Tool, which uses more detailed data analysis to rank assets within each tier. GSA uses these designations when prioritizing asset investments. For projects with projected costs below the prospectus level (approximately $3.1 million in fiscal year 2018), GSA regions use each asset's tier and core designation to allocate funds across the region's asset portfolio. For larger projects, the GSA Administrator and GSA's Public Buildings Service Commissioner and Deputy Commissioner are responsible for determining the priority level of projects. The Corps is in the process of implementing a procedure that would base funding decisions for maintenance and repair projects on a portfolio-wide comparison of scores, with the goal of approving the projects that will reduce the greatest amount of risk. This differs from the Corps' previous system of allocating projects' funding to local divisions and districts based on historical amounts and staff judgment. To prioritize projects, the Corps calculates a score for each project based on an assessment of the asset's condition and the risk the asset poses to operations. For example, the Corps measures risk for a lock and dam component such as a gate (see fig. 5) based on the potential economic impact of failure to users (e.g., shipping companies that use the waterway). The Corps has a plan to implement this process by 2020, a plan that Corps officials told us they expect to complete on schedule. Officials from these agencies told us that more centralized decision-making processes can provide improved standardization and clarity in the prioritization process, particularly for high-value projects, and can help ensure that mission-critical projects receive funding.
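The scoring processes described above share a common shape: rate each project against weighted criteria, combine the ratings into a single score, and route projects above a cost threshold to centralized review. The Python sketch below illustrates that general shape only; the criteria, weights, threshold, and project data are hypothetical, and no agency's actual scoring model is reproduced.

```python
# An illustrative sketch of criteria-based scoring with centralized review.
# The criteria, weights, cost threshold, and project data are hypothetical.

WEIGHTS = {
    "mission_dependency": 0.4,    # relative risk the asset poses to the mission
    "condition_deficiency": 0.3,  # how degraded the asset is
    "risk_reduction": 0.2,        # risk the project would remove
    "cost_efficiency": 0.1,       # benefit per dollar
}

def score_project(project: dict) -> float:
    """Combine 0-100 criterion ratings into one weighted score."""
    return sum(WEIGHTS[name] * project[name] for name in WEIGHTS)

def prioritize(projects: list, local_limit: float = 1_000_000) -> tuple:
    """Delegate small projects to local offices; rank the rest centrally."""
    local = [p for p in projects if p["estimated_cost"] < local_limit]
    central = sorted(
        (p for p in projects if p["estimated_cost"] >= local_limit),
        key=score_project, reverse=True)
    return local, central

projects = [
    {"name": "Lock gate rehabilitation", "estimated_cost": 4_500_000,
     "mission_dependency": 90, "condition_deficiency": 80,
     "risk_reduction": 85, "cost_efficiency": 60},
    {"name": "Office roof patch", "estimated_cost": 250_000,
     "mission_dependency": 30, "condition_deficiency": 55,
     "risk_reduction": 20, "cost_efficiency": 70},
]
local, central = prioritize(projects)
print([p["name"] for p in central])  # highest-scoring central projects first
```

In practice, as the agency examples show, each organization weights and combines its criteria differently and pairs the scores with budget availability and other factors.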
As an example of how such centralized processes work in practice, Coast Guard officials cited a project involving a permanent repair to a failed steam heating pipe at the Coast Guard Yard near Baltimore. They said that this failure left several key buildings, including the Coast Guard's primary ship-painting facility, with intermittent service and an inability to complete certain critical tasks. According to officials, the Coast Guard's centralized decision-making process scored this project as a high priority because of the importance of the facilities involved, the impact of the failure, and the fragility of the temporary pipe that runs on the surface among other equipment (see fig. 4). Maintaining Leadership Support Leadership buy-in is important for organizational initiatives, and experts told us that management support is vital to implementing an asset management framework. However, officials from two of the six agencies told us that they have received varying levels of leadership support for asset management, for example: Corps officials told us that it can be a challenge to make senior leadership understand the value that improved asset management practices can provide to the agency, value that they said can affect the level of support the program gets. Forest Service officials told us that they have faced challenges obtaining the resources they need to develop their asset management program. In addition, in 2015 the Coast Guard received a report it had commissioned to examine the level of alignment between its asset management framework and the ISO 55000 standards. This report concluded, among other things, that the Coast Guard has faced challenges with strategic leadership related to asset management, including in balancing budgetary support for long-term initiatives—like developing an asset management framework—against short-term infrastructure investment needs and in communicating asset management policies. Using Quality Data Using quality information when making decisions about assets can help agencies ensure that they get the most value from their assets. Experts we spoke with cited data elements such as inventory information (e.g., asset age and location); condition information (e.g., how well the asset is performing); replacement value; and level of service (e.g., how the asset helps the agency meet its missions and strategic objectives) as important for maximizing an asset's value. Each of the six agencies collected inventory and condition data on its assets and used these data to make decisions about those assets, for example: The Forest Service requires its units, such as national forests and grasslands, to inventory and verify 100 percent of their asset data over a 5-year cycle. It has developed a standardized process for units to collect specific types of data for this inventory, such as condition data and deferred maintenance. According to Forest Service officials, the data tracked in the system inform several investment decisions, such as decisions on decommissioning of assets. GSA developed the Building Assessment Tool Survey to assess the overall condition of its assets and what investments they need. GSA uses the data collected from the survey, conducted every 2 years, to calculate a Facility Condition Index, which is the asset's current needs divided by its replacement value (see the illustrative sketch below). The Corps' 2017 policy for operational condition assessments lays out a methodology for assessing condition based on visible attributes and asset performance, such as the degree to which water is leaking around a lock gate (see fig.
5 for an example of what Corps officials described as a minor water leak). Under this policy, Corps officials assign a letter grade to the performance of each individual component within a Corps asset. Corps officials told us that there are key differences between this system and the maintenance management system they used previously. For example, officials said the Corps is now able to more easily compare the condition of its assets across the portfolio, and grade the condition of more types of asset components, a process that Corps officials said gives them a more complete understanding of how their assets are performing. Some agencies told us that they faced challenges related to collecting and maintaining asset data, for example, The Park Service uses data on the condition of its assets to calculate a facility condition index. Park Service officials told us that when they developed their asset management program in the early 2000s, they had to change many of their existing data collection processes and train their staff to manage the new data. NASA field centers are required to assess assets and enter key asset data into NASA's database, but according to NASA Headquarters officials, they have faced challenges collecting data from some Centers. For example, NASA Centers are required to review and revalidate the mission dependency scores for each of their assets every 3 years, but Headquarters officials told us not all Centers have entered such scores on all assets. Promoting a Collaborative Organizational Culture Aligning staff activities toward effective asset management and communicating information across traditional agency boundaries can ensure that agencies make effective decisions about their assets. Officials from three of the agencies we reviewed told us that having staff embrace asset management is a key to successful implementation, for example, Park Service officials told us they implemented an organizational change-management process and provided additional training to staff in key asset management areas such as data collection. Finally, they said that they tried to prevent asset management requirements from overwhelming the other tasks staff perform by, for example, considering staff time constraints when developing their data collection processes. Officials told us that they continue to streamline these processes to reduce field staff workload. The Corps' Program Management Plan includes chapters on communications strategies and organizational change management to promote an asset management culture. While these agency officials told us that obtaining leadership and staff buy-in is important for asset management implementation to be effective, officials from three of our six selected federal agencies cited managing organizational culture changes as an implementation challenge. For example, Corps officials told us that, prior to developing their framework, the different functional areas in the Civil Works Program were each responsible for their own assets and were not sharing asset information across areas. As a result, the Corps struggled with getting staff to work together and coordinate on asset management activities. To help mitigate this issue, Corps officials told us they have assigned dedicated asset management staff to each regional district to facilitate communication at the local level between staff in different functional areas, and developed a community of practice to discuss maintenance issues including asset management.
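The condition measures discussed under the quality data discussion above lend themselves to a short illustration. The report defines GSA's Facility Condition Index as an asset's current needs divided by its replacement value; the Python sketch below computes that ratio and maps it to a letter grade in the spirit of the Corps' component grading. The thresholds and dollar figures are assumptions for illustration only, not any agency's actual scale.

```python
# A brief sketch of the condition measures discussed above. The report
# defines GSA's Facility Condition Index (FCI) as an asset's current
# needs divided by its replacement value; the letter-grade thresholds
# below are hypothetical, not the Corps' actual grading scale.

def facility_condition_index(current_needs: float, replacement_value: float) -> float:
    """FCI = cost of current repair needs / asset replacement value."""
    return current_needs / replacement_value

def condition_grade(fci: float) -> str:
    """Map an FCI to an illustrative letter grade (assumed thresholds)."""
    for threshold, grade in [(0.05, "A"), (0.10, "B"), (0.20, "C"), (0.40, "D")]:
        if fci <= threshold:
            return grade
    return "F"

# Example: $2.4 million in current needs against a $30 million asset.
fci = facility_condition_index(2_400_000, 30_000_000)
print(f"FCI = {fci:.2f}, grade {condition_grade(fci)}")  # FCI = 0.08, grade B
```

A lower FCI indicates an asset in better condition relative to its value, which is why such indexes can be compared across a portfolio when prioritizing repairs.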
Evaluating and Improving Asset Management Practices Continuously evaluating the performance of an agency's asset management framework and implementing needed changes can optimize the value the agency's assets provide. According to literature we reviewed, an asset management plan should be evaluated and continuously improved over time to ensure it still reflects the organization's goals. Officials from each of the six agencies told us that they collect data to measure the performance of their asset management policies, and two agencies have continuous evaluation processes laid out in their asset management plans. For example: GSA's asset management plan describes the data GSA uses to track the performance of its framework, including information on operating costs, asset condition, asset utilization, operating income, and energy. The Corps evaluates its program by conducting maturity assessments. According to the Corps' 2014 Program Management Plan, these assessments measure the maturity level of its asset management program to review and identify gaps in achieving the asset management system's vision and objectives while efficiently using resources. Corps officials told us they self-assessed their own operations at the low end of the maturity scale, and they are using the results of the assessment to inform revisions to their Program Management Plan. In addition, officials from five of the six agencies told us they are in the process of developing or implementing major changes to their asset management policies, including developing new policies for collecting data, measuring asset criticality, and prioritizing investments, for example: The Coast Guard has been developing its asset management model since 2006 and, as previously mentioned, is in the process of developing manuals, process guides, and technical orders to support this model. NASA officials told us that they are in the midst of developing new policies and guidance for asset management based on a recently completed business process assessment. Officials said that the new process under development would involve more centralized planning and management across NASA instead of the more center-based asset management program they currently use, along with improved data collection practices. The Park Service is undertaking a program focused on improving the operation and maintenance of its real property portfolio. Officials told us that there are two major pieces to this effort: one to improve the efficiency of its data collection process by streamlining and consolidating systems to reduce the data collection and management burden on staff, and another to expand the Park Service's investment strategies to reflect the agency's top priorities and strengthen the role of the Developmental Advisory Board to ensure consistent application of investment goals. Experts and Practitioners Said Implementing an Asset Management Framework Can Be Challenging but Also Provides Benefits Experts and Practitioners Cited Managing Organizational Culture Changes and Capacity as Challenges to Implementing an Asset Management Framework According to the asset management experts and practitioners we interviewed, organizations can face challenges implementing an asset management framework. The two challenges mentioned most frequently were managing organizational culture changes and managing capacity challenges, such as a lack of skills and knowledge of management practices.
Managing Organizational Culture Changes Almost all the experts and over half of the practitioners we interviewed stated that managing the organizational culture changes that result from implementing a new asset management framework is a challenge. For example, several experts and practitioners stated that an effective framework requires enterprise-wide policies to manage assets and that changing the organizational culture from one in which departments or divisions are used to working independently to one that promotes interdepartmental coordination and information sharing can be challenging. Specifically, one expert representing a U.S. municipality told us that a key implementation challenge it faced was setting up policies to promote more information sharing across the organization. This expert stated that previously the organization's data systems were not set up to share information across departments, leading to data silos that hindered coordination across the agency. Similarly, another expert stated that asset management is by nature a multidisciplinary practice, which crosses through many functional silos that are typically present in large organizations. These silos are necessary to allow for the required level of specialization, but if these silos do not communicate, inefficiencies and errors in asset management result. He stated that in these organizations, a key challenge in implementing an asset management framework is getting officials in these different departments to agree upon and transition to a common set of goals and direction for the framework. Several experts and practitioners stated that obtaining the leadership and staff buy-in that is critical for asset management implementation to be effective can be a challenge. For example, one expert representing an organization that had recently implemented a new asset management framework stated that it faced resistance from some of its staff. These employees had been working for the organization for a long time, had not been updating their skills over time, and were resistant to having to learn a new process. In addition, it was difficult to convince staff previously invested in the old decision-making process to adjust to a new process. A study examining asset management practices of public agencies in New Zealand found that obtaining buy-in and support from leadership and staff was critical. According to this study, for asset management to be successful, it has to become part of the organization's culture, and for that to happen, leadership needs to "buy into" the process, the reason why it is important, and the value of its outputs. Managing Capacity Challenges Over half of the experts and all of the practitioners we interviewed cited capacity challenges to implementing an effective asset management framework, such as lack of skills, knowledge of management practices, asset data, and resources. Some experts and practitioners stated that implementing an effective framework might require skills and competencies that the organization may not currently have. For example, one expert stated that organizations might not have the in-house expertise needed to implement a risk management approach.
Similarly, a practitioner representing an asset management firm that provides consulting services to municipalities noted that lack of in-house expertise could lead to the organization's over-reliance on consultants; such over-reliance, in turn, can result in the organization's not following through with the new asset management practices once the consultants finish their work. Several experts and practitioners also stated that some organizations struggle with collecting and managing data needed to conduct asset management. For example, one expert stated that an important first step to implementing an asset management framework is to develop comprehensive records of the organization's assets. However, according to this expert, it is difficult to actually collect and use good information about assets to deliver robust planning. The age of assets can compound this challenge because with older assets sometimes the original plans and specifications have been lost. Several experts and practitioners also mentioned lack of sufficient resources as an implementation challenge. Specifically, one expert noted that obtaining funding to support asset management activities is a challenge. This expert stated that it is more difficult to secure funding for improving components of an asset management framework, such as improving data collection processes, than it is to secure funding for tangible investments in new assets. As we previously discussed, some of the experts that we interviewed stated that evaluating and continually improving asset management practices is an important characteristic of an effective asset management framework. Addressing Culture Change and Capacity Challenges Experts and practitioners we interviewed identified potential strategies for addressing and overcoming implementation challenges, including strategies for managing culture change and capacity challenges such as lack of skills and resources. See table 2 for the strategies experts and practitioners identified. We have previously reported on practices and implementation steps that can help agencies manage organizational change and transform their cultures to meet current and emerging needs, maximize performance, and ensure accountability. Several of these practices—such as involving employees in the transformation effort, ensuring top leadership drives the transformation effort, and establishing a communication strategy—could address some of the potential change-management challenges that agencies might face when implementing an asset management framework. For example, in our prior work on organizational change we have noted that a successful transformation must involve employees and their representatives from the beginning to increase employees' understanding and acceptance of organizational goals and objectives, help establish new networks and break down existing organizational silos, and gain their ownership for the changes that are occurring in the organization. Some of the experts we interviewed who had implemented ISO 55000 stated that they involved employees in the transformation effort. For example, one expert representing an organization with recent success in implementing ISO 55000 stated that the managers at that organization involved staff in the implementation process, which helped foster ownership of the new asset management program.
Experts and Practitioners Cited Improved Data and Other Benefits to Adopting an Asset Management Framework Asset management experts and practitioners we interviewed cited a number of potential benefits to adopting an asset management framework that aligns with the six characteristics we identified, including: (1) improved data and information about assets, (2) better-informed decisions, and (3) financial benefits. Improved Data and Information about Assets About half of the experts and practitioners we interviewed stated that implementing an asset management framework that aligns with the six characteristics we identified and discussed previously can result in an organization's collecting more detailed and higher-quality information about assets. For example: One expert representing a U.S. municipality that had recently implemented a new asset management framework stated that it now collects and tracks more detailed asset data, including information about the condition and performance of its assets. According to this expert, this more detailed information provides asset managers with a better understanding of how much asset repairs actually cost in the long term, how long repairs take, and which assets are most critical to repair or replace. Additionally, they are in the process of integrating this data into the organization's capital-improvement project modeling, a step that in turn has allowed the asset managers to make better investment decisions. This expert also noted that collecting detailed data about the municipality's assets has enabled the asset managers to provide more information to the public and to decision-makers. Another expert we interviewed representing an organization that had recently adopted a new asset management framework stated that its data have improved as a result. According to this expert, prior to implementing the program, the organization had a good inventory of its assets, but it was missing dynamic information about condition and performance. The managers made several changes to address this situation, including investing in information technology systems and infrastructure to collect and track condition data in real time. As a result, the organization is now able to track trends in asset performance failures and anticipate that over time it will predict future performance failures with this information. Better-Informed Decisions Most of the experts and all of the practitioners who responded to this question stated that another benefit of implementing an asset management framework is that it can help organizations make better-informed asset management decisions. For example, some of these experts and practitioners stated that having a framework that includes improving interdepartmental coordination, collecting more detailed data, and having a strategic approach to asset management helps organizations make better-informed decisions about how to maintain and invest in their assets. In addition, about one-half of the experts stated such a framework can also help organizations better understand the risks the organization faces and make informed decisions about the organization's assets. For example: One expert stated that a benefit to implementing an asset management framework that incorporates interdepartmental coordination is that everyone within the organization is working to achieve the same goals in both the short term and long term, which results in better decisions and better customer service.
This expert worked with a foreign network operator to implement an asset management system that would support the company's goals for increasing its electric grid capacity. He found that for different assets, the company had adopted different asset strategies to deal with future demand growth, approaches that resulted in misaligned asset strategies. The differences in the individual asset strategies were identified and realigned. If these differences had not been recognized, this lack of coordination could have resulted in inefficient decision-making and the loss of time and money. Another expert representing a U.S. municipality stated that by implementing an asset management framework, the municipality's program managers are now able to make better-informed asset management decisions and present information and proposals to the city council and budget committee. In addition, this detailed information has allowed managers to better assess the condition of their assets across the portfolio and to compare it to industry standards in the respective asset classes. Financial Benefits Over half of the experts and a third of the practitioners we interviewed stated that effective asset management practices can result in financial benefits to the organization, such as cost avoidance and better management of financial resources. For example, One expert stated that asset management can lead to a greater understanding of budget needs and better long-term capital and lifecycle investment planning. In addition, this expert stated that overall, asset management improves clarity in terms of where funds are spent. This enhanced insight can then inform asset management decision-making to produce future cost savings. A practitioner representing a local municipality in Canada stated that since implementing an asset management framework, the municipality is now making better-informed decisions about maintenance and has identified and eliminated unneeded maintenance activities, steps that have resulted in cost savings. For example, by analyzing condition data, the municipality identified an optimal point in time for addressing maintenance issues on its roads and achieved a fivefold-to-tenfold cost reduction over previous repairs. Government-Wide Asset Management Information Does Not Fully Reflect an Effective Asset Management Framework Experts and Practitioners Cited ISO 55000 Standards as a Resource to Inform Agency Efforts Experts and practitioners we interviewed most often cited the ISO 55000 standards as a useful resource that provided a solid foundation for an asset management framework and could inform federal agencies' asset management efforts. Specifically, these experts and practitioners stated that the standards are flexible and adaptable to different types of organizations regardless of size or organization mission, applicable to different types of assets, and internationally accepted and credible. About half of the experts we interviewed had used the standards, and some of these experts shared examples of how their organization's asset management approach improved by implementing ISO 55000. See, for example, the experience of Pacific Gas & Electric below. Pacific Gas and Electric's (PG&E) experience with International Organization for Standardization (ISO) 55001 standard: In 2014 and 2017, PG&E, a public utility company in California, attained Publicly Available Specification (PAS) 55 and ISO 55001 certification and recertification for its natural gas operations.
Its physical assets include gas transmission and distribution pipelines, pressure regulator stations, gas storage facilities, and meters. According to PG&E, a key benefit from implementing the standards is that PG&E has developed a consistent strategy for managing its natural gas operations assets. This, according to PG&E, has enabled the utility to develop a framework for program managers from different parts of the organization, such as finance, operations, engineering, and planning, to collaborate more effectively and work together towards one strategic goal rather than competing with one another for funding. According to PG&E, this new structure allows the program managers to prioritize investment decisions across their asset portfolio to align with corporate objectives. Officials from five of the six agencies we interviewed stated that they were familiar with the ISO 55000 standards, and officials from the Corps stated that they use selected practices from ISO 55000. Corps officials stated that using the standard has provided several benefits to their organization. For example, they stated that using the standard has informed their budget process and has helped them make better-informed decisions about critical reinvestment. In addition, it has allowed them to develop a consistent approach to managing all of their physical assets across different lines of business. However, officials from four agencies raised some concerns about using these standards. These included concerns about the upfront costs and resources needed to implement the standards, as well as about the standards' applicability to the federal government, given the size, scope, and uniqueness of agencies' assets and the diverse missions of each agency. For example, officials from one selected agency stated that in their view, the standards are better suited for private organizations because federal agencies have federal requirements they need to meet, such as those for disposition of real property, which may affect their asset management decision making. We have previously reported on challenges federal agencies face with disposing of assets in part due to legal requirements agencies must follow. Several experts and officials from one practitioner organization we interviewed stated that they thought that federal agencies across the government could implement the ISO 55000 standard. The experts stated that key benefits of implementing the standard would be that it would result in a more consistent asset management approach and help federal agencies better manage resources. For example, one expert stated that a key benefit of implementing the standard would be to drive federal agencies to be better stewards of their resources by better utilizing mission assets. In addition, some experts and practitioners also stated that federal agencies do not need to implement the full standard or seek certification to achieve results; agencies can decide which practices in the standard are most relevant to their organization and implement those practices. The ISO technical committee that produced the ISO 55000 standards is drafting a new standard on asset management in the public sector. According to ISO, this standard, expected to be published in December 2019, will provide guidance to any public entity at the federal, state, or local level including more detailed information on how to implement an asset management framework.
Government-Wide Asset Management Information Lacks Many Elements of an Effective Asset Management Framework While OMB has issued government-wide requirements and guidance to federal agencies related to asset management, this guidance does not present a comprehensive approach to asset management because it does not fully align with standards and key characteristics, nor does it provide agencies with a clearinghouse of information on best practices for federal real property management, as required by Executive Order 13327. As mentioned earlier, OMB has issued various government-wide policies, guidance, and memorandums related to federal asset management. For example, in response to Executive Order 13327 in 2004, the FRPC—chaired by OMB—developed guiding principles for agencies' asset management practices and for developing a real property asset management plan. Specifically, the guidance stated that each real property asset management plan should, among other things: link the agency's asset management framework to the agency's strategic goals and objectives, describe a process for periodically evaluating assets, and describe a process for continuously monitoring the agency's framework. In addition, OMB's Circular A-11 describes requirements for the agency capital planning process, such as prioritizing assets to support agency priorities and objectives, while OMB's Circular A-123 describes risk management requirements for agencies, and OMB's Memorandum 18-21 describes requirements for an agency's senior real property officers, such as coordinating real property planning and budget formulation. Further, the Federal Assets Sale and Transfer Act and the Federal Property Management Reform Act—both of 2016—collectively contain provisions related to asset management including establishing procedures for agencies to follow when disposing of real property assets and requiring agencies to submit data on leases to the FRPC. Taken as a whole, the OMB guidance lacks many of the elements called for by the ISO 55000 standards and the key characteristics we identified. For example, the guidance: covers several different areas of asset management but does not direct agencies to develop a comprehensive approach to asset management that incorporates strategic planning, capital planning, and operations, as recommended by the ISO 55000 standards and the key characteristics we identified. directs agencies to continuously monitor their asset management frameworks and identify performance measures but does not direct agencies to use the results to improve their asset management frameworks in areas such as overall governance, decision making, and data collection, as called for in ISO 55000 standards and the key characteristics we identified. directs agencies to have a senior official in charge of coordinating the real property management activities of the various parts of the organization but does not direct agencies to demonstrate leadership commitment to asset management or to define asset management roles and responsibilities for each element of the agency, as called for in ISO 55000 standards and the key characteristics we identified. directs agencies to ensure that their real property management practices enhance their decision making, but does not direct agencies to actively promote a culture of information sharing or ensure that the agencies' decisions are made on an enterprise-wide basis, as called for in ISO 55000 standards and the key characteristics we identified.
directs agencies to identify asset management goals and enhance decision making, but does not direct agencies to establish the scope of their asset management frameworks by, for example, determining how the agency should group or organize the management of its different types of assets, as called for in ISO 55000 standards. Moreover, OMB staff told us that while the executive order's requirements for federal agencies to develop an asset management plan and related processes remain in effect, OMB's real property management focus has shifted to the National Strategy for the Efficient Use of Real Property and its accompanying Reduce the Footprint initiatives issued in 2015. These initiatives emphasize efficiently managing and using space, rather than overall asset management. OMB staff said that they view asset management as a tactical activity, separate from broader strategic and capital planning efforts, where agencies make operational-level policies to support their real property portfolio. However, this approach to asset management differs from ISO's definition of asset management, which encompasses both the capital-planning and asset management levels of OMB's policy model. Under the Reduce the Footprint initiative, federal agencies are required to submit annual Real Property Efficiency plans that specify their overall strategic and tactical approach to managing real property, provide a rationale for and justify their optimum portfolio, and direct the identification and execution of real property disposals, efficiency improvements, and cost-savings measures. As a result, according to OMB staff, they no longer require agencies to develop a comprehensive asset management plan. We recognize that reducing and more efficiently managing government-owned and leased space are important goals. However, effective asset management is a more comprehensive objective that seeks to best leverage assets to meet agencies' missions and strategic objectives. For example, some agencies have high-value real property assets that are not building space, such as those at the Corps and the Park Service. See table 3 for examples of these types of assets at the six selected agencies in our review. For example, the Corps has over 700 dams—the age and criticality of which require the Corps to conduct regular maintenance and, in some cases, major repairs to assure continued safe operation. In 2015, the Corps estimated the cost of fixing all of its dams that need repair at $24 billion. Similarly, in 2016, we reported that the Park Service's deferred maintenance for its assets averaged about $11.3 billion from fiscal year 2009 through fiscal year 2015 and that in each of those years, deferred maintenance for paved roads made up the largest share of the agency's deferred maintenance—about 44 percent. Assets classified as paved roads in the Park Service's database include bridges, tunnels, paved parking areas, and paved roadways. For these and other agencies with similar portfolios, the agencies' Real Property Efficiency plans are not relevant to managing the bulk of their assets, and guidance primarily focused on buildings and office space is of limited use. In addition, without specific information to help all federal agencies evaluate their current practices and develop more comprehensive asset management approaches, federal agencies may not have the knowledge needed to maximize the value of their limited resources.
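The element-by-element comparison above can be recorded as a simple checklist. The Python sketch below shows one hypothetical way to track which key characteristics existing guidance addresses and to surface the gaps; the coverage entries are illustrative placeholders rather than GAO's complete analysis.

```python
# A minimal sketch of the element-by-element gap analysis described
# above: checking existing guidance against the six key characteristics.
# The coverage entries are illustrative placeholders, not GAO's analysis.

KEY_CHARACTERISTICS = [
    "formal policies and plans",
    "maximizing portfolio value",
    "leadership support",
    "quality data",
    "collaborative culture",
    "evaluating and improving practices",
]

# True means guidance fully addresses the characteristic; False means
# it is only partially addressed or absent (entries are hypothetical).
guidance_coverage = {
    "formal policies and plans": False,
    "maximizing portfolio value": True,
    "leadership support": False,
    "quality data": True,
    "collaborative culture": False,
    "evaluating and improving practices": False,
}

gaps = [c for c in KEY_CHARACTERISTICS if not guidance_coverage[c]]
print("Elements not fully reflected in guidance:", gaps)
```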
In addition, while Executive Order 13327 requires the FRPC to provide a clearinghouse of information on best practices for federal real property management, this information is not currently available in existing guidance or other sources. GSA officials and OMB staff stated they do not currently have plans to compile this information. Because of this, existing guidance falls short of what an effective asset management framework might include. GSA officials told us that while certain agencies have shared information on asset management at meetings of the FRPC, the council does not take minutes or make this information readily available to agencies outside of the meetings. Given OMB's shift in focus, OMB staff said that they did not plan to update their guidance. However, Standards for Internal Control in the Federal Government state that communicating information, such as leading practices, is vital for agencies to achieve their objectives. Further, government-wide information in some cases is not available, such as information on practices federal agencies have successfully used to conduct asset management. There is merit to having key information on successful agency practices readily accessible for federal agencies to use. For example, officials from three of the six agencies we spoke with said information on best practices for asset management would be helpful to them in developing their agencies' asset management frameworks. Such information could include practices that are described in ISO 55000 and that federal agencies have successfully used to improve asset management. For example, one agency official stated that it would be useful to have a compilation of asset management practices that federal agencies use to determine if any of those practices might be applicable to an agency. Similarly, an official from another agency stated that the agency is currently evaluating opportunities to improve its asset management program and that the agency would be interested in learning more about asset management processes across the federal government in order to inform the agency's asset management efforts. Without information such as these officials described, federal agencies lack access to practices geared to them on how to develop an asset management plan and other asset management practices.

Conclusion

Federal agencies collectively hold billions of dollars in real property assets—ranging from buildings, warehouses, and roads to structures including beacons, locks, and dams—and are charged with managing these assets. The effective management of all of an agency's real property assets plays an important role in its ability to execute its mission now and into the future. However, because existing federal asset management guidance does not fully reflect standards and the key characteristics, such as directing agencies to develop a comprehensive approach to asset management that incorporates strategic planning, capital planning, and operations, federal agencies may not have the knowledge needed to maximize the value of their limited resources. In addition, because there is no central clearinghouse of information to support agencies' asset management efforts, as required by Executive Order 13327, agencies may not know how best to implement asset management activities, including using quality data to inform decisions and prioritize investments.
A reliable central source of information on current effective asset management practices could support agencies in making progress in their asset management efforts, helping them more efficiently fulfill their missions and avoid unnecessarily expending resources. Further, sharing experiences across the government could assist agencies' efforts to adopt, assess, and tailor an asset management approach appropriate to their needs and to support efforts to more strategically manage their real property portfolios.

Recommendation

We are making the following recommendation to OMB: The Director of OMB should take steps to improve existing information on federal asset management to reflect leading practices such as those described in ISO 55000 and the key characteristics we identified and make it readily available to federal agencies. These steps could include updating asset management guidance and developing a clearinghouse of information on asset management practices and successful agency experiences. (Recommendation 1)

Agency Comments

We provided a draft of this report for review to the Office of Management and Budget, the General Services Administration, the National Aeronautics and Space Administration, and the Departments of Agriculture, Defense, Homeland Security, and the Interior. The Forest Service within the Department of Agriculture agreed with our findings and noted that GAO's key characteristics for effective asset management will help the Forest Service manage its assets and resources effectively. Further, the Forest Service stated that asset management leading practices are critical in measuring efficiencies and meeting strategic goals for its diverse and large portfolio. The Forest Service's written comments are reproduced in appendix IV. The Departments of Homeland Security and the Interior, and the General Services Administration provided technical comments, which we incorporated as appropriate. The Office of Management and Budget, the Department of Defense, and the National Aeronautics and Space Administration had no comments on the draft report. We are sending copies of this report to the appropriate congressional committees; the Secretaries of the Departments of Agriculture, Defense, Homeland Security, and the Interior; the Administrators of the General Services Administration and National Aeronautics and Space Administration; and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Asset Management in Canada

As of 2016, public entities in Canada owned about $800 billion worth of infrastructure assets including roads, bridges, buildings, waste and storm water facilities, and public transportation assets. Municipalities owned the majority of these assets, around 60 percent, with provincial and federal entities owning around 38 percent and 2 percent respectively.

Asset Management Policy and Support

Federal Asset Management Policies

The federal government of Canada owns or leases approximately 20,000 properties containing about 37,000 buildings with about 300 million square feet of floor space.
In the fiscal year that ended in 2016, the federal government spent around $7.5 billion on managing its real property portfolio, of which about 80 percent went to operating expenditures and about 20 percent went to capital investments such as acquisitions and renovations. This portfolio is managed and controlled by 64 federal agencies, departments, and “Crown corporations” with primary uses including post offices, military facilities, government offices, employee housing, and navigation facilities such as lights. The Treasury Board of Canada, supported by the Treasury Board Secretariat, provides policy direction to agencies and departments for their real property assets along with approving certain larger projects, acquisitions, and disposals. The Treasury Board of Canada Secretariat is currently conducting a portfolio-wide review of the federal government's real property management in order to develop a road map for the most efficient and effective model for federal real property asset management. Treasury Board Secretariat officials told us that they have preliminarily found that the federal government does not have a government-wide asset management strategy and faces challenges related to the availability of current and consistent asset condition data.

Federal and Provincial Support for Municipal Asset Management

Municipalities own and manage most of Canada's public infrastructure, and in recent years, municipal governments have been leaders in developing and implementing asset management frameworks. By the early 2000s, several large cities including Hamilton, Calgary, and Edmonton began developing frameworks to reduce costs and improve the management of certain types of municipal assets such as those related to water distribution and treatment. More recently, the federal government and several provincial governments have promoted asset management for municipalities in a variety of ways including by awarding grants and attaching requirements to infrastructure funding. Some of these programs have focused on small municipalities, which make up the large majority of all Canadian municipalities but may face particular challenges in obtaining the resources to develop and implement an asset management framework. The federal government provides infrastructure funding to municipalities through several programs, including the Federal Gas Tax Fund. This fund provides around $1.5 billion in funding to municipalities each year for projects such as water treatment, roads and bridges, broadband connectivity, airports, and public transit, and does not require yearly reauthorization. Each of Canada's municipalities receives funding through this program by formula, and funds are routed through the provinces, which can attach their own requirements. In the 2014 set of agreements between the federal government and the provinces, provinces were required to institute asset management requirements for municipalities to receive gas tax funds, and each of the provinces developed separate requirements for municipalities under its jurisdiction. These requirements took several forms. For example, Ontario required each municipality to develop an asset management plan by the end of 2016, while Nova Scotia has withheld a small portion of its total provincial gas tax allocation to use toward developing a province-wide asset management framework for municipalities to use. The federal government also provides funding to municipalities for asset management.
Through the Municipal Asset Management Program, administered by the Federation of Canadian Municipalities (FCM), Infrastructure Canada made available $38 million over 5 years for Canadian municipalities and partnering not-for-profit organizations to improve municipal asset management practices. The maximum grant amount for municipalities is $38,000. Eligible activities under this program include assessing asset condition, collecting data on asset costs, implementing asset management policies, training staff, and purchasing software. FCM officials told us that, as of March 2018, they had received 253 grant applications and that, of the grants they had disbursed so far, around:

- 25 percent of grantees used the funds for data projects,
- 15 percent to develop asset management plans,
- 2 percent for staff training,
- 4 percent for asset management system operations, and
- 60 percent for some combination of these purposes.

Canadian provinces have also taken several actions to improve asset management practices at the municipal level by establishing requirements for municipalities in their jurisdiction or by providing funding programs. For example, in 2017, Ontario issued an asset management planning regulation, which requires municipalities to develop a strategic asset management policy by July 1, 2019, and then develop progressively more detailed asset management planning documents in later years. In addition to this regulation, in 2014, Ontario also introduced a funding program for small and rural municipalities to provide long-term, formula and application-based funding for these municipalities to develop and repair their infrastructure. Under the program, municipalities are required to have an asset management plan as a condition of receiving funding. In addition, municipalities can use formula-based program funds for certain asset management activities including purchasing software, staff training, or direct staff activity related to asset management. In 2016, Ontario announced plans to increase the funding available per year from about $75 million to about $150 million in 2019.

Experiences with Implementing Asset Management Frameworks

Selected Federal Asset Management Experiences

Much of the federal government's real property is managed by a federal department known as Public Services and Procurement Canada (PSPC) whose nationwide portfolio includes around 350 owned buildings and an additional 1,200 building leases. PSPC uses a portfolio-wide asset management framework, which begins with developing national portfolio strategies and plans every 5 years. Staff in each of PSPC's five regional offices then use these plans to develop regional and community-based portfolio strategies and plans, which then inform annual management plans for each PSPC asset. To determine how to best allocate funds across its portfolio of assets, PSPC places each of its assets into one of four tiers based on three major criteria: (1) the asset's strategic importance to PSPC's portfolio as measured by criteria such as the asset's location and design, (2) the asset's operating and functional performance such as cost per unit area, and (3) the asset's condition based on a metric called the Liability Condition Index, which measures the risk an asset poses to continuing operations and occupant safety.
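To make the tier-assignment logic concrete, the following minimal Python sketch shows one way scores on the three criteria could be combined into a composite and mapped to four tiers. The scoring scales, weights, and tier cutoffs are hypothetical assumptions for illustration; PSPC's actual scoring methodology is not described in this report.

    # Hypothetical illustration of a PSPC-style four-tier asset classification.
    # The weights and tier cutoffs below are assumptions, not PSPC's methodology.
    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        strategic_importance: float  # 0-100: location and design fit to portfolio
        performance: float           # 0-100: operating/functional performance
        condition_index: float       # 0-100: higher = lower operational risk

    def assign_tier(asset: Asset) -> int:
        """Combine the three criteria into a weighted composite score and map
        it to a tier (1 = strongest performer, 4 = investment/disposal candidate)."""
        composite = (0.4 * asset.strategic_importance
                     + 0.3 * asset.performance
                     + 0.3 * asset.condition_index)
        if composite >= 80:
            return 1  # excellent performance, no major capital need expected
        if composite >= 60:
            return 2
        if composite >= 40:
            return 3
        return 4      # poor performance: needs major investment or disposal

    print(assign_tier(Asset("downtown office", 85, 75, 90)))  # prints 1
    print(assign_tier(Asset("aging depot", 30, 40, 25)))      # prints 4

In practice, PSPC's designations also reflect qualitative judgments about financial and non-financial attributes, so a single composite score is only a simplification.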
Using these criteria, PSPC designates its highest tier assets as those that have excellent financial performance, that have non-financial attributes that support PSPC's objectives, and that are not expected to need major capital investments in the next 5 years. The lowest tier assets have poor performance and are in need of either major investments or disposal in the next 5 to 10 years. PSPC officials told us that they are in the midst of making major changes to their asset management framework, including by moving to a component-based system of accounting where they will treat each asset as 12 components, including 11 for the building such as roofs or heating and air conditioning systems, and 1 for tenant equipment. Additionally, PSPC plans to move to more modern enterprise systems to eliminate paper records and improve the quality of the data they use to make budgeting decisions. Officials said that they consider the ISO 55000 requirements when evaluating their asset management framework, but they also use other best practices from the private sector that they said better suit their needs by providing more detailed information on how to develop and implement the various elements of an asset management framework.

Selected Municipal Asset Management Experiences

Over the past 20 years, several Canadian municipalities have developed detailed asset management frameworks to improve management efficiency and cost-effectiveness as well as to obtain improved levels of service from municipal infrastructure. In the late 1990s, the City of Hamilton, Ontario, began developing an asset management framework for its core municipal infrastructure assets, and in 2001, the city established an office dedicated to asset management within its public works department, which produced its most recent municipal asset management plan for public works in 2014. This plan sets a strategic vision and goals for the asset management program, which are designed to align with the city's overall strategic plan, capital and operating budgets, master plan, and other business documents, and describes how the city's asset management activities will support the objectives laid out in those documents. Additionally, the asset management plan provides an overview of the current state of Hamilton's infrastructure assets in four categories: drinking water supply, wastewater management, storm water management, and roads and bridges. The plan states the total value and condition of the assets in each category and includes an indicator of recent trends in the condition of those assets. The plan also defines the levels of service Hamilton aims to provide in each of the four main asset categories and sets goals for each category such as safety, reliability, regulatory compliance, and customer service. Next, the plan defines an asset management strategy for the city, which includes taking an inventory of assets, measuring asset condition, assessing risk, measuring the performance of the asset management framework, making coordinated citywide decisions, and planning for capital investments. Finally, the document contains a plan for managing each of the four main asset categories over their entire life cycles. Hamilton officials stressed the importance of collecting and using quality data when deciding where and when to allocate resources.
They told us that the data they have collected under their asset management framework have allowed them to make better-informed investment decisions and have provided them with the information necessary to make business cases for investment and to better defend their decisions when they solicit funding from the City Council. For example, officials described how the city assesses the condition of its road network and uses the results to prioritize investment in its assets. To assess the condition of each road, the city uses a 100-point scale where, for example, a score above 60 indicates the road is only in need of preventative maintenance and a score of 20 or less indicates the road is in need of total reconstruction. Officials said that a total reconstruction could cost ten times as much as a minor rehabilitation and that the window of time between when a road needs only a minor rehabilitation and a full reconstruction is only around 10 years. Because of this, Hamilton officials said that it is important to conduct rehabilitation on roads and other infrastructure assets before they deteriorate to the point where they either fail or are in need of a full reconstruction. For example, Hamilton undertook a major re-lining project for a storm sewer that was in danger of complete collapse, as shown in fig. 6. Officials told us this project would preserve storm sewer service at significantly lower cost than waiting for the structure to fail or completely rebuilding it, either of which would have been cost prohibitive. Additionally, Hamilton officials noted that they do not need all of their assets to be at a 100 rating and that their asset management framework directs them to allow some assets to deteriorate to a certain extent while rehabilitating others by making investment decisions on a system-wide service basis, as opposed to an individual project basis.
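The economics Hamilton officials described can be illustrated with a small sketch. In the Python snippet below, the 100-point condition scale, the 60-point and 20-point thresholds, the roughly tenfold cost difference, and the roughly 10-year window come from the officials' description above; the intermediate treatment band, normalized costs, and assumed deterioration rate are hypothetical additions for the example.

    # Illustrative sketch of Hamilton's rehabilitate-early logic; unit costs
    # and the deterioration rate are assumptions made for this example.
    MINOR_REHAB_COST = 1.0        # normalized cost unit
    RECONSTRUCTION_COST = 10.0    # officials cited roughly a tenfold multiple

    def recommended_treatment(score: float) -> str:
        if score > 60:
            return "preventative maintenance"
        if score > 20:
            return "rehabilitation"  # assumed intermediate band
        return "total reconstruction"

    def extra_cost_of_deferral(score: float, loss_per_year: float = 4.0,
                               years: int = 10) -> float:
        """Compare the cost of intervening now with the cost after letting
        the road deteriorate for `years` more years."""
        cost_now = RECONSTRUCTION_COST if score <= 20 else MINOR_REHAB_COST
        future_score = score - loss_per_year * years
        cost_later = RECONSTRUCTION_COST if future_score <= 20 else MINOR_REHAB_COST
        return cost_later - cost_now

    # A road scoring 55 needs only rehabilitation today, but after a decade of
    # deterioration it would drop to 15 and require full reconstruction:
    print(recommended_treatment(55))    # rehabilitation
    print(extra_cost_of_deferral(55))   # 9.0 extra normalized cost units

This is the system-wide trade-off the officials described: spending one cost unit early avoids nine later, so deferring maintenance on many assets at once compounds quickly.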
The City of Calgary, Alberta, began developing its asset management framework in the early 2000s, focusing first on Calgary's municipal water-management assets because they are expensive to maintain and are funded only from water utility customer bills, as opposed to tax revenue. City officials told us that the primary impetus for initially exploring asset management was to be able to maintain levels of service as the city rapidly expanded in both population and physical size; this expansion forced Calgary to make major investments in the water system. Since that time, Calgary has expanded its asset management framework to include nearly all of its assets, including its software, bridges, public recreation facilities, and even its trees. Between 2008 and 2010, Calgary took steps to align its asset management to its business processes, steps that culminated with the development of the city's first citywide asset management policy in 2010. Calgary officials told us that between 2004 and 2008 they worked to align their initial asset management framework with the British Standards Institution Publicly Available Specification 55 (PAS 55). After this experience, officials from Calgary participated in the development of the ISO 55000 standards and provided the Standards Committee information about tactics for asset management such as policy development and business strategy. When the ISO 55000 standards were officially published in 2014, the city began working on aligning their asset management framework with the new standards, a process that led to a new framework including a strategic asset management plan, which city officials published in 2016. Calgary officials said that aligning their asset management framework with the ISO 55000 standards has given them support from the city's top management and has improved their relationship with the various bodies that audit the city's operations because it gives them a common language to use when describing management processes. Calgary officials told us that the ISO 55000 standards are credible, internationally recognized best practices and that in practice they are a good guide for developing an asset management framework. However, Calgary is not planning on certifying its operations to the ISO 55000 standard because officials told us that they are not required to be certified; certification is expensive and needs to be repeated; and they are unsure of what additional value certification to the standards would provide. The City of Ottawa, Ontario, began developing its asset management framework in 2001. Since that time, the city's asset management framework has gone through several versions, the most recent of which it developed beginning in 2012 based on PAS 55. Ottawa officials told us that implementing their asset management framework has allowed them to collect better information about their assets and improve their long-term financial-infrastructure-planning process. While Ottawa officials developed and implemented an asset management framework, they have a number of ongoing initiatives to further develop some areas of the framework. For example, officials said that they consider determining the levels of service to be provided by each asset class the most difficult aspect of asset management, especially for those assets that do not necessarily provide a measurable service. Ottawa officials are working on ways to better measure the services each of their assets provides and the levels of risk that each asset poses to these service levels. Officials said that accurately measuring service and risk levels is critical for their financial planning and will allow them to improve how they prioritize funding and ensure that funds are spent on priority assets. See fig. 7 for an example of an asset officials said was intended to improve levels of service for Ottawa's pedestrian multi-use pathways. Another ongoing initiative is an updated report card for the condition of the city's assets, which officials said they use to transparently communicate to stakeholders the current state of their infrastructure.

Appendix II: Objectives, Scope, and Methodology

This report discusses: (1) key characteristics of an effective asset management framework, and how selected federal agencies' frameworks reflect these characteristics; (2) views of selected asset management experts and practitioners on challenges and benefits to implementing an asset management framework; and (3) whether government-wide asset management guidance and information reflect standards and key characteristics of an effective asset management framework. To obtain information for all three objectives, we reviewed relevant literature, including academic and industry literature on asset management, publications describing asset management leading practices, and the ISO 55000 and related standards. We selected the ISO 55000 standards because they are international consensus standards on asset management practices.
We also reviewed laws governing federal real-property asset management, Office of Management and Budget (OMB) guidance, and prior GAO reports describing agencies' real-property management and efforts to more efficiently manage their real property portfolios. In addition, to address all three objectives, we collected information from and interviewed a judgmental sample of 22 experts to obtain their perspectives on various asset management issues. To identify possible experts to interview, we first worked to identify relevant literature published in the topic area. Specifically, we searched in October 2017 for scholarly and industry trade articles and other publications that examined effective asset management practices. We limited our search to studies and articles published from January 2014 through January 2017. From this search, we screened and identified studies and articles for relevance to our report and selected those that discussed asset management practices and the ISO 55000 standards. In addition, we conducted preliminary interviews with selected asset management practitioners, who included representatives from public and private organizations knowledgeable about asset management practices, to learn about key asset management issues and obtain recommendations about experts in this field. Through these methods, we identified a total of 82 possible candidates to interview. To ensure a diversity of perspectives, we used the following criteria to assess and select a sample from this group: type and depth of an expert's experience, affiliations with asset management trade associations, experience with government asset management practices, relevance of published work to our topic, and recommendations from other entities. We selected a total of 22 experts representing academia, private industries, foreign private and public entities, and entities that have implemented ISO 55000. See table 3 for a list of experts whom we interviewed. Their views on asset management practices are not generalizable to those of all experts; however, we were able to secure the participation of a diverse, highly qualified group of experts and believe their views provide a balanced and informed perspective on the topics discussed. We interviewed the selected 22 experts between January 2018 and February 2018 and used a semi-structured interview format with open-ended questions for those interviews. We identified the topics that each of the experts would be able to respond to, based on the individual's area of expertise, and each responded to questions in the semi-structured interview guide in the areas in which they had specific knowledge. During these interviews, we asked for experts' views on key characteristics of an effective asset management system, opportunities for improving federal agencies' asset management approaches, experiences with using ISO 55000, and their views on the applicability of ISO 55000 to the federal government. After conducting these semi-structured interviews, we conducted a content analysis of the interview data. To conduct this analysis, we organized the responses by interview question, and then one GAO analyst reviewed all of the interview responses to questions and identified recurring themes. Using the identified themes, the analyst then developed categories for coding the interview responses and independently coded the responses for each question.
To ensure the accuracy of our content analysis, a second GAO analyst reviewed the first analyst's coding of the interview responses, and then the two analysts reconciled any discrepancies. To identify key characteristics of an effective asset management framework and how selected federal agencies' frameworks reflect these characteristics, we obtained and analyzed the ISO 55000 standards, which include leading practices, and asset management literature, and we analyzed information collected from our interviews with experts. We synthesized information from these sources to identify six commonly mentioned characteristics. We then selected six bureau-level and independent agencies as case studies and compared these agencies' asset management frameworks to the six key characteristics that we identified. Because the agencies are not required to follow the key characteristics we identified, we did not evaluate the extent to which agencies' efforts met these characteristics. Instead, we provide this information as illustrative examples of how the agencies' asset management practices reflect these characteristics. We used a variety of criteria to select these agencies, such as: whether the agency was among the agencies that had the largest real property portfolio; replacement value and total square footage of the portfolio; extent to which the bureau or independent agency had a notable asset management program as described by recommendations from practitioners we interviewed; and whether the agency was implementing the ISO 55000 standards. In order to ensure that we had a diversity of experiences and expertise from across the federal government, we limited our selection to independent agencies and one bureau-level entity from each cabinet department. Based on these factors, we selected: (1) U.S. Coast Guard (Coast Guard); (2) U.S. Army Corps of Engineers (Corps); (3) General Services Administration (GSA); (4) National Aeronautics and Space Administration (NASA); (5) National Park Service (Park Service); and (6) United States Forest Service (Forest Service). While our case-study agencies are not generalizable to all Chief Financial Officers Act (CFO) agencies, they provide a range of examples of agencies' experiences with implementing asset management practices. We reviewed documents and interviewed officials from each of the six selected agencies to learn about the agency's practices, its experiences with the ISO 55000 standards, and challenges it has faced in conducting asset management. In addition, we analyzed fiscal year 2017 Federal Real Property Profile (FRPP) data, as managed by GSA, to obtain information about each agency's portfolio, such as the number of real property assets and total asset-replacement value, and to obtain examples of the types of buildings and structures owned by the six selected agencies. The Corps and Coast Guard noted small differences between our analysis of the FRPP data and the data from their reporting systems. For example, the Corps reported having 139,744 real property assets as of August 2018 with an estimated asset replacement value of $273.4 billion as of September 2017. In addition, the Coast Guard reported 44,226 real property assets with an estimated asset replacement value of $17.6 billion as of September 2017. To ensure consistency, and because these differences were small, we relied on FRPP data rather than data from these agencies' reporting systems.
We conducted a data reliability assessment of the FRPP data by reviewing documentation, interviewing GSA officials, and verifying data with officials from our selected agencies, and concluded the data were reliable for the purposes of our reporting objectives. We also visited four locations from our case-study agencies to discuss and view examples of how our selected case-study agencies are conducting asset management. Specifically, we visited the Park Service's Santa Monica Mountains National Recreation Area in California; the Coast Guard's Baltimore Shipyard in Curtis Bay, MD; the Corps' Washington Aqueduct in Washington, D.C.; and the Brandon Road Lock and Dam in Joliet, IL. We selected these locations based on several factors including geographic and agency diversity, costs to travel to location, recommendations from officials at our case-study agencies, and the extent to which the location provided illustrative examples of how federal agencies are managing their assets. To determine the 32 experts' and practitioners' views on challenges and benefits to implementing an asset management framework, we analyzed information collected from our interviews with the 22 experts previously mentioned. We also reviewed documents from and interviewed asset management practitioners from 10 additional organizations familiar with asset management practices and the ISO 55000 standards. The 10 organizations included representatives from private industry, one federal agency, and local municipalities in Canada. We selected these additional 10 organizations by reviewing published materials related to asset management and referrals from our preliminary interviews. We interviewed the 32 experts and practitioners about their views on challenges and benefits to conducting asset management, ISO 55000, and illustrative examples of practices in other countries. The information gathered from our interviews with experts and practitioners is not generalizable but is useful in illustrating a range of views on asset management issues. See table 4 for a list of organizations we interviewed. To assess whether government-wide guidance and information on asset management reflect standards and key characteristics of an effective asset management framework, we reviewed current federal guidance and evaluated the extent to which this guidance incorporates practices described in the ISO 55000 standards and the six key characteristics of an effective asset management framework that we identified. Specifically, we reviewed the Federal Real Property Council's (FRPC's) 2004 Guidance for Improved Asset Management; OMB's National Strategy for the Efficient Use of Real Property 2015-2020: Reducing the Federal Portfolio through Improved Space Utilization, Consolidation, and Disposal; and OMB's Implementation of OMB Memorandum M-12-12 Section 3: Reduce the Footprint, Management Procedures Memorandum No. 2015-01. We also reviewed other OMB guidance, such as OMB's 2017 Capital Programming Guide, OMB's Circular A-123, and OMB's Memorandum 18-21. In addition, we reviewed asset management requirements in the Federal Property Management Reform Act of 2016 and in the Federal Assets Sale and Transfer Act of 2016. We interviewed OMB and GSA officials about their role in supporting federal agencies' asset management efforts.
In addition, we obtained information from our interviews with the 32 asset management experts and practitioners about practices that could be applicable to the federal government and opportunities to improve federal agencies' asset management approaches. Lastly, we obtained documents and, as previously discussed, interviewed representatives from private organizations, federal agencies, and local municipalities in Canada—a country with over 20 years of experience in conducting asset management—to learn about their asset management practices, including their use of the ISO 55000 standard. We also conducted a site visit to Canada to learn more about their practices and to view examples of assets in local municipalities. See appendix I for more information on Canada's asset management practices. We conducted this performance audit from August 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Key Elements of the International Organization for Standardization (ISO) 55000 Standards

Appendix IV: Comments from the Department of Agriculture

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, Amelia Shachoy, Assistant Director; Maria Mercado, Analyst-in-Charge; Sarah Arnett; Melissa Bodeau; Leia Dickerson; Alex Fedell; Geoffrey Hamilton; Terence Lam; Malika Rice; Kelly Rubin; and Tasha Straszewski made key contributions to this report.
Why GAO Did This Study

The federal government is the largest real property owner in the United States and spends billions of dollars to operate and maintain these assets, which include buildings, roads, bridges, and utility systems. Federal agencies are responsible for developing asset management policies, processes, and plans. In 2014, the ISO 55000 asset management standards were issued. GAO was asked to examine federal agencies' real property asset management practices and the applicability of ISO 55000. This report discusses: (1) key characteristics of an effective asset management framework and how selected federal agencies' frameworks reflect these characteristics, and (2) whether government-wide asset management guidance and information reflect standards and key characteristics of an effective asset management framework, among other objectives. To conduct this work, GAO reviewed the ISO 55000 standards, relevant studies and literature, and interviewed 22 experts and 10 practitioners. GAO selected six federal agencies as case studies, including agencies with the largest real property portfolio and some agencies that were using the ISO 55000 standards. GAO reviewed documentation and interviewed officials from these six agencies, GSA, and OMB.

What GAO Found

GAO identified six key characteristics of an effective asset management framework (see table 1) that can help federal agencies manage their assets and resources effectively. GAO identified these key characteristics through reviews of the International Organization for Standardization (ISO) 55000 standards—an international consensus standard on asset management—studies and articles on asset management practices, and interviews with experts. GAO reviewed the asset management practices of six federal agencies: the U.S. Coast Guard (Coast Guard); U.S. Army Corps of Engineers (Corps); General Services Administration (GSA); National Park Service (Park Service); National Aeronautics and Space Administration (NASA); and U.S. Forest Service (Forest Service). Each of the six federal-agency frameworks GAO reviewed included some of the key characteristics.

Source: GAO analysis of ISO 55000 standards, asset management literature, and comments from experts. | GAO-19-57

While the Office of Management and Budget (OMB) has issued guidance to inform federal agencies' real property management efforts, the existing guidance does not reflect an effective asset management framework because it does not fully align with ISO 55000 standards and the key characteristics. For example, this guidance does not direct agencies to develop a comprehensive approach to asset management that incorporates strategic planning, capital planning, and operations; maintain leadership support; promote a collaborative organizational culture; or evaluate and improve asset management practices. In addition, the guidance does not reflect information on successful agency asset management practices, information that officials from three of the six agencies GAO spoke with said would be helpful to them. OMB staff said that they did not plan to update existing government-wide guidance because OMB's real property management focus has shifted to the Reduce the Footprint initiative, which emphasizes efficiently managing and using buildings and warehouse space, rather than all assets. Without a more comprehensive approach, as described above, federal agencies may not have the knowledge needed to maximize the value of their limited resources.
What GAO Recommends OMB should take steps to improve information on asset management to reflect leading practices. OMB had no comments on this recommendation.
Background

Federal Grant Programs

The federal government uses grants to address national priorities—such as substance use prevention, treatment, and recovery—through nonfederal parties, including state and local governments, federally recognized tribes, educational institutions, and nonprofit organizations. While there is variation among different grant program goals and grant types, most federal grants follow a common life cycle that includes an award, implementation, and closeout stage for administering the grants. During the award stage, the federal awarding agency enters into an agreement with the grantee stipulating the terms and conditions for the use of grant funds, including the period that funds are available for the grantee's use. During the implementation stage, the grantee carries out the requirements of the agreement and requests payments, while the awarding agency monitors the grantee and approves or denies payments. The grantee and the awarding agency close the grant once the grantee has completed all the work associated with a grant agreement, the grant period of performance end date (or grant expiration date) has arrived, or both. Federal grant programs may fund various types of grants, including discretionary grants, formula grants, and cooperative agreements. Discretionary grants are generally awarded on a competitive basis for specified projects that meet eligibility and program requirements. Formula grants are noncompetitive awards based on a predetermined formula, typically established in statute, and are provided to eligible applicants that meet specified criteria outlined by statute or regulation, such as a state. A cooperative agreement is a type of federal financial assistance similar to a grant, except the federal government is more substantially involved with the implementation.

Substance Use Prevention, Treatment, and Recovery Services

Substance use prevention programs and services (which we refer to collectively as "prevention services" in this report) are designed to prevent or delay the early use of substances and stop the progression from use to problematic use or to a substance use disorder. Prevention services generally focus on reducing a variety of risk factors and promoting a broad range of protective factors through various activities that include, for example, setting policies that reduce the availability of substances in a community, teaching adolescents how to resist negative social influences, and communicating the harms of substances such as the nonmedical use of prescription opioids and marijuana through media campaigns. In addition, prevention services can be targeted at all members of a given population without regard for risk factors, such as all adolescents, or to particular subgroups of individuals or families, such as those who are at increased risk of substance use due to their exposure to risk factors. Targeted audiences for such services may include families living in poverty or children of substance-using parents. When substance use progresses to a point that it is clinically diagnosed as causing significant impairments in health and social functioning, it is characterized as a substance use disorder. Treatment services for substance use disorders are designed to enable an individual to reduce or discontinue substance use and to address health problems, and typically include behavioral therapy.
Behavioral therapies use various techniques to modify an individual's behaviors and improve coping skills, such as incentives and reinforcements to reward individuals who reduce their substance use. For opioid use disorders, treatment may involve combining behavioral therapy with medications—an approach commonly referred to as medication-assisted treatment. Some of these treatment services may be paid for by private insurers, public health coverage programs, nonprofit organizations, or consumers (out-of-pocket), but federal grant programs and various state and local programs also provide funding for these services. Substance use recovery services are designed to help engage and support individuals with substance use disorders in treatment and provide ongoing support after treatment. There are a variety of recovery services such as peer recovery coaching, which involves the use of coaches—peers who identify as being in recovery and use their knowledge and experience to inform their work—to help individuals who are transitioning out of treatment to connect with community services and address barriers that may hinder the recovery process. Other examples include recovery housing, which provides a substance-free environment and support from fellow recovering residents, and recovery high schools, which help students recovering from substance use disorders focus on academic learning. Some recovery services may be paid for through various sources, including Medicaid programs in certain states, some private insurers, and federal grant programs. In addition, some recovery services may be offered by member-led, voluntary associations that charge no fees, such as 12-step groups.

Three Federal Agencies Operated 12 Grant Programs That Funded Services Specifically Targeting Adolescents and Young Adults in Fiscal Year 2017

Eight of the 12 Federal Grant Programs for Adolescents and Young Adults Funded Substance Use Prevention Services

We identified 12 federal grant programs within three of the four agencies in our review that funded substance use prevention, treatment, and recovery services in fiscal year 2017 and targeted adolescents' and young adults' use of illicit substances. Eight of these programs focused on prevention, and all 8 remain active in fiscal year 2018. The 8 grant programs have varying purposes and were administered by two entities within HHS—SAMHSA or IHS—or by ONDCP. For example, the Drug-Free Communities Support Program is funded and directed by ONDCP to support community coalitions in preventing and reducing substance abuse among youth aged 18 and younger. As another example, the Strategic Prevention Framework for Prescription Drugs program, administered by SAMHSA, is designed to raise awareness about the dangers of sharing prescription medications such as opioids, and to promote collaboration between states and pharmaceutical and medical communities to understand the risks of overprescribing to youth (aged 12 to 17) and adults (aged 18 and older). In addition, this program is intended to provide prevention activities and education to schools, communities, and parents. In total, the 8 grant programs targeting the prevention of substance use among adolescents and young adults had 1,146 active grantees in fiscal year 2017. The Drug-Free Communities Support Program had the largest number of active grantees—713 community coalitions—and the other 7 programs had a combined total of 434 that included states and federally recognized tribes.
The total number of active grantees in fiscal year 2017 includes those that received a single- or multi-year award in fiscal year 2017, as well as those that received a multi-year award in fiscal year 2016 for a project that was ongoing in fiscal year 2017. Grantees were awarded a total of about $266 million in fiscal year 2017, with SAMHSA's Strategic Prevention Framework-Partnerships for Success program providing the largest amount of funding (about $95 million). (See table 1.) All 8 prevention grant programs had ongoing or planned evaluations to assess the effectiveness of their grantees in accomplishing a variety of program goals, according to agency officials. For example, ONDCP is overseeing the ongoing evaluation of the Drug-Free Communities Support Program through semi-annual progress reports and through the collection of data, such as data on past 30-day substance use, from coalitions that received awards. A recent evaluation of this program found that coalitions included about 19,000 community members who were targeting prevention services to about 20 percent of the population in the United States (including 2.5 million middle school and 3.5 million high school youth) in fiscal year 2015. In addition, this evaluation found that middle and high school youth in communities with a coalition reported a significant decrease in the past 30-day use of marijuana, prescription drugs, alcohol, and tobacco, from 2002 to 2016. At the same time, however, perceptions of the risk of marijuana use decreased significantly among high school youth in communities with community coalitions, according to the evaluation. As another example, IHS's planned evaluation of the Methamphetamine and Suicide Prevention Initiative-Generation Indigenous grant program will focus on measures such as the types of services that grantees implemented to prevent methamphetamine use and promote positive development among American Indian and Alaska Native youth, according to agency officials. For the other 6 prevention grant programs, planned evaluations will examine the extent to which reductions in substance use are observed over time among the grantees' targeted adolescents or young adults.

Four of the 12 Federal Grant Programs for Adolescents and Young Adults Funded Substance Use Treatment and Recovery Services

Of the 12 federal grant programs targeting adolescents' and young adults' use of illicit substances, we identified 4 that focused on the provision of substance use treatment and recovery services and had active grantees in fiscal year 2017. Two of the 4 programs ended at the close of fiscal year 2017 and the other 2 remained active in fiscal year 2018. The 4 programs had different purposes and were administered by OJJDP or SAMHSA, within DOJ and HHS, respectively. For example, the Cooperative Agreements for Adolescent and Transitional Aged Youth Treatment Implementation, administered by SAMHSA, is still active, and intends to increase the capacity of states to provide treatment and recovery services to adolescents (aged 12 to 18) and transitional-aged youth (aged 16 to 25) that have substance use disorders or co-occurring substance use disorders and mental disorders. This program aims to increase states' capacity by increasing the number of qualified treatment providers.
The other 3 grant programs were designed to improve different aspects of the existing juvenile drug treatment courts, which DOJ defines as a court calendar or docket that provides specialized treatment and services for youth with substance use or co-occurring mental health disorders. As an example, the Fiscal Year 2017 Juvenile Drug Treatment Court Program, which is still active and administered by OJJDP, aims to deliver services that are consistent with DOJ’s Juvenile Drug Treatment Court Guidelines—a set of best practices for effective juvenile drug treatment courts. In total, the 4 grant programs that targeted substance use treatment and recovery services among adolescents and young adults had 57 active grantees in fiscal year 2017. SAMHSA’s Cooperative Agreements for Adolescent and Transitional Aged Youth Treatment Implementation had the largest number of active grantees (36), which included state substance abuse agencies and federally recognized tribes. The three juvenile drug treatment court programs had a total of 21 active grantees that included, for example, county juvenile drug treatment courts and a state judicial department. The total number of active grantees in fiscal year 2017 included those that received a single- or multi-year award in fiscal year 2017 as well as active grantees that received multi-year awards in prior years. In total, active grantees from 2 of the 4 programs were awarded about $23 million in fiscal year 2017. (See table 2.) Two of the 4 treatment and recovery grant programs had ongoing or planned evaluations to assess the effectiveness of their grantees in accomplishing a variety of program goals, according to agency officials. SAMHSA officials told us that its ongoing evaluation of the Cooperative Agreements for Adolescent and Transitional Aged Youth Treatment Implementation is assessing the types of treatment services provided to adolescents and young adults as well as the extent to which they abstained from substance use. Officials added that the evaluation is examining grantees’ efforts to expand the qualified workforce of treatment providers for adolescents and young adults. A recent evaluation that was completed for this program found that most grantees provided training to treatment providers on evidence-based treatment services and other topics, and about one-third of grantees identified additional training needs such as training on co-occurring disorders and trauma-informed services. This evaluation also found a decrease in substance use among adolescents and young adults who received treatment services after 6 months and that enhanced provider training was associated with this decrease. OJJDP’s Fiscal Year 2017 Juvenile Drug Treatment Court Program includes a planned evaluation of the impact of the DOJ juvenile drug treatment court guidelines on participant outcomes. That is, OJJDP plans to compare the outcomes of participants in courts aligned with the guidelines to participants in other court programs that will serve as “comparison courts.” OJJDP officials told us that the evaluation plans to assess youth outcomes such as recidivism in substance use, quality of relationships with parents and peers, and mental wellbeing. 
OJJDP officials stated that while they are not evaluating their fiscal year 2015 and 2014 juvenile drug treatment court grant programs, grantees must report on various performance measures related to substance use to assist DOJ with fulfilling its responsibilities under the Government Performance and Results Act of 1993 and the GPRA Modernization Act of 2010. For example, grantees must report on a semiannual basis the number of drug and alcohol tests performed on juveniles and the number of positive tests recorded.

Other Federal Grant Programs Fund Prevention, Treatment, and Recovery Services, but Do Not Specifically Target Adolescents and Young Adults

Other federal grant programs beyond the 12 we identified provide funds for substance use prevention, treatment, and recovery services across age groups but do not specifically target adolescents and young adults. The Substance Abuse Prevention and Treatment Block Grant is the largest of such grant programs that fund prevention, treatment, and recovery services across age groups. SAMHSA, which administers this grant, awarded a total of $1.8 billion in fiscal year 2017 to grantees which included states, the District of Columbia, territories, and one federally recognized tribe. The amount of awards that states receive is based on a formula that takes into account a grantee's:

- population at risk of substance abuse;
- relative costs of providing prevention and treatment services; and
- relative ability to pay for prevention and treatment services.

States have some flexibility in determining how to use their Substance Abuse Prevention and Treatment Block Grant funds, and our analysis shows variation in the extent to which grantees used these funds to provide prevention, treatment, and recovery services to adolescents and young adults in 2014, the most recent year for which data were available. For prevention services that target individuals, such as those delivered to middle school students in the classroom, the percentage of persons served that grantees could identify as being adolescents and young adults ranged from 0.1 percent (Oklahoma) to 100 percent (American Samoa and United States Virgin Islands). However, most of the grantees reported percentages that fell in the range of 23 to 61 percent. For prevention services that target populations rather than individuals, such as media campaigns, grantees similarly reported that the percentage of adolescents and young adults served ranged from 0.1 percent (Indiana) to 100 percent (United States Virgin Islands). However, most of the grantees reported percentages that fell in the range of 18 to 46 percent. For treatment and recovery services, grantees reported that the percentage of all persons served who were adolescents and young adults ranged from 8 percent (District of Columbia) to 100 percent (Red Lake Band of Chippewa Indians). However, most of the grantees reported percentages that fell in the range of 17 to 26 percent. (See app. I for the percentages of persons served that were adolescents and young adults, by grantee.)
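As a rough illustration of how a formula grant might combine the three factors listed above, the Python sketch below allocates a funding pool in proportion to a weighted index of each grantee's factors. The weights, factor values, and index construction are hypothetical assumptions for illustration only; the actual block grant formula is set by statute and is more complex.

    # Hypothetical illustration of a three-factor formula allocation; the
    # weights and state data below are invented, not the statutory formula.
    def allocate(total_funds, states):
        """Give each state a share proportional to a weighted index of its
        population at risk, relative service costs, and (inverse) relative
        ability to pay, each expressed as an index relative to 1.0."""
        def index(s):
            return (0.6 * s["population_at_risk"]
                    + 0.2 * s["relative_cost"]
                    + 0.2 / s["ability_to_pay"])  # lower ability -> larger share
        total_index = sum(index(s) for s in states.values())
        return {name: total_funds * index(s) / total_index
                for name, s in states.items()}

    states = {
        "State A": {"population_at_risk": 1.2, "relative_cost": 1.1, "ability_to_pay": 0.9},
        "State B": {"population_at_risk": 0.8, "relative_cost": 0.9, "ability_to_pay": 1.2},
    }
    for name, amount in allocate(100.0, states).items():
        print(f"{name}: {amount:.1f}")  # State A: 58.4, State B: 41.6

The design point is simply that a state's share is driven by relative need and cost, not a fixed dollar amount, so shares shift as the underlying factors change.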
In addition to the Substance Abuse Prevention and Treatment Block Grant, other federal grant programs provide funds for prevention, treatment, and recovery services across age groups, but do not specifically target adolescents and young adults. For example, the State Targeted Response to the Opioid Crisis grant program, administered by SAMHSA, aims to help states and others reduce the number of opioid overdose related deaths by providing funds for prevention, treatment, and recovery services for opioid use disorders. In fiscal year 2017, SAMHSA awarded about $485 million in grants to 50 states, the District of Columbia, and 6 territories through this program. As another example, the Targeted Capacity Expansion: Medication Assisted Treatment – Prescription Drug and Opioid Addiction grant program, also administered by SAMHSA, provides funding to states to expand access to medication-assisted treatment services as well as recovery services among individuals with opioid use disorders. In fiscal year 2017, SAMHSA awarded $31 million in additional grants to 6 states through this program.

NIDA Had 186 Active Grant-Funded Research Projects Focused on Substance Use Prevention, Treatment, and Recovery among Adolescents and Young Adults in 2017

Most of NIDA's 186 Active Grant-Funded Research Projects for Adolescents and Young Adults in 2017 Focused on Substance Use Prevention

Our analysis found that HHS's NIDA had 186 active grant-funded research projects focused on illicit substance use prevention, treatment, or recovery among adolescents and young adults in October and November 2017, and most of these projects addressed substance use prevention. Specifically, 126 research projects, or about 68 percent of NIDA's ongoing research projects for this population, involved research related to preventing the use of illicit substances, such as the use of marijuana or nonmedical use of opioids and other prescription drugs. The remaining 60 projects, or about 32 percent, involved research related to treatment for or recovery from the use of illicit substances among adolescents and young adults, or a combination of categories (e.g., substance use prevention, treatment, and recovery). Among the categories of research projects, the fewest involved research exclusively about recovery (4 out of 186 projects, or about 2 percent), as shown in table 3. Our analysis also found that about 12 percent of the ongoing projects (22 of 186) involved the use of brain imaging in research on prevention, treatment, or recovery. In total, of the 186 research projects that were active in October and November 2017, 135 received $61.3 million in grants from NIDA in fiscal year 2017. NIDA did not provide awards in fiscal year 2017 for the remaining 51 projects that were active in October and November 2017. The following examples illustrate the types of research activities funded by the prevention, treatment, and recovery grants identified in our review:

Prevention research projects. One research project involved testing whether a parenting intervention is associated with lower substance use and other high-risk behaviors among adolescents in the long term, including how such outcomes relate to genetic risk factors. The project's participants included 731 adolescents to be assessed over multiple years. The project planned to collect DNA; observations of family interaction; parent, youth, and teacher reports regarding adolescents' conduct; and assessments of their peer environments.

Treatment research projects. One research project involved testing the effectiveness of the use of the medication naltrexone (extended release), compared to the use of buprenorphine, in treating adolescents and young adults with opioid use disorders.
The project’s participants included 340 adolescents and young adults and the project planned to provide counseling to the participants during the course of the study. The project planned to assess a variety of outcomes after 3 and 6 months, including the number of days participants were in treatment, participants’ use of opioids as well as other drug and alcohol use, and the cost- effectiveness of the treatment. Recovery research projects. One research project involved testing the effectiveness of a smartphone application to deliver recovery services to adolescents after they received treatment for a substance use disorder, compared to a control group of adolescents that received recovery services via traditional methods. Examples of recovery services delivered with a smartphone application include participating in online recovery group discussions and receiving motivational messages. The project’s participants included 400 adolescents to be assessed over a 9-month period. The project planned to collect a variety of information, such as how frequently participants used the smartphone application, how long they abstained from substance use, and their quality of life. In Fiscal Year 2017, NIDA and Nine Other HHS Entities Funded a Large Study Examining the Effects of Substance Use on Adolescent Brain Development In fiscal year 2017, NIDA and nine other entities within HHS provided grant funding for a large study—the Adolescent Brain Cognitive Development study—designed to examine the effects of substance use and other factors on development of the adolescent brain. This study was established as a result of the collaboration of several federal agencies that determined such a study was needed because of gaps in knowledge about how substance use and other factors affect brain development. This study is a longitudinal study that plans to collect data from a sample of about 11,000 children across the country for 10 years, beginning when they are 9 or 10 years old. Twenty-one research sites across the country were selected to collect information from children about their brain development, genetics, substance use, mental health, physical health, environment, and other measures. In addition, this study is funding a data analysis and informatics center to develop the procedures for data collection, create and maintain a common database pooling data from all of the research sites, and conduct data analysis. According to NIDA officials, data from the Adolescent Brain Cognitive Development study will be made available to researchers for future use through a data archive. In fiscal year 2017, 15 federal grants provided funding for this study, of which NIDA contributed $18.1 million. Stakeholders Identified Gaps in Services and Research for Adolescents and Young Adults, and Ongoing Federal Efforts Aim to Address Gaps Stakeholders Identified Gaps in Services for Adolescents and Young Adults, and Federal Agencies Have Ongoing Efforts to Address Them Stakeholders that we interviewed identified various gaps in services, and among the most frequently cited were a lack of available recovery services and treatment providers for adolescents and young adults with substance use disorders. They also identified gaps in substance use prevention services such as a lack of prevention services tailored for certain subgroups within these ages. In general, officials from the agencies in our review agreed that these gaps exist, and described actions the agencies are taking that may help address them. 
Stakeholders Identified Gaps in Research, Such as for Adolescent-Specific Substance Use Treatment Services, and in Recovery Services for both Adolescents and Young Adults Stakeholders that we interviewed commonly identified gaps in research concerning adolescent-specific substance use treatment approaches, as well as in recovery services for both adolescents and young adults. They also identified other gaps, such as a lack of knowledge about how to effectively communicate to adolescents and young adults the harms of substance use. Officials from HHS's NIDA agreed that such gaps in research exist. Gaps in substance use research related to adolescents and young adults. Stakeholders commonly identified the following gaps in research: Substance use disorder treatment with adolescents. Four of the stakeholders we interviewed identified gaps in adolescent-specific substance use disorder treatment research. Officials from one research organization said that it can be challenging to recruit a sufficient number of adolescents with a substance use disorder to participate in research studies focused on substance use treatment, both because fewer adolescents have such disorders compared to adults, and because adolescents—or potentially their parents—may be in denial about the need for treatment. These officials further stated that having too few funding announcements that focus on adolescent-specific research contributes to the gaps in research in this area, because it is easier for researchers to simply work with adults when announcements do not specify an age group of interest. An official from another research organization said there is also a gap in knowledge about how to deliver treatment services to adolescents in ways that are developmentally appropriate. The official stated that adolescents who receive treatment services generally are less likely to complete substance use disorder treatment and, as a result, additional research is needed to identify how to engage and retain adolescents in a developmentally appropriate way. The official explained that adolescents often do not believe they need treatment and are not certain they want to stop using substances. Recovery services. Three of the stakeholders we interviewed identified gaps in recovery service research for adolescents and young adults. Officials from one advocacy and education organization said there has been little research conducted to determine the types of recovery services that are most effective for adolescents in preventing relapse. Officials from one research organization said that it would be beneficial to develop a variety of recovery services, since services are likely to vary in effectiveness for different groups of adolescents and young adults. Translating research into practice. Three of the stakeholders we interviewed identified gaps in knowledge about how to translate evidence-based services from research into sustainable, real-world practices. For example, an official from one research organization explained that translating evidence-based treatment services from research into real-world settings can be difficult for a variety of reasons, such as when grant-funded services have components that are impractical to implement or are not reimbursable.
The official said one example of such an impractical component would be having an expert observer periodically rate the fidelity of providers' implementation of the service—a component that makes sense when testing the efficacy of the service under the grant, but which can be disruptive to workflow and may not be reimbursable by insurers once the grant ends. Officials from another research organization similarly commented that more research is needed to identify which components of services make them effective. Communicating harms of substance use. Officials from two of the three research organizations identified a gap in knowledge about how to effectively communicate the harms of substance use to adolescents and young adults. They stated that it is particularly difficult to effectively communicate the harms of cannabis to adolescents and young adults. One official explained that societal changes in attitudes towards cannabis have made it more difficult to convince adolescents both of its harms and of the need for treatment when its use develops into a substance use disorder. Federal response to gaps in research. Officials from NIDA agreed that these gaps in research exist and explained that, while additional research is needed to address them, the process by which NIDA funds research through grants ultimately relies on researchers to submit proposals for consideration. While NIDA officials stated that researchers can submit proposals for research projects addressing adolescent or young adult substance use prevention, treatment, or recovery under general funding announcements for grants, NIDA also had eight funding announcements (as of May 2018) that either focused on these age groups or included them as a population of interest, three of which were new as of fiscal year 2018. Agency Comments We provided a draft of this report to HHS, DOJ, ONDCP, and Education for comment. HHS, DOJ, and ONDCP provided technical comments, which we incorporated as appropriate. Education did not have comments on our draft. We are sending copies of this report to the appropriate congressional committees; the Secretaries of the Departments of Health and Human Services, Justice, and Education; the Director of the Office of National Drug Control Policy; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: The Use of Substance Abuse Prevention and Treatment Block Grant Funds for Adolescents and Young Adults Table 4 shows the percentage of persons who were provided services with Substance Abuse Prevention and Treatment Block Grant funds in 2014, and who were also identified by grantees as being adolescents or young adults. Percentages are listed for two broad types of substance use prevention services (individual and population-based), as well as for substance use disorder treatment and recovery services. Substance Abuse Prevention and Treatment Block Grant grantees include states, territories, and one federally recognized tribe.
Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Gerardine Brennan, Assistant Director; Pamela Dooley, Analyst-in-Charge; Spencer Barr; and Brandon Nakawaki made key contributions to this report. Also contributing were Kaitlin Farquharson, Derry Henrick, and Laurie Pachter.
Why GAO Did This Study According to the Surgeon General, adolescence and young adulthood are critical at-risk periods for illicit substance use, and such use can harm the developing brain. Congress included a provision in law for GAO to review how federal agencies, through grants, are addressing substance use prevention, treatment, and recovery among adolescents and young adults. Related to prevention, treatment, and recovery targeting adolescents (aged 12 to 17) and young adults (aged 18 to 25), this report describes (1) grant programs to provide services; (2) NIDA grant-funded research; and (3) gaps stakeholders identified in related services or research. GAO selected four agencies to review—HHS, ONDCP, DOJ, and Education—the key agencies that fund grant programs for services for adolescents and young adults. GAO analyzed documents on grant programs and on research funded by NIDA. GAO interviewed officials from the four agencies and 20 stakeholder groups (including advocacy and education organizations and research organizations, as well as a non-generalizable selection of state substance abuse, education, and judicial agencies in four states) about gaps in services or research and agency efforts to help address them. States were selected for variation in geography and overdose rates. HHS, DOJ, and ONDCP provided technical comments on a draft of this report, which GAO incorporated as appropriate. What GAO Found GAO identified 12 federal grant programs within three federal agencies that funded substance use prevention, treatment, and recovery services in fiscal year 2017 and targeted adolescents' and young adults' use of illicit substances such as marijuana and nonmedical use of prescription opioids. The three agencies included the Department of Health and Human Services (HHS), the Office of National Drug Control Policy (ONDCP), and the Department of Justice (DOJ). While the Department of Education (Education) has grant programs that can fund prevention services for adolescents, those programs do not specifically target such services. Eight programs targeted substance use prevention. In total, they had 1,146 active grantees in fiscal year 2017 and provided about $266 million in awards that year. Four programs targeted treatment and recovery services. In total, they had 57 active grantees in fiscal year 2017. Two of the four grant programs awarded about $23 million in funding in that year (the other two awarded funding in prior years). In addition, other grant programs beyond these 12 also fund substance use prevention, treatment, and recovery services across age groups, but are not specifically targeted to adolescents and young adults. HHS's National Institute on Drug Abuse (NIDA)—the agency that is the primary funder of research on illicit substance use—also had 186 active grant-funded research projects focused on substance use prevention, treatment, and recovery among adolescents and young adults as of October and November 2017. Most of these research projects—126—were examining prevention, 45 were examining treatment, 4 were examining recovery, and 11 were examining a combination of research categories. In total, these 186 research projects received about $61 million from NIDA in fiscal year 2017. Most of the 20 stakeholders GAO interviewed identified gaps in services for adolescents and young adults, including insufficient access to recovery services and a shortage of treatment providers, and described financial and other reasons that likely contribute to these gaps.
Federal agency officials GAO interviewed agreed that these gaps exist, and described grant programs and other efforts to help address them, such as a grant program that HHS established in 2018 to expand recovery services for these age groups. Stakeholders also identified gaps in research, such as too few treatment studies with adolescent participants, and described reasons for these gaps, including too few federal grants focused on adolescent research. NIDA officials agreed that these gaps exist, and stated that NIDA had eight grant opportunities (as of May 2018) that focused on these age groups or included them as a population of interest, three of which were new in 2018.
Background The EFMP provides support to families with special needs at their current and proposed locations. Servicemembers relocate frequently, generally moving every 3 years in the Army, Marine Corps, and Navy, and every 4 years in the Air Force. In fiscal year 2016, the Military Services relocated approximately 39,000 servicemembers enrolled in the EFMP to continental United States (CONUS) installations. To implement DOD's policy on support for families with special needs, DOD requires each Service to establish its own EFMP for active duty servicemembers. EFMPs are to have three components—identification and enrollment, assignment coordination, and family support. Identification and enrollment: Medical and educational personnel at each installation are responsible for identifying eligible family members with special medical or educational needs to enroll in the EFMP. Once identified by a qualified medical provider, active duty servicemembers are required to enroll in their Service's EFMP. Servicemembers are also required to self-identify when they learn a family member has a qualifying condition. Assignment coordination: Before finalizing a servicemember's assignment to a new location, DOD requires each Military Service to consider any family member's special needs during this process, including the availability of required medical and special educational services at a new location. Family support: DOD requires each Military Service's EFMP to include a family support component through which it helps families with special needs identify and gain access to programs and services at their current, as well as proposed, locations. Servicemembers assigned to a joint base would receive family support from the Service that is responsible for leading that installation. For example, an Airman assigned to a joint base where the Army is the lead would receive family support from the Army installation's EFMP. As required by the NDAA for Fiscal Year 2010, DOD established the Office of Community Support for Military Families with Special Needs (Office of Special Needs or OSN) to develop, implement, and oversee a policy to support these families. Among other things, this policy must (1) address assignment coordination and family support services for families with special needs; (2) incorporate requirements for resources and staffing to ensure appropriate numbers of case managers are available to develop and maintain services plans that support these families; and (3) include requirements regarding the development and continuous updating of a services plan for each military family with special needs. OSN is also responsible for collaborating with the Services to standardize EFMP components as appropriate and for monitoring the Services' EFMPs. According to DOD officials, the Under Secretary of Defense for Personnel and Readiness, through the Assistant Secretary of Defense for Manpower and Reserve Affairs, has delegated to OSN the responsibility for implementing DOD's policy for families with special needs. Currently, OSN is administered under the direction of the Deputy Assistant Secretary of Defense for Military Community and Family Policy through the Office of Military Family Readiness Policy. In addition, each Military Service has designated a program manager for its EFMP who is also responsible for working with OSN to implement its EFMP (see fig. 1).
DOD’s guidance for the EFMP (1) identifies procedures for assignment coordination and family support services; (2) designates the Assistant Secretary of Defense for Manpower and Reserve Affairs as being responsible for monitoring overall EFMP effectiveness; (3) assigns the OSN oversight responsibility for the EFMP, including data review and monitoring; and (4) directs each Service to develop guidance for overseeing compliance with DOD requirements for their EFMP. Table 1 provides an overview of the procedures each Service must establish for the assignment coordination and family support components of the EFMP. As a part of its guidance for monitoring military family readiness programs, DOD also requires each Military Service to certify or accredit its family readiness services, including family support services provided through the EFMP. In addition, DOD states that each Service must balance the need for overarching consistency across EFMPs with the need for each Service to provide family support that is consistent with their specific mission. To accomplish this, each Service is required to jointly work with DOD to develop a performance strategy, which is a plan that assesses the elements of cost, quality, effectiveness, utilization, accessibility, and customer satisfaction for family readiness services. In addition, each Military Service is required to evaluate their family readiness services using performance goals that are linked to valid and reliable measures such as customer satisfaction and cost. DOD also requires each Service to use the results of these evaluations to inform their assessments of the effectiveness of their family readiness services for families with special needs. Key Aspects of Support for Families with Special Needs Vary Widely Across the Services, Leading to Potential Gaps in Assistance for Families with Special Needs According to DOD officials, each Military Service provides family support services in accordance with DOD guidance as well as Service-specific guidance. However, we found wide variation in each Service’s requirements for family support personnel as well as the practices and expectations of EFMP staff. As a result the type, amount, and frequency of assistance enrolled families receive varies from Service to Service and when a servicemember from one Service is assigned to a joint base led by another Service (see table 2). For example, in terms of a minimum level of contact for families with special needs enrolled in the EFMP, the Services vary in the frequency with which they require family support providers to contact families with special needs: The Marine Corps specifies a frequency (quarterly) with which families with special needs should be contacted by their family support providers. The Air Force has each installation obtain a roster of families with special needs enrolled in the EFMP on a monthly basis, but it does not require family support providers to, for example, use this information to regularly contact these families. The Navy assigns one of three service levels to each family member enrolled in the EFMP. These service levels are based on the needs of each family with special needs; family support providers are responsible for assigning a “service level” that directs the frequency with which the family must be contacted. The Army has no requirements for how often families with special needs should be contacted. 
The Services also vary as to whether they offer legal assistance to families with special needs, as follows: The Marine Corps employs two attorneys who can represent families with special needs who fail to receive special education services from local school districts, as specified in their children's individualized education programs (IEP). They can also advise EFMP-enrolled families on their rights and options if a family believes their child needs special education services from a local school district (e.g., an IEP). The Air Force, Army, and Navy choose not to employ special education attorneys. Officials with whom we spoke said families with special needs in these Services can receive other types of assistance that may help them resolve special education legal issues. For example, Air Force officials said servicemembers and their families can receive support from attorneys that provide general legal assistance on an installation, Army officials said installation EFMP managers can refer families with special needs to other organizations that provide legal support, and Navy officials said families can find support through working with their installation's School Liaison Officers. Services Plans The NDAA for Fiscal Year 2010 requires DOD's policy to include requirements regarding the development and continuous updating of a services plan (SP) for each family with special needs, and DOD has specifically required these plans as part of the provision of family support services. These plans describe the necessary services and support for a family with special needs and document and track progress toward meeting related goals. According to DOD guidance, these plans should also document the support provided to the family, including case notes. In addition, the DOD reference guide for family support providers emphasizes that timely, up-to-date documentation is especially important each time a family relocates, as military families regularly do. Therefore, SPs are an important part of providing family support during the relocation process, and they provide a record for the gaining installation. Requiring timely and up-to-date documentation is consistent with federal internal control standards, which state that agencies should periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving their objectives. SPs follow families with special needs each time they relocate, and without timely and up-to-date documentation, DOD cannot ensure that all families continue to receive required medical and/or special educational services once they relocate to another installation. For every Service, the number of SPs was small relative to the number of servicemembers (known as sponsors) and the number of family members enrolled in the EFMP (see table 3). The Services and OSN provided a range of reasons why the Services do not develop and maintain a SP for each family with special needs. For example, Air Force officials said their family support providers consider the needs of each family with special needs before determining whether a SP will help them receive the required services. In addition, Army and Marine Corps officials said they may not develop these plans if families do not request them. Further, according to a Navy official, some families lack the required SPs because installations may not have the staff needed to develop them—even though DOD requires the Services to maintain sufficient staff and certify their EFMPs.
OSN officials with whom we spoke also said that the Services may not have developed many SPs during fiscal year 2016 because DOD had not yet approved a standardized form that could be used to meet this requirement. Finally, OSN officials also said that each family with special needs enrolled in the EFMP may not need a SP because their condition does not require this type of family support. Resources To meet requirements of the NDAA for Fiscal Year 2010, in April 2017, DOD issued to the Services guidance that directed them to "program, budget, and allocate sufficient funds and other resources, including staffing," to meet DOD's policy objectives for the EFMP. According to OSN officials, DOD relies on each Service to determine what level of funds and resources is sufficient and what constitutes an appropriate number of family support personnel. To determine family support provider and related personnel staffing levels, the Service officials with whom we spoke said they consider a number of factors, including the number of families with special needs enrolled in the EFMP at any given installation (see app. II for more information about the EFMP data by installation). See table 4 for a summary of EFMP family support providers and other key personnel at CONUS installations. As required by DOD, all of the Services employ family support providers to assist families with special needs. In addition, some Services employ additional personnel to support implementation of the EFMP. For example, the Air Force employs family support coordinators to administer its EFMP, and no other personnel are dedicated to assisting these coordinators or enrolled families. The Army employs "system navigators" who provide individualized support to families with special needs at selected installations through its EFMP, as well as other personnel to administer the EFMP. The Marine Corps employs case workers at most of its CONUS installations to administer individualized support to families with special needs. In addition, the Marine Corps employs program managers, administrative assistants, as well as training and education outreach specialists. The Navy contracts regional case liaisons and case liaisons at selected CONUS installations to administer individualized support to families with special needs. In addition, the Navy employs collateral duty case liaisons who assist with the delivery of family support services at all other CONUS installations. Senior OSN officials said they rely on each Service to determine the extent to which its EFMP complies with DOD's policy for families with special needs because they consider OSN to be a policy-making organization that is not primarily responsible for assessing compliance. In addition, these officials said the Services need flexibility to implement DOD's policy for families with special needs because they each have unique needs and the number of enrolled families in the EFMP is constantly changing. However, DOD has not developed a standard for determining the sufficiency of funding and resources each Service allocates for family support. Air Force officials at one of the installations we visited said the Air Force has identified as an issue the lack of staff and funding to provide individualized support to most families with special needs. In addition, officials from the Army and Navy said they have not received any guidance from OSN officials about their Service-specific guidance, including requirements for resources and services plans.
Further, the Services may not know the extent to which their Service-specific guidance complies with DOD's policy for families with special needs. The NDAA for Fiscal Year 2010 requires DOD to identify and report annually to the congressional defense committees on gaps in services for military families with special needs and to develop plans to address these gaps. However, DOD's most recent reports to the congressional defense committees did not address the relatively small number of SPs being created for families with special needs, or whether the Services are providing sufficient resources to ensure an appropriate number of family support providers. Federal internal control standards require that agencies establish control activities, such as developing clear policies, in order to accomplish agency objectives such as those of the Services' EFMPs. Without fully identifying and addressing potential gaps in family support across these programs, some families with special needs may not get the assistance they require, particularly when they relocate. Each Service Has Mechanisms to Monitor EFMP Assignment Coordination and Family Support Activities, but DOD Lacks Common Performance Measures and a Process to Fully Evaluate These Activities Each Service Has Mechanisms to Monitor Assignment Coordination and Family Support Each Service monitors EFMP assignment coordination and family support using a variety of mechanisms, such as regularly produced internal data reports. However, DOD has not yet established common performance measures to track the Services' progress in implementing its standard procedures over time or developed a process to evaluate the overall effectiveness of each Service's assignment coordination and family support procedures. DOD requires each Service to monitor implementation of its EFMP, including its procedures for assignment coordination and family support. To comply with this requirement, each Service has developed guidance that establishes monitoring protocols and assigns oversight responsibilities. Officials from each Service told us they use internal data reports from each installation to monitor assignment coordination and family support. To monitor assignment coordination, officials from each Service told us their headquarters reviews proposed assignment locations for families with special needs enrolled in the EFMP. These officials said monitoring proposed assignment locations helps ensure that enrolled families will be able to access required services at their new installations. In addition, Army officials said each Army unit commander is responsible for tracking the number of families with special needs that have expired enrollment paperwork because it affects assignment coordination worldwide. Several years ago, the Army determined that 25 percent of soldiers (over 13,000) enrolled in the EFMP had expired enrollment paperwork, complicating the task of considering each enrolled family's special medical or educational needs as part of proposed relocations. In response, in August 2011, the Army revised its policies and procedures for updating enrollment paperwork, which would help ensure a family member's special needs are considered during the assignment coordination process. To monitor family support provided by installations worldwide, officials from each Military Service told us they use a variety of mechanisms (see table 5). The Marine Corps pays particular attention to customer satisfaction.
Marine Corps officials told us that every three years Marine Corps headquarters administers a survey of family members enrolled in the EFMP. We previously reported that organizations may be able to increase customer satisfaction by better understanding customer needs and organizing services around those needs. This survey is one of the primary ways Marine Corps headquarters measures customer satisfaction with family support services at installations worldwide. Marine Corps officials also said this survey helps ensure its EFMP is based on the current needs of families with special needs. DOD Has Not Developed Common Performance Measures or Fully Developed a Process for Evaluating the Results of the Services' Monitoring Activities To improve its oversight of the EFMP and implement its policy for families with special needs, DOD, through OSN, has several efforts under way to standardize the Services' procedures for assignment coordination and family support. However, DOD has not developed common performance measures to monitor its progress toward these efforts and has not developed a process for assessing the Services' related monitoring activities. Federal internal control standards emphasize the importance of assessing performance over time and evaluating the results of monitoring activities. DOD Has Begun to Standardize Procedures To help improve family member satisfaction by addressing gaps in support that may exist between Services, OSN has begun to standardize procedures for assignment coordination and family support. To date, OSN's efforts have focused on ensuring each Service's EFMP considers the needs of family members during the assignment process and helps family members identify and gain access to community resources. According to OSN's April 2017 Report to Congress, the long-term goal of these efforts is to help ensure that all families with special needs enrolled in the EFMP receive the same level of service regardless of their Military Service affiliation or geographic location. In addition, OSN officials told us its standardized procedures will also help DOD perform required oversight by improving its access to Service-level data and its ability to validate each Service's monitoring activities. To date, efforts to standardize assignment coordination and family support have included developing new family member travel screening forms, which will be the official documents used during the assignment coordination process, and completing a DOD-wide customer service satisfaction survey on EFMP family support (see table 6). Despite these efforts to begin standardizing assignment coordination and family support services, DOD is unable to measure its progress in standardizing these procedures for families with special needs, or to assess the Services' performance of these processes, because it has not yet developed common metrics for doing so. Federal internal control standards emphasize the importance of agencies assessing performance over time. We have also reported on the importance of federal agencies engaging in large projects using performance metrics to determine how well they are achieving their goals and to identify any areas for improvement. By using performance metrics, decision makers can obtain feedback for improving both policy and operational effectiveness.
Additionally, by tracking and developing a baseline for all measures, agencies can better evaluate progress made and whether or not goals are being achieved—thus providing valuable information for oversight by identifying areas of program risk and causes of risks or deficiencies to decision makers. Through our body of work on leading performance management practices, we have identified several attributes of effective performance metrics relevant to OSN's work (see table 7). OSN officials said each Service is currently responsible for assessing the performance of its own EFMP, including the development of Service-specific goals and performance measures. OSN officials said that they recognize the need to continually measure the department's overall progress in implementing its policy for families with special needs, and are considering ways to do so. They also said they have encountered challenges in developing common performance measures. In addition, OSN officials said their efforts to reach consensus among the Services about performance measures for the overall EFMP are still ongoing because each Service wants to maintain its own measures, and DOD has not required them to reach a consensus. Absent common performance measures, DOD is unlikely to fully determine whether its long-standing efforts to improve support for families with special needs are being implemented as intended. DOD Does Not Systematically Review the Services' Monitoring Activities DOD requires each Service to monitor its own family readiness programs, including procedures for assignment coordination and family support through the EFMP, but lacks a systematic process to evaluate the results of these monitoring activities. To monitor family readiness services, as required by DOD, each Service must accredit or certify its family support services, including the EFMP, using standards developed by a national accrediting body not less than once every 4 years. In addition, personnel from each Service's headquarters are required to periodically visit installations as a part of their monitoring activities for assignment coordination, among other things. The Services initially had the Council on Accreditation accredit family support services provided through their installations' EFMPs using national standards developed for military and family readiness programs, according to the officials with whom we spoke. However, by 2016, each Service was certifying installations' family support services using standards that meet those of a national accrediting body, Service-specific standards, and best practices. According to officials from each Service with whom we spoke, this occurred due to changes in the funding levels allocated to this activity. Table 8 provides an overview of the certification process currently being used by each Service. OSN officials said they do not have an ongoing process to systematically review the results of the Services' activities, including the certification of EFMPs, because they choose to rely on the Services to develop their own monitoring activities and ensure they provide the desired outcomes. In doing so, DOD allows each Service to develop its own processes for certifying installations' family support services, including the selection of standards. In addition, OSN officials told us that efforts to standardize certification of EFMPs are ongoing because the Military Services have not been able to reach consensus on a set of standards that can be used across DOD for installations' family support services.
Further, OSN has not established a process to assess the results of the Services' processes for certifying installations' family support services. Federal standards for internal control state that management should evaluate the results of monitoring efforts—such as those the Services are conducting on their own—to help ensure they meet their strategic goals. The lack of such a process hampers OSN's ability to monitor the Services' EFMPs and determine the adequacy of such programs as required by the NDAA for Fiscal Year 2010. Conclusions OSN's job of developing a policy for families with special needs that will work across DOD's four Services is challenging given the size, complexity, and mission of the U.S. military. It has had to consider, among other things, the Services' mission requirements, resource constraints, and the myriad demands on servicemembers and their families during their frequent relocations. Anything that further complicates a relocation—such as not receiving the required family support services for family members with special needs—potentially affects readiness or, at a minimum, makes an already stressful situation worse. Because DOD provides little direction on how the Services should provide family support or what the scope of family support services should be, some servicemembers get more, or less, from the EFMP each time they relocate, including when a servicemember from one Service is assigned to a joint base led by another Service. By largely deferring to the Services to design, implement, and monitor their EFMPs' performance, DOD cannot, as required by the NDAA for Fiscal Year 2010, fully determine the adequacy of the Services' EFMPs in serving families with special needs, including any gaps in services these families receive, because it has not built a systematic process to do so. Instead, it relies on the Services to self-monitor and address, within each Service, the results of monitoring activities. However, because servicemembers relocate frequently and often depend on the EFMP of a Service other than their own, a view of EFMP performance across all of the Services is essential to ensuring, for example, that relocating servicemembers get consistent EFMP service delivery no matter where they are stationed. Evaluating and developing program improvements based on the results of the Services' monitoring would help DOD ensure the Services' EFMPs achieve the desired outcomes and improve its ability to assess the overall effectiveness of the program. Recommendations for Executive Action We are making the following three recommendations to DOD: We recommend the Secretary of Defense direct the Office of Special Needs (OSN) to assess the extent to which each Service is (1) providing sufficient resources for an appropriate number of family support providers, and (2) developing services plans for each family with special needs, and to include these results as part of OSN's analysis of any gaps in services for military families with special needs in each annual report issued by the Department to the congressional defense committees. (Recommendation 1) We recommend that the Secretary of Defense direct the Office of Special Needs (OSN) to develop common performance metrics for assignment coordination and family support, in accordance with leading practices for performance measurement. (Recommendation 2) We recommend that the Secretary of Defense implement a systematic process for evaluating the results of monitoring activities conducted by each Service's EFMP.
(Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to the Department of Defense (DOD) for comment. DOD provided written comments, which are reproduced in appendix IV. DOD also provided technical comments, which we incorporated as appropriate. DOD agreed with all three of our recommendations. In its written comments, DOD stated that additional performance metrics need to be developed for assignment coordination and that it is in the process of measuring families' satisfaction with family support provided through the EFMP. DOD also stated that it is developing plans for evaluating the results of each Service's monitoring activities for the EFMP. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense and Education, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology The National Defense Authorization Act (NDAA) for Fiscal Year 2017 includes a provision for GAO to assess the effectiveness of the Department of Defense's (DOD) Exceptional Family Member Programs (EFMP). This report focuses on the assignment coordination and family support components of the EFMP for dependents with special needs and examines: (1) the extent to which each Service has provided family support as required by DOD, and (2) the extent to which the Services monitor and DOD evaluates assignment coordination and family support. To address these objectives, we used a variety of data collection methods. Key methods are described in greater detail below. For both objectives, we reviewed relevant federal laws, regulations, and DOD guidance and documentation that pertain to the EFMP, including the following: The NDAA for Fiscal Year 2010, which established the Office of Special Needs and defined program requirements for assisting families with special needs, including assignment coordination and family support. DOD's guidance for administering the EFMP. We assessed how DOD implements the requirements in the NDAA for Fiscal Year 2010; how each Service implements assignment coordination and family support; and how the Services and DOD monitor assignment coordination and family support using performance measures. Specifically, we reviewed DOD Instruction 1315.19 - Exceptional Family Member Program; Service-specific guidance and related documents from the Air Force, Army, Marine Corps, and Navy; and DOD Instruction 1342.22 - Military Family Readiness. Standards for internal control in the federal government related to the documentation of responsibilities through policies, performance measures, and evaluating the results of monitoring activities. We compared each Service's procedures for monitoring assignment coordination and family support to these standards. To determine the extent of the Services' EFMP family support, we obtained and analyzed fiscal year 2016 EFMP data (the most recent available) for each Service. We reviewed DOD policy to identify data variables that each Service maintains related to its EFMP. We used these data to summarize key characteristics of each Service's EFMP.
The selected variables provided Service-wide and installation-specific EFMP information on the number of continental United States (CONUS) and outside the continental United States (OCONUS) installations; the number of servicemembers (sponsors) enrolled in the EFMP; the number of family members with special needs enrolled in the EFMP; the number of EFMP family support personnel; and the number of services plans created for families with special needs enrolled in the EFMP. We determined that the selected data variables from each Service are sufficiently reliable for the purposes of providing summary results about family support for fiscal year 2016. To learn more about how the Services implement their EFMPs, we visited seven installations in five states. We selected the seven installations based on their location in states with the largest number of military-connected students in school year 2012-2013 (the most recent available and reliable data) or in states with the largest percentage of students enrolled in U.S. DOD schools as of May 2017, as well as their status as a joint base. At each installation, we interviewed installation officials, EFMP managers, selected family support personnel, and family members and caregivers enrolled in the program. In states we visited that had the largest number of military-connected students, the EFMP personnel we interviewed collectively served 66 percent of students who attend local public schools and 42 percent of the students attending U.S. DOD schools. To obtain illustrative examples about how the EFMP serves families with special needs, we conducted seven group interviews with EFMP-enrolled family members and caregivers (one at each of the seven installations we visited). Using a prepared script, we asked participants to describe how they were identified and enrolled in the EFMP, how they were assigned to new installations, and the types of family support services they received. We also asked about how these services aligned with their family member's EFMP-eligible condition, the benefits and challenges they experienced, as well as their overall satisfaction. A total of 38 self-selected volunteers participated in the seven group discussions. While the participants in these groups included a variety of family members and caregivers, the number of participants and groups was very small relative to the total number of family members enrolled in the EFMP. Their comments are not intended to represent all EFMP-enrolled family members or caregivers. Other EFMP-enrolled family members and caregivers may have had other experiences with the program during the same period. Finally, for both objectives, we conducted interviews with a variety of DOD, Service-level, and nonfederal officials. We spoke with DOD officials from the Office of the Assistant Secretary of Defense–Offices of Manpower and Reserve Affairs, Military Community and Family Policy, Military Family Readiness Policy, and Special Needs. We also spoke with EFMP Managers from Air Force, Army, Marine Corps, and Navy headquarters. We also met with officials from selected national military family advocacy organizations including the National Military Family Association; the Military Family Advisory Network; and the Military Officers Association of America to discuss the EFMP. We conducted this performance audit from February 2017 to May 2018 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Services' Fiscal Year 2016 Exceptional Family Member Program Data Each Service has an Exceptional Family Member Program (EFMP) that provides support to military families with special needs. The tables below present the following information on selected EFMP and family support categories for each Service's program at continental United States (CONUS) and outside the continental United States (OCONUS) installations in fiscal year 2016: City, state or country; Number of exceptional family members; Number of family support providers (by Full-Time Equivalent); Number of family support provider vacancies; Number of services plans; Number of indirect contacts; and Number of direct contacts. The information below is listed sequentially in alphabetical order by Service. Appendix III: Issues Identified by Discussion Group Participants We held small group discussions with Exceptional Family Member Program (EFMP) participants at the seven military installations we visited. Family members and caregivers who attended each session reported they had children or spouses with EFMP-eligible conditions. The discussion group participants were self-selected, and their comments are not intended to represent all EFMP-enrolled family members or caregivers in fiscal year 2016. In addition, other EFMP-enrolled family members and caregivers may have had different experiences with the program during the same period. There were a total of 38 participants representing all the Services. The following issues were discussed by one or more participants during the small group discussions at the installations we visited. The issues that emerged relate to the current and future overall effectiveness of the EFMP. Overall Satisfaction with EFMP (Discussed by 30 of 38 participants): Measure of participants' approval of the family support services offered and experience with the EFMP. Many participants expressed overall satisfaction with the EFMP. Several participants expressed dissatisfaction with the EFMP. A participant expressed dissatisfaction with the lack of consistency in the provision of family support services (i.e., special education advocacy) across installations. School Liaison Officers (Discussed by 20 of 38 participants): Serve as the primary point of contact for school-related matters as well as assist military families with school issues. Several participants noted that they received no response to their request for assistance from their School Liaison Officer or that they received only general information. Several participants said School Liaison Officers were not helpful. Some participants found School Liaison Officers were helpful. Some participants were unaware of School Liaison Officers being available at their installation and the service(s) they provide. A few participants said School Liaison Officers did not follow up on requests for information. A participant noted there seems to be a disconnect between family support services provided through the EFMP and services provided by School Liaison Officers. Family Support Personnel (Discussed by 12 of 38 participants): Provide information and referral to military families with special needs.
Some participants at one installation noted that the EFMP was understaffed. Some participants at one installation noted high turnover of family support personnel. Some participants noted family support personnel did not provide support for their family with special needs. Stigma (Discussed by 12 of 38 participants): A perception that participating in the EFMP may limit a servicemember's assignment opportunities and/or compromise career advancement. Several participants believe there is still stigma associated with participating in the EFMP. Some participants said participating in the EFMP has not affected career advancement. Assignment Coordination (Discussed by 10 of 38 participants): The assignment of military personnel in a manner consistent with the needs of the armed forces that considers locations where care and support for family members with special needs are available. Some participants found the assignment coordination process challenging. Some participants described limitations with the assignment coordination process. A few participants noted there is a lack of information among families with special needs regarding how to express the need for stabilization and/or continuity of care. A few participants cited the challenges of assignment coordination as contributing to their decision to retire. One participant commented that the opinion of a medical professional was not reflected in the assignment coordination process. Special Education Services (Discussed by 10 of 38 participants): The provision of staff capable of assisting families with special needs with special education and disability law advice and/or assistance and attendance at individualized education program (IEP) meetings where appropriate. Several participants who had a family support provider assist them with preparing for or attending a school-based meeting, including IEP meetings, spoke positively of their experience(s). Some participants at one installation agreed that assistance from family support providers during meetings with school officials regarding special education services is helpful. A few participants who were unable to get assistance with special education services from the EFMP sought the services of private attorneys at their own expense. Family Support Services (Discussed by 9 of 38 participants): The non-clinical case management delivery of information and referral for families with special needs, including the development and maintenance of a services plan. Some participants found that family support providers were helpful. Some participants could not identify needed resources or were unaware of the resources or services available to them. One participant noted that the family support provider had minimal contact. One participant said navigating the system can be challenging. Surveys (Discussed by 8 of 38 participants): The process of collecting data from a respondent using a structured instrument and survey method to ensure the accurate collection of data. Several participants noted that they rarely, if ever, had the opportunity to evaluate the family support services provided through the EFMP. One participant noted that comment cards used by each Service are not effective for evaluating the EFMP. Warm hand-off (Discussed by 6 of 38 participants): Assistance in identifying needed supports or services and facilitating the initial contact or meeting with the next program. Many participants at one installation agreed that the warm hand-off process worked well for them.
Several participants said they found the warm hand-off process helpful when moving from one installation to the next. Outreach (Discussed by 5 of 38 participants): Developing partnerships with military and civilian agencies and offices (local, state, and national), improving program awareness, providing information updates to families, and hosting and participating in EFMP family events. Some participants found it difficult to obtain information regarding the types of family support services that are available. A few participants noted that communications regarding the EFMP were not targeted to address their needs. A few participants noted communications regarding the EFMP are untimely (e.g., newsletters not issued periodically). Joint Base Family Support Services (Discussed by 1 of 38 participants): Family support services provided by the lead Service of the joint base that is different from that of the servicemember enrolled in the EFMP. One participant said that using family support services on joint bases may pose a challenge because each Service has different rules and procedures and, as a result, provides different types of family support services. Appendix IV: Comments from the Department of Defense Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Bill MacBlane (Assistant Director), Brian Egger (Analyst-in-Charge), Patricia Donahue, Holly Dye, Robin Marion, James Rebbe, Shelia Thorpe, and Walter Vance made significant contributions to this report. Also contributing to this report were Lucas Alvarez, Bonnie Anderson, Connor Kincaid, Brian Lepore, Daniel Meyer, and Mimi Nguyen.
Why GAO Did This Study
Military families with special medical and educational needs face unique challenges because of their frequent moves. To help assist these families, DOD provides services plans, which document the support a family member requires. The National Defense Authorization Act for Fiscal Year 2017 included a provision for GAO to review the Services' EFMPs, including DOD's oversight of these programs. This report examines the extent to which (1) each Service provides family support and (2) the Services monitor and DOD evaluates assignment coordination and family support. GAO analyzed DOD and Service-specific EFMP guidance and documents; analyzed fiscal year 2016 EFMP data (the most recent available); visited seven military installations, selected for their large numbers of military-connected students; and interviewed officials responsible for implementing each Service's EFMP, as well as officials in OSN who administer DOD's EFMP policy.
What GAO Found
The support provided to families with special needs through the Department of Defense's (DOD) Exceptional Family Member Program (EFMP) varies widely for each branch of Military Service. Federal law requires DOD's Office of Special Needs (OSN) to develop a uniform policy that includes requirements for (1) developing and updating a services plan for each family with special needs and (2) resources, such as staffing, to ensure an appropriate number of family support providers. OSN has developed such a policy, but DOD relies on each Service to determine its compliance with the policy. However, Army and Navy officials said they have not received feedback from OSN about the extent to which their Service-specific guidance complies. Federal internal control standards call for developing clear policies to achieve agency goals. In addition, DOD's most recent annual reports to Congress do not indicate the extent to which each Service provides services plans or allocates sufficient resources for family support providers. According to GAO's analysis, the Military Services have developed relatively few services plans, and there is wide variation in the number of family support providers employed, which raises questions about potential gaps in services for families with special needs (see table). Each Service uses various mechanisms to monitor how servicemembers are assigned to installations (assignment coordination) and obtain family support, but DOD has not established common performance measures to assess these activities. DOD has taken steps to better support families with special needs, according to the DOD officials GAO interviewed. For example, DOD established a working group to identify gaps in services. However, OSN officials said that DOD lacks common performance measures for assignment coordination and family support because the Services have not reached consensus on what those measures should be. In addition, OSN does not have a process to systematically evaluate the results of the Services' monitoring activities. Federal internal control standards call for assessing performance over time and evaluating the results of monitoring activities. Without establishing common performance measures and assessing monitoring activities, DOD will be unable to fully determine the effect of its efforts to better support families with special needs and the adequacy of the Services' EFMPs as required by federal law.
What GAO Recommends
GAO makes a total of three recommendations to DOD.
DOD should assess and report to Congress on the extent to which each Service provides sufficient family support personnel and services plans, develop common performance metrics for assignment coordination and family support, and evaluate the results of the Services' monitoring activities. DOD agreed with these recommendations; it plans to develop performance metrics for assignment coordination and to develop plans for evaluating the Services' monitoring activities.
Background
Global Train and Equip Authority to Build Foreign Partner Capacity
DOD has used the Global Train and Equip program to provide training, equipment, and small-scale military construction activities intended to build the capacity of partner nations' military forces to conduct counterterrorism operations. The program was originally authorized under Section 1206 of the 2006 NDAA and has been amended several times. The 2015 NDAA permanently authorized the Secretary of Defense, with concurrence of the Secretary of State, to conduct programs to (1) build the capacity of a foreign country's national military forces to conduct counterterrorism operations or participate in, or support, ongoing allied or coalition military or stability operations that benefit the national security interests of the United States; (2) build the capacity of a foreign country's national maritime or border security forces to conduct counterterrorism operations; and (3) build the capacity of a foreign country's national-level security forces that have among their functional responsibilities a counterterrorism mission in order for such forces to conduct counterterrorism operations. The fiscal year 2017 NDAA repealed Section 2282 of Title 10 of the U.S. Code and created Section 333 of the same title (Section 333). Section 333 authorized DOD to continue providing training and equipment to the national security forces of foreign countries for the purpose of building the capacity of such forces to conduct counterterrorism operations, among other things. The fiscal year 2017 NDAA also contained several administrative and organizational instructions for the management and oversight of DOD security cooperation policy. According to DOD, counterterrorism and stability operations assistance generally consists of security capability projects that fortify a partner nation's land, sea, or air capability. Projects often provide equipment or training intended to build partner communications, intelligence, surveillance, and reconnaissance capabilities. Figure 1 shows an example of a UH-60 helicopter—a type of equipment that has been provided through Global Train and Equip projects.
U.S. Security Assistance Policy
Presidential Policy Directive 23, published in April 2013, was aimed at strengthening the ability of the United States to help allied and partner nations build their own security capacity. The directive states that U.S. agencies should target security sector assistance where it can be effective. The directive identifies principal goals of, and guidelines for, security sector assistance that highlight the importance of including the following four planning elements in project design and execution:
identifying objectives that address partner nation needs;
considering partner nations' capacity to absorb U.S. assistance;
integrating assessment, monitoring, and evaluation to provide policymakers, program managers, and implementers with information and evidence necessary to make effective decisions and maximize program outcomes; and
anticipating sustainment needs.
Global Train and Equip Program Management and Project Planning
During the reporting period covered by this review, DOD's Office of the Assistant Secretary of Defense for Special Operations/Low-Intensity Conflict was responsible for providing policy guidance and oversight of the Global Train and Equip program.
The office coordinated with State’s Bureau of Political-Military Affairs and other stakeholders in an interagency process to solicit project proposals annually, in accordance with guidance that DOD revises each year to reflect lessons learned, congressional concerns, and other considerations. DOD 2016 and 2017 guidance implements Presidential Policy Directive 23, requiring that project proposals for the Global Train and Equip program address the four planning elements highlighted in the directive. Figure 2 illustrates the conceptual framework of the project proposal, approval, and implementation processes in 2016 and 2017. According to DOD officials, various elements of the proposal development, review, selection, and notification process occurred simultaneously, as proposal submission and review occurred on a rolling basis and agency-approved projects were notified to Congress in multiple groups throughout each fiscal year. As figure 2 shows, DOD instituted some changes to the proposal development and approval process for projects notified to Congress in 2017. According to DOD officials, for 2017, geographic combatant commands and embassy staff first submitted high-level concepts for review rather than fully drafted project proposals. These concepts were intended to provide information on project objectives for an interagency working group’s review and approval before further resources were committed to developing full proposals. DOD officials told us that the 2017 process remains in place for 2018 and 2019 projects. DOD officials said that in prior years, including 2016, geographic combatant commands and embassy staff were required to draft full proposals without confirmation that DOD and State would approve the proposals for notification to Congress. In 2016 and 2017, DOD and State officials reviewed proposals— approved by the geographic combatant command and ambassador or chief of mission—and selected projects to recommend to the Secretaries of Defense and State. Following approval by the Secretary of Defense, with concurrence from the Secretary of State, DOD prepared and submitted congressional notifications for each project it intended to fund through the program. These notifications summarized project information such as the project’s objectives, the partner nation’s absorptive capacity, the baseline assessment of the recipient unit’s capabilities, and arrangements for the project’s sustainment. Congressional notifications were submitted for each project to the appropriate committees at least 15 days before activities were initiated. According to DOD, project implementation did not begin immediately after the 15-day notification period if congressional staff requested additional time for briefings and for DOD to ensure that the congressional committees agreed with the proposed activities. After congressional notification, DOD’s Defense Security Cooperation Agency assumed responsibility for overseeing the obligation of funds for training and equipment procurement before the end of the relevant fiscal year, while officials from the security cooperation office at U.S. embassies were responsible for coordinating in-country project implementation. DOD planned to conduct assessments of selected projects 12 to 18 months after delivering major project components, to evaluate the extent to which U.S. assistance has contributed to building recipient unit capabilities and the extent to which the partner nation applied its capabilities consistent with the project’s intent. 
DOD Has Obligated the Majority of Over $4 Billion Allocated for Global Train and Equip Projects since 2009 and Disbursed About Two-Thirds of Obligated Funds
Of the $4.1 billion allocated for Global Train and Equip projects in 2009 through 2017, DOD has obligated approximately $3.7 billion and disbursed $2.5 billion. Table 1 details Global Train and Equip program funding, by fiscal year of appropriation, in 2009 through 2017. As table 1 shows, DOD reported no unobligated balances as of December 2017. Figure 3 details Global Train and Equip allocations in 2009 through 2017, according to the fiscal year in which DOD allocated the funds. As figure 3 shows, allocations averaged about $276 million in 2009 through 2014 and about $827 million in 2015 through 2017. DOD's allocations for Global Train and Equip activities increased from $675 million in 2015 to about $1.2 billion in 2016 because of an influx of funding from the Counterterrorism Partnerships Fund, which was created in 2015 and authorized to fund Global Train and Equip projects. In addition, in 2015, DOD allocated funds from the European Reassurance Initiative, which also was created that year and authorized to fund Global Train and Equip projects. DOD's allocations for Global Train and Equip activities for 2017 totaled $635 million. DOD concentrated allocations of Global Train and Equip funding in 2016 and 2017 on projects for Jordan and Lebanon, which received a combined total of $856 million, or 47 percent of total allocations during that period (see fig. 4). In 2016, allocations for projects in Jordan and Lebanon amounted to about $579 million—nearly 50 percent of approximately $1.2 billion in total allocations that year. In 2017, allocations for projects in those countries amounted to about $279 million—44 percent of $635 million in total allocations. For more information about allocations for specific Global Train and Equip projects in 2016 and 2017, see appendix II.
DOD Consistently Addressed Only One of Four Planning Elements in 2016 and 2017 Proposals but Reported Efforts to Ensure Inclusion of All Elements in 2018
DOD's 2016 and 2017 proposals for Global Train and Equip projects consistently addressed only one of the four security assistance planning elements called for by DOD guidance, but agency officials reported implementing an informal process to improve coverage of these planning elements in 2018 proposals. DOD's 2016 and 2017 guidance for Global Train and Equip project proposals called for proposal packages to address (1) project objectives, (2) partner nation absorptive capacity, (3) baseline assessments of partner nation capabilities, and (4) project sustainment needs. All 72 proposal packages we reviewed for 2016 and 2017 included project objectives. Slightly more than 30 percent of proposal packages in 2016 and over 80 percent in 2017 included information about partner nations' absorptive capacity, compared with 19 percent in 2015 (see fig. 5). More than 90 percent of 2016 and 2017 proposal packages included baseline assessments, in contrast to 63 percent in 2015. However, less than three-quarters of proposal packages in 2016 and 2017 included complete sustainment plans, with the percentage that did so declining from 73 percent in 2016 to 68 percent in 2017. Although DOD's 2016 and 2017 guidance called for proposals to address sustainment planning, it did not provide instructions for doing so when sustainment was not anticipated.
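The coverage rates cited above reduce to simple tallies across the reviewed proposal packages. The following is a minimal sketch, using hypothetical proposal records rather than GAO's actual review data, of how such rates might be computed.

```python
# Minimal sketch with hypothetical records (not GAO's review data):
# tally the share of proposal packages that address each of the four
# required security assistance planning elements.
proposals = [
    {"objectives": True, "absorptive_capacity": False,
     "baseline_assessment": True, "sustainment_plan": True},
    {"objectives": True, "absorptive_capacity": True,
     "baseline_assessment": True, "sustainment_plan": False},
    # ... one record per reviewed proposal package ...
]

elements = ("objectives", "absorptive_capacity",
            "baseline_assessment", "sustainment_plan")

for element in elements:
    addressed = sum(1 for p in proposals if p[element])
    print(f"{element}: {addressed}/{len(proposals)} "
          f"({addressed / len(proposals):.0%})")
```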
According to DOD officials, the department has hired additional staff and developed an informal quality review process to better ensure that proposal packages include all key elements but, as of February 2018, had not documented this process as written policy. Standards for Internal Control in the Federal Government calls for documenting internal control activities aimed at ensuring effective use of resources and documenting in policies an organization's internal control responsibilities. More complete information about each of the four planning elements—including sustainment costs, even when negligible—would improve DOD's ability to plan and allocate funding for the program, while formalizing the quality review process would also enable DOD to provide greater consistency in its oversight of project development.
Project Proposals in 2016 and 2017 Consistently Addressed Only Project Objectives but Improved Coverage of Absorptive Capacity and Baseline Assessments
All 2016 and 2017 Project Proposals Included Information about Project Objectives
We found that DOD included information that addressed project objectives in all 72 proposals for Global Train and Equip projects in 2016 and 2017. We previously reported that all 2015 proposals for the program addressed project objectives. DOD's guidance notes that it is important for geographic combatant commands and chiefs of mission to produce proposals that include a clear narrative about how the proposed capability-building effort will fit into the theater campaign plans and integrated country strategies and advance U.S. interests. DOD officials from one geographic combatant command noted that 2017 Global Train and Equip project objectives were initially developed at the country level by the Security Cooperation Office and other embassy personnel and were based on theater campaign plans. Each proposal we reviewed from 2016 and 2017 outlined the objectives for the project. For example, one proposal stated that the training and equipment outlined in the proposal would enhance the partner nation's armed forces' ability to effectively conduct border security, counterincursion, and other night operations.
Less Than Half of 2016 Project Proposals Included Information about Absorptive Capacity, but Most 2017 Proposals Addressed This Element
DOD improved its efforts to include information about partner nations' absorptive capacity in Global Train and Equip project proposals in 2016 and 2017. Thirty-two percent (13 of 41) of 2016 proposals and 84 percent (26 of 31) of 2017 proposals addressed this planning element. We previously reported that less than 20 percent (10 of 54) of 2015 proposals addressed absorptive capacity. Before 2017, DOD guidance called for project proposals to address absorptive capacity, but the project proposal template did not include a required field for it. However, DOD updated its proposal template in 2017 to include a required field for analyzing and assessing the partner nation's security forces' current capability and current performance level in employing the proposed counterterrorism capabilities while serving in the desired counterterrorism role. According to DOD officials, they updated the proposal template to better identify problems with absorptive capacity because of its importance and because it is an area of high congressional interest. DOD assessments of partner nations' absorptive capacity noted a range of abilities to absorb assistance.
For example, DOD assessed one country as having the capacity to immediately employ new equipment once training was completed and assessed another country's ability to absorb training and equipment as average, noting that previous training had resulted in continuous improvements. DOD officials acknowledged that assessing absorptive capacity has been a consistent challenge. One senior official also noted that pressing national security goals, such as quickly developing the capabilities of strategic partners for ongoing operations, required the U.S. government to assume some risk by supporting a project without fully assessing or documenting a partner nation's absorptive capacity.
Most Project Proposals Included Baseline Assessments in 2016 and 2017
We found that 92 percent (66 of 72) of 2016 and 2017 Global Train and Equip proposal packages included baseline assessments, compared with 63 percent (34 of 54) of 2015 proposal packages. DOD's assessment framework is based on a dual-purpose document that includes portions for assessing the recipient unit's capabilities at baseline—that is, before a project begins—and after project delivery and implementation. DOD's 2016 and 2017 program guidance states that a baseline assessment of recipient unit capabilities should be completed prior to submission of each proposal. According to DOD officials, baseline assessments are the primary mechanisms to identify and document the recipient unit's capabilities at the time the project is proposed and its needs to improve its capabilities to meet its mission. The baseline assessments are intended to be submitted with project proposals and later used for project outcome assessments by assessment teams, policy officials, embassy staff, and other stakeholders.
Less Than Three-Quarters of Proposals Included Complete Sustainment Plans in 2016 and 2017
Less than three-quarters of Global Train and Equip proposals included complete sustainment plans in 2016 and 2017, and the percentage of proposals with complete plans declined from 2016 to 2017. While 73 percent (30 of 41) fully addressed this planning element in 2016, 68 percent (21 of 31) fully addressed it in 2017. We previously reported that 76 percent of 2015 proposals included complete sustainment plans. According to DOD's Global Train and Equip guidance for 2016 and 2017, complete sustainment plans include three elements: (1) an identification of funding sources for project sustainment, (2) an estimate of the annual sustainment costs, and (3) an assessment of the sustainment capability of the partner nation. Most 2016 and 2017 proposals included information about sustainment funding sources and the partner nation's sustainment capability. However, the percentage of proposals that estimated annual sustainment costs varied: 85 percent of proposals estimated sustainment costs in 2016 and 71 percent of proposals estimated such costs in 2017. DOD officials told us that sustainment costs may not have been documented in some cases if sustainment was not expected to be a significant factor in the proposed project. For example, officials explained that some projects provided assistance, such as ammunition and training, that is expendable and does not require sustainment. Officials also noted that other projects provided assistance that may not have been intended to be sustained. For instance, long-term sustainment would be unnecessary for a project with a discrete objective, such as providing equipment to allow for closer coordination with U.S.
and North Atlantic Treaty Organization forces in support of the International Security Assistance Force–Afghanistan. Nevertheless, DOD officials said that when project sustainment is not anticipated, proposals for the projects should explain why sustainment costs are not included. DOD's 2015 guidance for Global Train and Equip proposals included instructions for addressing sustainment planning when sustainment is not anticipated; however, the guidance for 2016 and 2017 did not include these instructions. Standards for Internal Control in the Federal Government states that internal control activities aimed at ensuring effective use of resources should be clearly documented and that documentation should be readily available for examination. Updating the guidance for Global Train and Equip proposals to include instructions addressing sustainment planning when sustainment is not anticipated would help ensure decision makers' access to complete information on annual sustainment costs, including costs expected to be negligible.
DOD Recently Implemented an Informal Process to Ensure Proposals Address All Four Planning Elements but Has Not Formalized the Process as Policy
To improve management of the Global Train and Equip program, DOD officials told us that they developed an informal quality review process designed to ensure that proposals in 2018 and subsequent years address required elements. According to DOD officials, this informal process includes the following steps:
Interagency "red teams" evaluate each proposal line by line to verify that the proposal is complete.
Proposals with missing elements are returned to the drafters for revision and reevaluation.
After proposals clear interagency review, senior DOD officials also review the proposals for completeness before approving them.
According to DOD officials, the department is developing this process as part of its review and approval of proposals under the new Section 333 authority to build partner capacity and is in the process of hiring staff to support this effort. For example, in February 2018, DOD officials said they had created a position for a full-time contractor who will be based at headquarters and charged with verifying that proposal packages include all required security assistance planning elements. DOD officials told us in February 2018 that they were also soliciting feedback on the process from relevant stakeholders. However, according to the officials, DOD had not yet determined whether to formalize the proposal review process as written policy. According to Standards for Internal Control in the Federal Government, management should document in policies the internal control responsibilities of an organization. Formalizing as written policy its informal process to ensure that proposals address all four required planning elements would enable DOD to provide consistent oversight of Global Train and Equip project development and ensure decision makers have access to complete information about each element. Such information would, in turn, help DOD and State decision makers to ensure the efficient use of funding under the new Section 333 authority.
DOD Reported Progress in Achieving Project Objectives, Factors Limiting Progress, and Efforts to Improve Assessments
DOD reporting on the achievement of Global Train and Equip project objectives in 2016 and 2017 indicated progress in building partner capacity to combat terrorism and conduct stability operations as well as factors that affected the progress achieved.
According to DOD assessment reports and supporting documents, partner nation recipient units' overall capabilities were greater after implementation of 8 of 21 Global Train and Equip projects, and some of the remaining 13 projects produced some positive results. (See app. III for the number of assessment reports DOD prepared for projects implemented in 2006 through 2015, out of the total number of projects implemented in those years.) DOD documents and officials also identified several factors—including proposal design weaknesses, equipment suitability and procurement issues, partner nation shortfalls, and workforce management challenges—that may have affected the extent to which DOD was able to achieve project objectives. DOD officials described several changes they are making to improve assessments of Global Train and Equip projects.
Reports on Projects Assessed in 2016 and 2017 Indicate Some Progress in Building Partner Capacity
DOD assessment reports for 2016 and 2017, which included baseline and post-implementation assessments of recipient units' capabilities for 21 Global Train and Equip projects, indicated some progress in building partner capacity. For 8 of the 21 projects, the recipient units' capability levels were assessed as having increased by at least one rating level after the project's implementation (see fig. 6). Although the recipient units for the remaining 13 projects were assessed as showing no change in capability levels, the assessment reports for some of these projects described some positive project outcomes. For example, one 2017 assessment report of a project initiated in 2015 found that, while the recipient unit had not yet been integrated into the special operations force (a stated goal of the project), the project had resulted in some increased capacity for the recipient unit. Specifically, the assessment found that the project increased the recipient unit's capability to support counterterrorism operations while also enhancing command and control capabilities and interoperability. Further, the 2016 assessment report for several related projects in one country found that, although the recipient unit had not increased its overall capability level, the equipment provided by the Global Train and Equip projects had assisted the recipient unit in executing its border security mission. Additionally, the 2016 assessment report for a 2010 project found that, whereas the recipient unit's overall capability level had not changed, the unit's abilities to conduct internal defense operations throughout the country had increased as a result of Global Train and Equip assistance. To conduct the assessments, DOD uses a standard framework for evaluating the capabilities and performance of each recipient unit before and after a project has been implemented. For the baseline assessments, DOD rates the recipient unit's level of capability and performance on a 5-point scale; 1 is defined as the ability to perform some basic tasks to at least a low standard of performance and 5 is defined as the ability to perform most of the advanced tasks for the unit's missions and to operate almost continuously throughout its assigned area of operations. After project implementation, DOD uses the same 5-point scale to identify any changes in the recipient unit's level of capability and performance since receiving the assistance.
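As a simple illustration of this before-and-after comparison, the sketch below uses hypothetical ratings on the 5-point scale, not DOD's assessment data, to count how many projects improved by at least one capability level.

```python
# Minimal sketch with hypothetical ratings (not DOD's assessment data):
# compare baseline and post-implementation ratings on the 5-point scale.
assessments = [
    {"project": "A", "baseline": 2, "post_implementation": 3},
    {"project": "B", "baseline": 3, "post_implementation": 3},
    {"project": "C", "baseline": 1, "post_implementation": 3},
]

improved = sum(1 for a in assessments
               if a["post_implementation"] - a["baseline"] >= 1)
unchanged = sum(1 for a in assessments
                if a["post_implementation"] == a["baseline"])

print(f"{improved} of {len(assessments)} projects gained at least one level")
print(f"{unchanged} of {len(assessments)} projects showed no change")
```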
As we have previously reported, these ratings do not represent only the effect of the provision of training and equipment on the recipient unit's capability and performance, as other factors may contribute to changes in performance level.
DOD Reports and Officials Described Several Factors That Can Limit Achievement of Global Train and Equip Objectives
DOD's assessment reports and supporting documents, as well as agency officials we interviewed, described several factors that can affect the extent to which DOD is able to achieve Global Train and Equip project objectives. These factors—project design weaknesses, equipment suitability and procurement issues, partner nation shortfalls, and workforce management challenges—are consistent with the challenges noted in our April 2016 report.
Project design weaknesses. According to DOD assessment reports, project designs that did not adequately reflect a partner nation's ability to contribute resources to a project or sufficiently address recipient unit needs and capabilities challenged the achievement of project objectives. For example, DOD's 2016 assessment of several projects in one partner nation indicated that small-scale construction projects often present problems in achieving objectives. According to the assessment, these problems are largely due to the limited number and capability of construction firms willing to bid on work in remote locations and a dollar ceiling for small-scale projects ($750,000) that often cannot cover all expenses at such sites. The assessment found that relying on a partner nation to provide the additional funds frequently results in the construction not being completed. In addition, DOD's 2016 assessment report indicated a problem with the adequacy of an airplane spare-parts package provided in some Global Train and Equip projects. The assessment found that the Cessna Caravan spare parts, intended to cover 2 years of maintenance, proved insufficient for high-speed combat flight operations. (See fig. 7 for an example of a Cessna Caravan at a partner nation airbase.) The report also noted that this problem had been identified in other Global Train and Equip projects that included spare-parts packages for Cessna Caravans. The report indicated that the equipment manufacturers determine the package contents without regard to the unique operational and environmental conditions in the receiving partner nation.
Equipment suitability and procurement issues. A lack of suitability of equipment provided by Global Train and Equip projects, as well as problems with procuring the equipment, can make it difficult to achieve desired capability-building objectives. For example, a 2017 assessment report of a 2015 project found that size distributions for body armor and helmets were not aligned with the general size requirements—an issue that had been identified in other countries receiving Global Train and Equip assistance. Additionally, the assessment noted that consideration was not given to providing body armor with built-in buoyancy for personnel operating in a maritime environment. Further, the assessment noted that bright orange life jackets were provided as tactical equipment, when a subdued color would have been more appropriate. Moreover, the 2016 assessment report found that equipment procurement issues in a 2012 project caused maintenance problems for the partner country. According to the report, the U.S.
Army did not have an existing contract to obtain diesel vehicles from the manufacturer specified in the project proposal and congressional notification and therefore used an existing contract to obtain vehicles from a different manufacturer. The assessment observed that, while delivery of available vehicles provides some value, in this case it created maintenance problems for the partner nation because there was no dealership in the country to provide repairs and spare parts for the vehicles. The assessment found that in such situations it may be best to delay fulfillment until a contract is available to procure vehicles from the specified manufacturer.
Partner nation shortfalls. Shortfalls of partner nations, including not using assistance for the envisioned purposes, inability to maintain and sustain equipment, and difficulty in manning and training recipient units, can negatively affect the achievement of project objectives. For example, the 2016 assessment report for a 2015 project found that, although the recipient unit was able to plan and execute more complex operations to combat regional threats, such as Boko Haram, in a professional manner, the assessment team received no evidence that the unit had played more than a minor role in counter–Boko Haram operations. In a separate review of a partner nation's Global Train and Equip projects, the 2016 assessment found that the recipient unit had difficulties in maintaining weapons in a fully mission-capable status. The assessment found that a number of the unit's small arms were old and many had warped barrels, making them much less accurate. A 2017 assessment of a 2013 project found that the recipient unit suffered from shortages of junior noncommissioned officers and officers. The unit was also found to have few soldiers in specialty jobs who had received school training. The assessment report acknowledged that certain conditions in the partner nation, such as low levels of education, presented a multitude of problems in ensuring the development and maintenance of national security forces capable of working with, and integrating, a range of modern combat systems.
Workforce management challenges. DOD officials indicated that workforce challenges, particularly related to turnover and staffing levels, can inhibit effective project design, program implementation, and oversight. DOD officials acknowledged that staff turnover, an issue that we previously identified, remains a challenge. According to the officials, there is a high degree of institutionalized turnover, particularly among security cooperation officers, at U.S. embassies and to some extent within the geographic combatant commands. As a result, the officials overseeing project implementation may not have been responsible for project development and are less likely to understand the capabilities of the intended recipient units or the capability gaps that could be addressed by equipment and training. DOD officials also told us that they have been challenged to meet programmatic demands with current staffing levels, particularly given the influx of funds appropriated for the Counterterrorism Partnerships Fund in 2015. DOD officials said that the volume of Global Train and Equip projects expanded with the large increase in funding in 2015 and 2016, which stressed the foreign military sales system as well as geographic combatant commands' ability to plan for, and manage, the program with existing resources.
For example, DOD officials said that teams of three staff at geographic combatant commands were managing over three times more funding than in prior years. As a result, staff were unable to maintain consistent levels of due diligence on issues such as ensuring that proposal packages addressed absorptive capacity and sustainment planning. According to DOD officials, negative effects of this inconsistent due diligence included the arrival of equipment not suitable for operations and overestimation of one partner nation's absorptive capacity, necessitating unplanned training and resulting in project delays. DOD officials said that they are now in the process of acquiring additional staffing to address capacity constraints.
DOD Officials Described Several Ongoing Changes to Improve Assessments
DOD officials told us that they are in the process of evaluating the effectiveness of the assessment process conducted in 2016 and 2017 and described a variety of changes that they are making to improve assessments of Global Train and Equip projects. DOD officials acknowledged that baseline and post-implementation assessments, as well as monitoring activities, had been conducted inconsistently in prior years, including for the projects developed and implemented in 2016 and 2017. DOD officials said that staffing constraints were a contributing factor. In March 2017, we also identified some weaknesses in the design of evaluations for Global Train and Equip projects and recommended that DOD develop a plan for improving the quality of these evaluations. While prior laws required DOD to conduct assessments and evaluate the program's effectiveness, the fiscal year 2017 NDAA requires that DOD maintain a program of assessment, monitoring, and evaluation in support of the agency's security cooperation programs and activities. Given the requirements for an assessment, monitoring, and evaluation program, and recognizing the importance of improving the assessment processes, DOD officials said they are developing an enhanced assessment process that includes increased staffing dedicated to monitoring and evaluation. For example, DOD officials said that they had hired several full-time contractors to perform key tasks related to monitoring and evaluation. According to the officials, several full-time contractor positions will be located in the various geographic combatant command locations, with responsibilities to
develop baseline assessments in coordination with the geographic combatant commands and oversee the quality and completeness of those assessments;
write performance indicators and performance plans into every Global Train and Equip project proposal;
conduct monitoring and provide reports to the geographic combatant command and to the Defense Security Cooperation Agency on the status of project objectives and performance indicators; and
conduct annual, independent evaluations to assess a few Global Train and Equip projects in detail.
In addition, DOD officials stated that they had hired a full-time contractor who will be based at headquarters and provide further support for each geographic combatant command and who will be charged with documenting that baseline assessments were completed and conducting quality reviews of assessment-related documents.
Conclusions
The Global Train and Equip program is a critical tool for building partner capacity to counter terrorism worldwide, and allocations for the program totaled more than $4.1 billion in 2009 through 2017.
DOD has established an interagency process to develop and select Global Train and Equip projects that takes into account four required security assistance planning elements. However, although DOD consistently addressed project objectives in its 2016 and 2017 project proposals, DOD did not consistently address the other three planning elements. In addition, DOD guidance no longer includes instructions for addressing one of these elements, sustainment planning, in proposals for projects for which DOD does not intend or anticipate sustainment. Updating its guidance to include such instructions would help ensure decision makers' access to complete information on annual sustainment costs, even costs anticipated to be negligible. Moreover, although officials reported having recently developed an informal quality review process designed to ensure that proposal packages address all required planning elements, DOD has not formalized this process as written policy. Formalizing the process would enhance DOD's ability to provide consistent oversight of project development and to ensure that decision makers have access to complete information about each planning element for proposed projects. This information would, in turn, help DOD and State decision makers ensure the efficient use of funding under the new Section 333 authority to build partner capacity.
Recommendations for Executive Action
We are making the following two recommendations to DOD:
The Director of the Defense Security Cooperation Agency should update guidance for project proposal packages to require an explanation when sustainment plans are not documented for projects for which sustainment is not intended or anticipated. (Recommendation 1)
The Director of the Defense Security Cooperation Agency should formalize as written policy its informal process for ensuring that project proposal packages fully address and document all four required security assistance planning elements. (Recommendation 2)
Agency Comments
We provided a draft of this report to DOD and State for comment. In its comments, DOD concurred with our recommendations and noted that the Defense Security Cooperation Agency will seek to update guidance for project proposal packages. DOD's comments are reproduced in appendix IV. State did not provide comments. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense and State, and the Director of the Defense Security Cooperation Agency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5130 or mazanecb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology
The Carl Levin and Howard P. "Buck" McKeon National Defense Authorization Act (NDAA) for Fiscal Year 2015 contains a provision for GAO to conduct biennial audits of such program or programs conducted or supported pursuant to 10 U.S.C. § 2282 during the preceding 2 fiscal years.
This report examines (1) the status of funding that the Department of Defense (DOD) allocated for Global Train and Equip projects in 2009 through 2017; (2) the extent to which DOD addressed security assistance planning elements in project proposals in 2016 and 2017; and (3) DOD's reporting on the achievement of Global Train and Equip project objectives and any factors affecting its ability to achieve those objectives. To address these objectives, we analyzed funding data, program guidelines, project proposal documents, and congressional notifications. We discussed the funding data, project proposal process and key elements of project planning, documentation, and assessment with officials from DOD and the Department of State (State); geographic combatant commands in whose areas of responsibility partner nations received 2016 or 2017 assistance—the U.S. Africa Command, the U.S. Central Command, and the U.S. European Command; and the U.S. embassies in Jordan, Niger, and Uganda. We selected these countries on the basis of their having received a higher proportion of DOD's allocations for the Global Train and Equip program in fiscal years 2016 and 2017; we also considered factors such as the number of project assessments conducted in each country, the maturity of projects, embassy officials' project assessment experience, and the countries' geographic distribution. To identify the status of funding that DOD allocated for Global Train and Equip projects in fiscal years 2009 through 2017, we assessed funding data for 2009 through 2017. DOD provided data on allocations, amounts reallocated, unobligated balances, unliquidated obligations, and disbursements of funds for program activities according to the fiscal year when the funds were appropriated. We analyzed these data to determine the extent to which funds had been allocated, obligated, and disbursed. DOD also provided data on project funding by year of allocation. We used these data to report allocations for Global Train and Equip projects by fiscal year and recipient country. We assessed the reliability of these data by interviewing cognizant agency officials and comparing the data with previously published data. We determined that the data were sufficiently reliable for our purposes. To assess the extent to which DOD addressed key elements of security sector assistance for projects it planned to implement in 2016 and 2017, we analyzed agency documents and interviewed agency officials. We reviewed Presidential Policy Directive 23 on Security Sector Assistance, which identified four key elements to be considered for security sector assistance programs: (1) project objectives that address partner needs, (2) the absorptive capacity of the recipient unit, (3) the baseline capabilities of the recipient unit, and (4) the arrangements for the sustainment of the project. We also reviewed DOD guidance, which requires these elements to be considered in project proposal development. To determine the extent to which DOD addressed these elements in project proposals, we analyzed the content of agency-approved project proposals in 2016 and 2017. Two reviewers independently analyzed 41 proposal packages for 2016 and 31 proposal packages for 2017. The reviewers resolved any disagreements through discussion of the information used to make their independent determinations.
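To illustrate the mechanics of this dual-reviewer approach, the following sketch uses hypothetical codings, not the actual review records, to flag proposals on which the two reviewers' independent determinations differed and therefore required discussion.

```python
# Minimal sketch with hypothetical codings (not the actual review
# records): identify proposals where two independent reviewers reached
# different determinations, so each disagreement can be resolved by
# discussion.
reviewer_a = {"proposal_01": True, "proposal_02": False, "proposal_03": True}
reviewer_b = {"proposal_01": True, "proposal_02": True, "proposal_03": True}

needs_discussion = [pid for pid in reviewer_a
                    if reviewer_a[pid] != reviewer_b[pid]]
print("Disagreements to resolve:", needs_discussion)  # ['proposal_02']
```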
We also interviewed State and DOD officials who develop and review proposals, discussing (1) how they use information in the project proposal packages to consider planning elements and (2) other factors they may consider in developing and reviewing proposals. Further, we reviewed congressional notifications DOD developed subsequent to agency approval of Global Train and Equip projects to determine the extent to which those documents included information about the four planning elements. With respect to our reporting on baseline assessments, congressional notifications lay out a standardized assessment framework to be used to assess the effects of projects. This framework includes a baseline assessment that DOD requires to be completed for inclusion in project proposal packages. DOD provided baseline assessments for 38 of 41 project proposals notified to Congress in 2016 and 30 of 31 project proposals notified to Congress in 2017. To evaluate the completeness of the required baseline assessment sections, we compared these 38 baseline assessment documents included in 2016 project proposal packages and 30 baseline assessment documents in 2017 project proposal packages with DOD internal guidance. To assess the completeness of sustainment plans, we used DOD's Global Train and Equip guidance for 2016 and 2017, which defined complete sustainment plans to include three elements: (1) an identification of funding sources for project sustainment, (2) an estimate of the annual sustainment costs, and (3) an assessment of the sustainment capability of the partner nation. To examine DOD reporting on the achievement of project objectives in 2016 and 2017, we reviewed agency documents and interviewed agency officials. In particular, we analyzed DOD's annual project assessment reports and supporting documents for 2016 and 2017 as well as the assessment framework handbook. DOD submitted an annual assessment report to Congress in 2016 but was not required to submit an annual assessment report in 2017. As a result, DOD prepared country-level assessments in 2017 but did not compile them and submit them to Congress as it did in 2016. To examine the extent to which DOD's assessments and supporting documents indicated progress in building partner capacity, we compared baseline assessments of recipient unit capability and performance levels, conducted when projects were proposed, with post-implementation assessments of recipient unit capability levels, conducted after the delivery of program assistance. DOD uses a standard framework for evaluating the capabilities and performance of each recipient unit. Baseline assessments rate the recipient unit's level of capability and performance before project implementation on a 5-point scale, with 1 defined as the ability to perform some basic tasks to at least a low standard of performance and 5 defined as the ability to perform most of the advanced tasks for the unit's missions and to operate almost continuously throughout its assigned area of operations. After project implementation, project assessments and supporting documents use the same 5-point scale to rate any changes (positive or negative) in the recipient unit's level of capability and performance. DOD's 2016 assessment report and 2017 country-level assessment reports included information on 84 Global Train and Equip projects; of these, 21 projects included both a baseline and a post-implementation assessment of the recipient unit.
We relied on DOD’s assessment reports and did not systematically validate the assessment results because it was beyond the scope of this engagement to assess the reliability of the assessments. However, for the purposes of this analysis, we met with DOD and contracted officials responsible for conducting and reviewing project assessments to gather information about their processes for assessing recipient unit capabilities. In addition, we reviewed DOD’s project assessment guidance and their template for conducting project assessments, which was consistently used in the assessments we reviewed. Finally, to examine DOD reporting on factors affecting the achievement of project objectives, we reviewed the assessment reports and interviewed DOD officials responsible for implementing the program, including officials from DOD’s policy guidance and oversight office and its geographic combatant commands; officials at embassies in the three selected countries; and officials at State’s Bureau of Political-Military Affairs. We also considered the factors that we identified as affecting the achievement of project objectives for our 2016 report that considered 2015 project proposals. On the basis of our review of DOD’s assessments and supporting documents and our interviews with agency officials, we grouped the key factors they identified into four categories: (1) proposal design weaknesses, (2) equipment suitability and procurement issues, (3) partner nation shortfalls, and (4) workforce management challenges. We conducted this performance audit from July 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Allocations for Global Train and Equip Projects in Fiscal Years 2016 and 2017 Table 2 shows the total amount of funding DOD allocated for Global Train and Equip projects in 2016 and 2017 combined. Appendix III: Global Train and Equip Projects and Allocations Included in DOD’s 2012- 2017 Assessment Reports As figure 8 shows, in 2012 through 2017, the Department of Defense (DOD) prepared assessment reports for 31 percent of the projects (82 of 262 projects) it had implemented in 2006 through 2015. These 82 projects account for 28 percent of the nearly $3 billion DOD allocated for the program in those fiscal years. The Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015 required DOD to assess the results of the Global Train and Equip program; however, DOD was not required to assess a specific number or percentage of projects in each fiscal year. Appendix IV: Comments from the Department of Defense Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Drew Lindsey (Assistant Director), Jon Fremont (Analyst-In-Charge), Emily Desai, Reid Lowe, Martin de Alteriis, and Ashley Alley made key contributions to this report. In addition, Chris Keblitis provided technical assistance.
Why GAO Did This Study
The United States has undertaken several efforts, including DOD's Global Train and Equip program, to help foreign partners strengthen their security capacity. Presidential Policy Directive 23 states that agencies should target security assistance where it can be effective and highlights the importance of addressing several planning elements in project proposals. DOD develops proposals, using guidance implementing the directive, and selects projects with the Department of State. The fiscal year 2015 National Defense Authorization Act included a provision for GAO to review the Global Train and Equip program. In this report, GAO examines (1) the status of funding DOD allocated for Global Train and Equip projects in fiscal years 2009 through 2017, (2) the extent to which DOD addressed key security assistance planning elements in project proposals in fiscal years 2016 and 2017, and (3) DOD's reporting on the achievement of Global Train and Equip project objectives and any factors affecting its ability to achieve those objectives. GAO analyzed agency data and program documents and interviewed DOD and State Department officials in Washington, D.C., and at selected combatant commands and embassies.
What GAO Found
The Department of Defense (DOD) obligated $3.7 billion of $4.1 billion allocated for the Global Train and Equip program in fiscal years 2009 through 2017 to build partner nations' capacity to counter terrorism. DOD increased allocations for the program in 2016, responding to an influx of funding from appropriations to the Counterterrorism Partnerships Fund. As of December 2017, DOD had disbursed about $2.5 billion of the obligated funds. Global Train and Equip project proposals for fiscal years 2016 and 2017 consistently addressed only one of four elements of security assistance planning outlined in Presidential Policy Directive 23. GAO found all 72 proposals in those years included the first element, project objectives. From 2016 to 2017, the percentage of proposals addressing the second element—absorptive capacity—rose from 32 percent to 84 percent. Most 2016 and 2017 proposals included the third element, baseline assessments, but less than three-quarters included complete sustainment plans, the fourth element. DOD guidance for 2016 and 2017 did not include instructions for addressing project sustainment when sustainment was not anticipated, though the 2017 guidance included instructions for addressing the other three planning elements. According to DOD officials, they have developed an informal quality review process to better ensure that 2018 project proposals address all four planning elements. However, DOD has not formalized this informal process as written policy. Standards for Internal Control in the Federal Government calls for documenting internal control activities and policies. Formalizing the proposal review process would help DOD provide consistent oversight of project development and ensure access to complete information about each planning element, including sustainment needs. Such information is critical in helping decision makers ensure efficient use of funding to build partners' capacity. DOD reporting for 2016 and 2017 indicates progress in building partner capacity to combat terrorism and conduct stability operations as well as factors affecting the progress achieved.
According to DOD documents, partner nation recipient units' overall capabilities were greater after implementation of 8 of 21 Global Train and Equip projects, and some of the remaining 13 projects produced some positive results. DOD documents and officials also identified factors—such as equipment suitability and procurement issues—that may have limited the achievement of project objectives.
What GAO Recommends
GAO recommends DOD (1) update project proposal guidance to include instructions for documenting sustainment planning and (2) formalize as written policy its informal process for ensuring Global Train and Equip project proposals fully document the four required planning elements. DOD agreed with the recommendations.
Background
The Judgment Fund is a permanent, indefinite appropriation, statutorily created in 1956, available to pay many types of eligible monetary claims that may be judicially or administratively ordered against the U.S. government. The Judgment Fund is also available to pay interest and costs on claims in certain circumstances. Administration of the Judgment Fund has changed substantially since its inception, with varying degrees of control and oversight by Congress, GAO, and Treasury. Originally, the Judgment Fund was limited to paying judgments of less than $100,000, as certified by the Comptroller General and entered by the U.S. Court of Claims (the predecessor to the current U.S. Court of Federal Claims) or a U.S. District Court, as well as authorized interest and costs. In the 1960s, new laws extended the Judgment Fund's availability to awards and compromise settlements. In the next decade, the Supplemental Appropriations Act, 1977, eliminated the Judgment Fund's $100,000 payment ceiling, resulting in no upper limit on the amount that could be paid from the Judgment Fund on any particular claim. The General Accounting Office Act of 1996 transferred certification of payments from the Judgment Fund from GAO to Treasury. Since 1996, Treasury has managed the Judgment Fund, including certifying payments. Treasury established Fiscal Service in October 2012, and delegated key Judgment Fund functions to that bureau. Fiscal Service is responsible for, among other things, providing central payment services to federal agencies. Fiscal Service is the primary disburser of payments to individuals and businesses on behalf of federal agencies, including benefit payments made by the U.S. Social Security Administration and the U.S. Department of Veterans Affairs, federal income tax refund payments, and payments to businesses for goods and services provided to the federal government. Annually, Fiscal Service disburses more than a billion payments, with an associated total dollar value of more than $2.4 trillion. Administering the Judgment Fund is among the services that Fiscal Service provides. A federal agency may request payment of a claim from the Fund on its behalf only in instances where funds are not legally available to pay the claim from the agency's own appropriations or other funding source. Amounts paid from the Fund vary from year to year. Treasury reported that the Fund paid about $3 billion and $4 billion for administrative and litigative claims in fiscal years 2015 and 2016, respectively. Fiscal Service carries out its mission through direct support from its three divisions. The primary focus of the Judgment Fund Branch is to receive and process claims for Judgment Fund payments. As shown in figure 1, the Judgment Fund Branch operates within Fiscal Service's Financial Services and Operations Division. Fiscal Service only certifies payments of claims from the Judgment Fund when the following four tests have been met: (1) claims are final, (2) claims are monetary, (3) one of the authorities specified in the Judgment Fund statute permits payment, and (4) payment is not legally available from any other source of funds (e.g., claims are only paid from the Judgment Fund when payment is not otherwise provided for in a specific appropriation or by another statutory provision). Generally, federal agencies are not required to reimburse the Judgment Fund.
Two exceptions are Judgment Fund payments made pursuant to (1) the Contract Disputes Act of 1978 (CDA) and (2) the Notification and Federal Employee Antidiscrimination and Retaliation Act of 2002 (No FEAR Act). Currently, Treasury produces, and posts on its website, a voluminous spreadsheet—referred to as the Judgment Fund Transparency Report to Congress—when Congress requests it, but is not otherwise required to do so. The spreadsheets are data extracts from JFICS that provide information on the types and amounts of claims and the agencies for which the payments were made. Members of Congress introduced legislative proposals in the recent past related to the Judgment Fund. For example, in the 115th Congress, a bill entitled the Judgment Fund Transparency Act of 2017 (H.R. 1096), as reported (amended) by the Committee on the Judiciary on October 16, 2017, would amend the Judgment Fund statute to require Treasury to post on its website information related to claims on the Judgment Fund. Treasury-Provided Information Was Not Fully Responsive and Not Fully Reconciled In response to the Committee’s request for Schedules of the Judgment Fund Non-Entity Assets, Non-Entity Costs, and Custodial Revenues prepared in accordance with U.S. GAAP and related information, Treasury provided to the Committee nine “exhibits” that contained selected information on Judgment Fund payments and other related information to answer nine questions in the Committee’s request. We reviewed the Treasury-provided information and found that it did not provide the Schedules of Judgment Fund Non-Entity Assets, Non-Entity Costs, and Custodial Revenues for fiscal years 2010 through 2016, prepared in accordance with U.S. GAAP, and appropriate note disclosures or MD&A to the Committee, as requested. In addition, we identified numerous differences between amounts included in the exhibits provided to the Committee and those reported in Treasury’s (1) unaudited transparency reports, (2) audited Schedules, or (3) audited Financial Statements. For example, we identified differences between administrative and litigative payments for fiscal years 2010 through 2016 reported on Exhibits 1 and 2 - Judgment Fund Administrative and Litigative Payments by Defendant Agency and Fiscal Year and those reported in Treasury’s (1) unaudited transparency reports, (2) audited Schedules, and (3) audited Financial Statements, for all years presented (as shown in tables 1, 2, and 3). Further, we identified numerous differences between financial and nonfinancial information in Treasury’s exhibits and comparable information contained only in the transparency reports. For example, the Committee asked Treasury to disclose the amount of Judgment Fund payments for attorneys’ fees pursuant to the Equal Access to Justice Act (EAJA) for fiscal years 2010 through 2016. In response, Treasury provided Exhibit 8 - Amounts Paid from the Judgment Fund for EAJA Claims by Fiscal Year. We compared total payments for each fiscal year reported in Exhibit 8 with those reported in the transparency reports for the same years and identified differences in payments for principal, attorneys’ fees, and costs, as shown in table 4. We provided Treasury the results of our comparisons and requested explanations for the differences we identified, and Treasury provided explanations for some of them. 
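The year-by-year comparison described above is essentially a reconciliation exercise. The sketch below illustrates one way such a reconciliation could be automated; the file and column names (exhibit_payments.csv, transparency_report.csv, fiscal_year, total_paid) are invented for illustration and do not reflect Treasury's actual data layouts.

```python
# Illustrative only: a minimal year-by-year reconciliation of payment totals
# between two hypothetical extracts keyed by fiscal year. File and column
# names are assumptions, not Treasury's actual schemas.
import pandas as pd

exhibit = pd.read_csv("exhibit_payments.csv")          # columns: fiscal_year, total_paid
transparency = pd.read_csv("transparency_report.csv")  # columns: fiscal_year, total_paid

merged = exhibit.merge(
    transparency, on="fiscal_year", suffixes=("_exhibit", "_transparency")
)
merged["difference"] = (
    merged["total_paid_exhibit"] - merged["total_paid_transparency"]
)

# Flag any fiscal year where the two sources disagree.
unreconciled = merged[merged["difference"] != 0]
print(unreconciled[["fiscal_year", "difference"]])
```

In a review like the one described here, each flagged fiscal year would then need an explanation from the preparer, which is the step Treasury completed for only some of the differences.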
Subsequently, Treasury officials informed us that they discovered that the exhibits were created in a faulty manner, and rather than expending resources to reconcile and explain the numerous differences we identified, they indicated that Fiscal Service staff would submit new exhibits to the Committee; however, they did not provide a date by which they would do so. Judgment Fund Branch staff further explained that the Committee’s request was a unique request for information that could not be fulfilled with existing standard reports and queries. To respond to the request, Fiscal Service created ad hoc queries of the JFICS database using different instructions for extracting data for the exhibits than those used for creating the transparency reports. The Judgment Fund Branch relied on these ad hoc queries, primarily from JFICS, to prepare the exhibits answering the nine questions included in the Committee’s request. However, according to Judgment Fund Branch officials, the Judgment Fund Branch does not prepare financial statements, such as the Schedules of Non-Entity Assets, Non-Entity Costs, and Custodial Revenues. Rather, its primary focus is receiving and processing claims for Judgment Fund payments. In addition, these officials told us that they could not confirm whether the Judgment Fund Branch worked with the Fiscal Accounting Branch to respond to the Committee’s request or prepare the exhibits provided to the Committee. Treasury’s policy is to ensure and maximize the quality, objectivity, utility, and integrity of the information that it disseminates to the public. This policy directs Treasury bureaus and departmental offices to develop standards for information quality and ensure that the standards are used when disseminating information. The policy also directs that such information be accurate, clear, complete, and unbiased. In addition, policy guidelines specifically state that in situations where public access to data and methods will not occur, especially rigorous checks to analytic results should be applied and documented. According to Fiscal Service officials, this policy applies strictly to information disseminated to the public, and the related procedures in the policy do not apply to information transmitted to federal entities, including Congress. Fiscal Service officials did not provide evidence of a similar policy or procedures for ensuring the quality of the information disseminated to Congress and other federal entities. Fiscal Service officials also did not provide us with documentation indicating that any checks or reviews were performed on the exhibits—in a manner consistent with Treasury’s written policy and review procedures for disseminating information to the public—before Treasury provided them to the Committee. As a result, the exhibits that Treasury provided to the Committee were not responsive to the Committee’s request and are at increased risk that they may contain unreliable information. Accordingly, the Committee lacks important, reliable information needed to effectively oversee Judgment Fund activities, including considering whether enacting new legislation would benefit the American people by ensuring better management of the Judgment Fund. Treasury Has Documented Procedures and Control Activities for Processing Payments According to Fiscal Service’s documented policies and procedures, payments from the Judgment Fund may be made only upon certification by Fiscal Service. 
An important step in the claims payment certification process is for the Fiscal Service claims analyst and claims reviewer to confirm that an agency’s claim for payment from the Judgment Fund is not otherwise provided for by another source of funds. This confirmation is necessary to make sure that the Judgment Fund is not used for payments that should be paid directly by the involved agency or another funding source. Another important step in the claims payment certification process is to confirm that the claim is final, meaning that the applicable federal officials have fully resolved the claim’s underlying dispute and the only outstanding issue is payment of the claim. Additionally, Fiscal Service calculates the amount of any interest that may be authorized and initiates action under federal debt collection law to offset any known indebtedness to the United States by the claimant. In the actual “certification” step, Fiscal Service does not review or evaluate the merits of the underlying claim. Payments made by the Treasury Judgment Fund on behalf of agencies are initiated upon the receipt of claim requests that agencies submit to Fiscal Service. These requests must be submitted online through JFICS or by sending completed payment request forms to the Judgment Fund Branch via fax or mail. Claims submitted through JFICS must be accompanied by an FS Form 197, Voucher for Payment, page 2, signed by the claimant, and either (1) a settlement agreement or (2) a court order. Claims submitted via fax or mail must contain (1) an FS Form 194, Judgment Fund Transmittal Form; (2) an FS Form 196, Judgment Fund Award Data Sheet; and (3) an FS Form 197, Voucher for Payment, page 1, and a document that authorizes payment. Upon receipt of mailed or faxed forms, Fiscal Service staff manually enter the data from the submitted forms into JFICS. Fiscal Service staff review the forms for completeness and ensure that each FS Form 194 has been signed by the agency authorizing official. Fiscal Service relies on this signature and the presence of a U.S. government email address on the FS Form 194 as its primary controls for ensuring that a mailed or faxed claim has been authorized by the agency. Fiscal Service also relies on this signature to confirm that the claim is appropriate and is eligible to be paid from the Judgment Fund. For claims entered directly in JFICS by an agency, the agency authorizing official must click on “I agree” on the JFICS certification page to affirm that the claim is authorized by the agency and appropriate for payment from the Judgment Fund. (See fig. 2 for a depiction of the Judgment Fund claims process.) Depending on the claim amount, Fiscal Service staff perform a minimum of two levels of review on Judgment Fund claims, whether the claims are received by fax or mail or directly entered into the JFICS system by agencies. First, the claims analyst reviews the claim to ensure that the agency has provided all of the information necessary to process it. Once the claims analyst determines that all of the information has been provided, the claim is forwarded electronically to the claims reviewer. The claims reviewer performs a secondary review to determine if all the information required has been provided, as well as to ensure that the claims analyst entered the mailed or faxed information into JFICS correctly. Claims for less than $1 million do not require further review and are submitted to the Treasury Disbursing Office for payment. Claims for $1 million or more are subject to management review, and claims for $50 million or more are sent to the Fiscal Service Office of Chief Counsel for review.
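To make the routing rules concrete, the sketch below models the four certification tests and the amount-based review tiers described above. The Claim structure and function names are invented for illustration; only the four tests and the $1 million and $50 million thresholds come from this report.

```python
# Illustrative sketch of the review routing described above; the Claim
# structure and function names are hypothetical, but the four certification
# tests and the dollar thresholds come from the report.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    is_final: bool               # underlying dispute fully resolved
    is_monetary: bool            # claim is for money, not other relief
    statute_permits: bool        # a Judgment Fund statutory authority applies
    other_funds_available: bool  # payable from another source instead

def passes_certification_tests(claim: Claim) -> bool:
    """The four tests Fiscal Service applies before certifying payment."""
    return (claim.is_final and claim.is_monetary
            and claim.statute_permits and not claim.other_funds_available)

def required_reviews(claim: Claim) -> list[str]:
    """Every claim gets analyst and reviewer checks; larger claims get more."""
    reviews = ["claims analyst", "claims reviewer"]
    if claim.amount >= 50_000_000:
        reviews += ["management", "Office of Chief Counsel"]
    elif claim.amount >= 1_000_000:
        reviews.append("management")
    return reviews
```

Under these assumptions, required_reviews(Claim(2_500_000, True, True, True, False)) would return the analyst, reviewer, and management steps, mirroring the tiering Fiscal Service describes.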
Conclusions In connection with its oversight efforts, the Committee requested certain information from Treasury about Judgment Fund financial balances, activities, and other information. However, the information that Treasury provided to the Committee in response to this request did not include Judgment Fund Schedules of Non-Entity Assets, Non-Entity Costs, and Custodial Revenues prepared in accordance with U.S. GAAP, including appropriate note disclosures and MD&A, as requested. Further, Treasury officials stated that the exhibits provided to the Committee were created in a faulty manner, resulting in an increased risk that they may contain unreliable information. Although Treasury directs its bureaus and offices to take steps to ensure the quality of information disseminated to the public, Fiscal Service did not take appropriate steps to ensure that the information it provided to the Committee was responsive and complete. Without sufficient financial and other information, the Committee’s ability to effectively oversee Judgment Fund activities, including considering whether enacting new legislation would benefit the American people by ensuring better management of the Judgment Fund, may be hampered. Recommendation for Executive Action We are making the following recommendation to Treasury: The Commissioner of the Bureau of the Fiscal Service should take steps to ensure that information provided to Congress undergoes a documented review to ensure the quality and responsiveness of the information provided. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this report to Treasury for review and comment. In written comments, reproduced in appendix III, Fiscal Service did not concur or nonconcur with our recommendation, but stated that it agreed with our concerns regarding the reliability of information contained in the exhibits provided to the Committee and that a new set of data has been compiled and undergone a documented review to ensure its reliability. We are encouraged by the steps being taken to ensure the reliability of this information, but it is unclear to what extent steps have been, or will be, taken to ensure the quality and responsiveness of other information that may be provided to Congress in the future. We believe that such steps are necessary to help ensure that the Committee has sufficient financial and other information to effectively oversee Judgment Fund activities. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, the Inspector General of the Department of the Treasury, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9816 or rasconap@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology The objectives of our audit were to (1) evaluate the extent to which the information the U.S. 
Department of the Treasury (Treasury) provided to the House Committee on the Judiciary (Committee) responds to the Committee’s May 2017 request for information about Judgment Fund balances and activities and reconciles to financial information included in annual, audited financial reports and other selected reports and (2) describe the Bureau of the Fiscal Service’s (Fiscal Service) documented procedures and related control activities for processing agency requests for payments from the Judgment Fund, including how Fiscal Service ensures that appropriate agency officials approve claims and what reviews are required, if any, to ensure receipt of required documentation. To determine the extent to which the Treasury-prepared information responds to the Committee’s request for information about Judgment Fund balances and activities, we compared the information provided by Treasury to the Committee with the Committee’s request letter to Treasury. For each item requested by the Committee, we reviewed the information provided by Treasury and determined whether it was responsive to the request. To determine the extent to which the Treasury-prepared exhibits reconcile to information included in annual, audited financial statements and other reports, we compared, and identified any differences between, the Treasury-prepared exhibits and certain information included in the following Treasury reports: unaudited Judgment Fund transparency reports to Congress for fiscal years 2010 through 2016; audited Schedules of Non-Entity Assets, Non-Entity Costs, and Custodial Revenues for fiscal years 2010 through 2013; and audited department-wide Financial Statements for fiscal years 2010 through 2016. To determine the reliability of the financial information contained in the unaudited transparency reports, we reviewed relevant documentation, interviewed knowledgeable agency officials, and conducted basic testing of the data. Based on these efforts, we concluded that the data were sufficiently reliable for the purpose of our reporting objective. In addition, we interviewed Fiscal Service staff to obtain (1) explanations for, and to reconcile, differences we identified based on our comparisons and (2) Treasury’s related policies for reviewing information provided to Congress to ensure its quality and responsiveness. Further, because the Treasury Office of Inspector General (OIG) is currently conducting an audit that includes the Treasury Judgment Fund, we communicated with the OIG staff regarding the OIG’s current audit to ensure no duplication in our audit work. To describe Fiscal Service’s documented procedures and related control activities for processing agency requests for payments from the Judgment Fund, we reviewed Treasury’s standard operating procedures and external user manuals for the application Fiscal Service uses to process claims (the Judgment Fund Internet Claims System (JFICS)). We also observed Fiscal Service staff entering and reviewing Judgment Fund claims in JFICS. In addition, we obtained and reviewed selected independent public accountant (IPA) audit documentation related to processing Judgment Fund claims supporting the IPA’s fiscal year 2017 audit of Treasury’s department-wide financial statements. Appendix II: Differences GAO Identified between Treasury-Prepared Exhibits and Other Treasury-Issued Reports The U.S. Department of the Treasury (Treasury) provided the House Committee on the Judiciary (Committee) nine exhibits in response to nine questions included in the Committee’s request. 
Information included in these exhibits and differences we identified based on comparisons of this information with information included in certain Treasury annual audited financial reports and other reports is summarized below. Exhibits 1 and 2 - Judgment Fund Administrative and Litigative Payments by Defendant Agency and Fiscal Year show, by agency and type of payment, the amounts paid from the Judgment Fund on behalf of federal agencies. We compared information in these exhibits with Treasury’s (1) unaudited Judgment Fund transparency reports to Congress for fiscal years 2010 through 2016; (2) audited Schedules of Non-Entity Assets, Non-Entity Costs, and Custodial Revenues (Schedules) for fiscal years 2010 through 2013; and (3) audited department-wide financial statements (Financial Statements) for fiscal years 2010 through 2016 (see tables 5, 6, and 7). Exhibit 3 - Judgment Fund Collections from Federal Agencies by Fiscal Year presents, by Treasury account symbol, recoveries and reimbursements from federal agencies. Exhibit 4 - Judgment Fund Accounts Receivable from Federal Agencies by Fiscal Year presents, by Treasury account symbol, amounts due from federal agencies for payments made on their behalf. We compared information in Exhibit 3 with the Schedules and information in Exhibit 4 with the Schedules and the Financial Statements for all available fiscal years. Information contained in Exhibits 3 and 4 was not payment related (these exhibits covered receipts from agencies and accounts receivable owed by agencies) and therefore could not be traced to the transparency reports. The differences identified based on our comparisons of Exhibit 3 to the Schedules and Exhibit 4 to the Financial Statements are shown in tables 8 and 9, respectively. Exhibit 5 - Judgment Fund Costs Paid by Citation Code and Fiscal Year shows, by fiscal year, amounts paid for each type of citation code. We identified differences in each fiscal year between the total amounts paid as presented in Exhibit 5 and the total amounts contained in the transparency reports (see table 10). Exhibit 6 - Top 25 Attorney Law Firms that Received Payments from the Judgment Fund by Fiscal Year presents, by attorney and law firm, amounts paid for each of the 7 years. Because Treasury has identified this exhibit as containing personally identifiable information protected by the Privacy Act of 1974, we do not present information from Exhibit 6. Exhibit 7 - EAJA Payments to Plaintiffs’ Counsel in Descending Order shows, by attorney and law firm, amounts paid to each related to Equal Access to Justice Act (EAJA) claims. When we compared the exhibit to the transparency reports, we identified differences in the total amounts for all fiscal years (see table 11). Exhibit 8 - Amounts Paid from the Judgment Fund for EAJA Claims by Fiscal Year shows, by cost citation code, amounts paid for principal, attorneys’ fees, costs, and interest for each fiscal year. When we compared Exhibit 8 to the transparency reports, we identified differences in the amounts reported for principal, attorneys’ fees, and costs for most fiscal years (see table 12). Exhibit 9 - Major Recipients of Judgment Fund Payments by Fiscal Year presents amounts paid to major recipients (top 25) of payments from the Judgment Fund. Because Treasury has identified this exhibit as containing personally identifiable information protected by the Privacy Act of 1974, information about Exhibit 9 is not presented. 
Appendix III: Comments from the Department of the Treasury Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Heather I. Keister (Assistant Director), Anthony Clark, Patrick Frey, Lauren S. Fassler, Nadine Ferreira, Valerie Freeman, James Kernen, Ned Malone, Lisa Motley, and Taya R. Tasse made key contributions to this report.
Why GAO Did This Study The Treasury Judgment Fund, managed by Fiscal Service, annually pays billions of dollars of claims on behalf of federal agencies. Transparent and reliable information is important for Congress to provide effective oversight of the Judgment Fund. In May 2017, the Committee requested that Treasury provide (1) Schedules of the Judgment Fund for fiscal years 2010 to 2016 prepared in accordance with U.S. GAAP, including appropriate disclosures to answer nine questions, and (2) information on processes and procedures used when paying claims. GAO was asked to review the information that Treasury provided to the Committee. This report (1) evaluates the extent to which the Treasury-prepared information responds to the Committee's request and reconciles to financial information included in annual, audited financial reports and other reports and (2) describes Fiscal Service's documented procedures and related control activities for processing agency claims. To address these objectives, GAO compared the information provided by Treasury to other Treasury reports, conducted interviews with agency officials, and reviewed documented procedures for processing claims. What GAO Found The Department of the Treasury (Treasury) did not provide the House Committee on the Judiciary (Committee) with the information the Committee requested on the Treasury Judgment Fund. Specifically, Treasury did not provide the Committee the Schedules of the Judgment Fund Non-Entity Assets, Non-Entity Costs, and Custodial Revenues that were prepared in accordance with U.S. generally accepted accounting principles (U.S. GAAP). Treasury also did not include appropriate note disclosures or Management's Discussion and Analysis, as requested by the Committee. Rather, Treasury provided nine exhibits containing selected Judgment Fund information to answer nine questions included in the Committee's request. In addition, GAO identified numerous differences between amounts included in Treasury's exhibits and its annual Judgment Fund transparency reports to Congress and certain audited financial reports. GAO requested explanations for these differences, and Treasury provided explanations for some of them. Subsequently, Treasury officials discovered and explained that the exhibits were created in a faulty manner, resulting in an increased risk that they may contain unreliable information. Treasury officials stated that rather than expending resources to further explain differences and reconcile the exhibits with the other information, Bureau of the Fiscal Service (Fiscal Service) staff planned to submit new exhibits to the Committee; however, they did not provide a date by which they would do so. GAO found that Treasury did not take appropriate steps consistent with its existing guidance for disseminating information to the public, such as performing appropriate reviews of information in the exhibits prior to providing them to the Committee, to ensure the quality and responsiveness of the information provided. The lack of reliable information on the Judgment Fund impairs the Committee's ability to provide effective oversight, including considering whether enacting new legislation would benefit the American people by ensuring better management of the Judgment Fund. 
Fiscal Service has policies and procedures to help ensure that it only certifies payments for awards, judgments, and compromise settlements (claims) from the Judgment Fund that meet the following four tests: (1) claims are final, (2) claims are monetary, (3) one of the authorities specified in the Judgment Fund statute permits payment, and (4) payment is not legally available from any other source of funds (e.g., claims are only paid from the Judgment Fund when payment is not otherwise provided for in a specific appropriation or by another statutory provision). What GAO Recommends GAO recommends that Fiscal Service take steps to ensure that information provided to Congress undergoes a documented review to ensure the quality and responsiveness of the information provided. Fiscal Service did not concur or nonconcur with the recommendation but agreed with GAO concerns regarding the reliability of information provided to the Committee.
Background VHA’s patient advocacy program is intended to provide veterans with a means to give feedback about health care services they receive at VAMCs. VHA sets forth minimum expectations for VAMCs’ administration of the program, including that veterans must have easy access to a patient advocate and must have their complaints addressed in a convenient and timely manner. Administration of the Patient Advocacy Program The patient advocacy program is administered at the VAMC level. Each of VA’s 170 VAMCs is responsible for making at least one patient advocate available to respond to veterans’ feedback, and for ensuring that feedback is recorded in PATS. VAMCs may designate other staff to assist patient advocates in responding to feedback, such as lead patient advocates and service-level advocates. Service-level advocates, such as nurses or administrative staff, are designated at some VAMCs to respond to veterans’ feedback before involving a patient advocate. All VAMC staff who have a designated role in the administration of the patient advocacy program are referred to as patient advocacy program staff. In addition to designating program staff, VAMCs may use a variety of methods to make veterans aware of the patient advocacy program, such as displaying signage on site and including information about the program on their websites. (See app. I for more information on the methods selected VAMCs used to make veterans aware of the program.) Patient advocacy program staff enter veterans’ feedback in PATS using a report of contact (ROC) and assign one or more issue codes that generally describe the nature of the feedback, such as coordination of care. (See app. II for additional information on entering veterans’ feedback into PATS.) Each piece of feedback shared is categorized as either a request for information, a compliment, or a complaint. VHA’s handbook for the program specifies certain goals for data collection and resolution—specifically, that all complaints should be entered in PATS to enable a comprehensive understanding of veterans’ issues and concerns to, in turn, identify potential system-wide improvements; and responses should occur no later than 7 days after the complaint is made. With this guidance, patient advocacy program staff use a variety of approaches for entering veterans’ feedback in PATS and closing it in the system once addressed. For example, when VAMCs have designated service-level advocates, the process for entering and closing feedback in PATS is generally different from the approach used by VAMCs that have only patient advocates. (See fig. 1.) Patient advocacy program staff at each VAMC are assisted by a VISN-level coordinator who acts as a liaison between the VAMCs and VHA and is responsible for ensuring consistency in PATS data collection within the VISN. The VISN director is responsible for designating the coordinator and ensuring that each VAMC within the VISN has at least one patient advocate. Oversight of the Patient Advocacy Program The VHA office responsible for overseeing the patient advocacy program changed as a result of CARA. From January 2011 to July 2017, the program was overseen by OPCC&CT under VHA’s Deputy Under Secretary for Health for Operations & Management. CARA included a provision for VHA to establish OPA to begin overseeing the program and specified that this office would report directly to the Under Secretary for Health, a higher-level office within VHA. 
Although OPCC&CT is no longer responsible for overseeing the program, it is to continue to play an advisory role to OPA during the initial phases of its work, according to OPCC&CT officials. Many of OPA’s oversight responsibilities are specified in CARA, including ensuring that patient advocates advocate on behalf of veterans, managing PATS, and identifying trends in the data to determine whether there are opportunities for improving veterans’ health care. Also, OPA’s director is required to ensure that patient advocates receive relevant, consistent training across VAMCs. When establishing the office in July 2017, VHA officials wrote a memo indicating that OPA’s primary objectives were to implement a standardized policy for the patient advocacy program and to resolve any system-wide issues, such as concerns about care across VAMCs identified through veterans’ feedback. In addition, in August 2017, OPA began soliciting feedback from VAMCs on various aspects of the patient advocacy program to identify improvement priorities and best practices. By September 2017, OPA had identified an acting program director, established a workgroup (called the National Strategic Workgroup) to develop recommendations related to program administration, and finalized a charter that identifies workgroup deliverables. VHA Has Provided Limited, Outdated Guidance to VAMCs on the Governance of the Patient Advocacy Program VHA has provided limited guidance to VAMCs on the governance of the patient advocacy program. Specifically, VHA provided limited guidance on how to meet the program’s expectations that veterans have easy access to a patient advocate who will hear their complaints and address them in a timely manner. While VHA’s handbook for the program provides general information on the responsibilities of patient advocacy program staff, it does not specify the VAMC department to which patient advocates should report to help ensure VAMCs meet these expectations. According to VHA officials, the lack of specific guidance was intentional and due in part to VHA officials’ view that leadership at each VAMC is in the best position to understand the needs of veterans at their facilities, and therefore should have flexibility to make decisions about governance in response to those needs. In addition to providing limited guidance to VAMCs, VHA’s patient advocacy program handbook is out of date and does not incorporate recent agency-wide changes, such as those made in response to VHA Strategic Plan FY 2013–2018, which identifies the goal of providing proactive, patient-driven health care. The handbook for the program was issued in 2005, expired in 2010, and as of January 2018, no updates had been released. In the absence of an updated document, VAMCs are still expected to follow the outdated handbook. However, the handbook does not identify the responsibilities of the current VHA office responsible for overseeing the program. Instead, it identifies the responsibilities of the VHA office that oversaw the program before OPCC&CT began overseeing the program in 2011. In recent years, OPCC&CT reviewed the implementation of the patient advocacy program at some VAMCs and provided specific recommendations on how to change program governance to better reflect a more proactive patient advocacy program model. However, the recommendations from these reviews were provided only to some VAMCs; guidance that could be applicable to all VAMCs was not added to the handbook. 
OPCC&CT officials stated that they did not update the handbook because they decided to instead spend time trying to understand recent feedback they received from VAMC officials and ensure that any updates would reflect system-wide shifts as a result of VHA’s strategic plan. OPCC&CT’s limited and outdated guidance to VAMCs on the governance of the patient advocacy program is inconsistent with federal internal control standards for the control environment, which require agencies to establish an organizational structure, assign responsibility, and delegate authority to achieve agency objectives—key aspects of governance. To do so, an agency may develop an organizational structure that assigns responsibilities to discrete units and defines reporting lines at all levels of the organization. Without providing specific, timely guidance to VAMCs on the governance of the patient advocacy program, the program is at risk of not meeting its minimum expectations. In light of the limited and outdated guidance on the governance of the program, patient advocacy program staff at most of our selected VAMCs noted that the VAMC department to which patient advocates report can have a direct effect on the ability of staff to resolve veterans’ complaints. For example, patient advocates at one VAMC said that because of the program’s position within the organization, they did not have the authority to ensure that VAMC officials external to the patient advocacy program, such as physicians, quickly engaged in responding to veterans’ complaints. In these cases, a patient advocate would contact the physician to resolve a complaint, but may not have received a response until the matter was brought to the attention of the physician’s supervisor—a reporting line that is outside of the patient advocacy program at this VAMC. Officials from several of our selected VAMCs and VSOs noted that the position of the patient advocacy program within VAMCs may not give patient advocates the authority to require VAMC staff to respond to veterans’ complaints. They added that conflict-of-interest concerns could arise when a veteran has a complaint about a VAMC for which the patient advocate works. (See app. III for additional information on the governance of the patient advocacy program at selected VAMCs.) In VA’s written comments on a draft of this report, which are reproduced in Appendix IV, VA stated that it issued its new directive for the patient advocacy program that had been in development as we were conducting our review. While the updated directive specifies that a VAMC’s lead patient advocate should report to the facility director, it does not specify the VAMC department to which other patient advocacy program staff, including patient advocates who are not designated as lead patient advocates and service-level advocates, should report. In addition, OPA’s National Strategic Workgroup recently submitted recommendations to OPA on the governance of the patient advocacy program. OPA officials stated that they plan to prioritize the recommendations and elicit feedback from VISN directors on how to operationalize the recommendations. However, it is unclear whether OPA will provide additional guidance related to the governance of the program based on these recommendations, such as guidance on the VAMC department to which all types of patient advocacy program staff should report. Until actions to address the weaknesses we found are completed, guidance on the governance of the program will continue to be lacking. 
VHA Has Provided Limited Guidance to VAMCs on Staffing the Patient Advocacy Program VHA has provided limited guidance to VAMCs on the number and type of patient advocacy program staff needed to ensure that complaints from veterans are addressed in a convenient and timely manner. According to VHA’s existing handbook for the program, every VAMC should have at least one patient advocate, and appropriate administrative, technical, and clerical support should be provided to allow for efficient performance of the responsibilities of program staff. OPCC&CT did not provide guidance on how VAMCs should determine the appropriate number of administrative, technical, and clerical staff or type of patient advocacy program staff, such as lead patient advocates and service-level advocates. According to officials, this was because no assessment was conducted to identify what staff resources would be needed to meet the expectations of the program. In the absence of such an assessment, OPCC&CT instead relied on each VAMC to determine what resources would be needed based on the facility’s size and services provided. However, VHA’s handbook for the program does not provide instruction for VAMC or VISN officials on how to determine the number and type of staff needed for the program. OPCC&CT officials added that budget constraints can also affect a VAMC’s ability to hire the appropriate staff for the program. (See app. III for additional information on the number and type of patient advocacy program staff at selected VAMCs.) Officials at all but one of the selected VAMCs stated that program staff at their VAMCs had more work to do than they could handle. For example, VAMC officials cited backlogs in work, such as calls from veterans not being answered, messages not being responded to, voicemail boxes being full, and not all veterans’ feedback being entered into PATS. Officials from one VAMC we spoke with in July 2017 stated that due to workload demands and not enough patient advocacy program staff at their VAMC, they had roughly 300 unanswered phone calls at that time from veterans who wanted to provide feedback to a patient advocate. Officials from several VSOs we spoke with stated that there are not enough patient advocate staff, adding that veterans reported that their calls to patient advocates were not answered, they were unable to reach an advocate, or their calls were not responded to in a timely manner. The lack of staffing guidance is inconsistent with GAO’s Key Principles for Effective Strategic Workforce Planning, which states that workforce planning is essential to addressing an organization’s critical need to align its human capital program with its current and emerging mission and programmatic goals. Further, federal internal control standards require agencies to design control activities to achieve objectives, a key aspect of effectively staffing a program. Such control activities may include effectively managing the agency’s workforce, such as by continually assessing the knowledge, skills, and abilities of the workforce to achieve organizational goals. The lack of guidance on staffing may impede VAMCs’ efforts to ensure that they have the appropriate number and type of staff to administer the patient advocacy program. The resulting misalignment of staff resources could have negatively affected VAMCs’ ability to achieve the program’s objectives, including addressing veterans’ complaints in a timely manner. 
For example, if there are not a sufficient number of patient advocates to respond to veterans’ phone calls in a timely manner, VAMCs may not be able to ensure that patient advocates can respond to veterans’ complaints within 7 days, as called for by VHA’s handbook for the program. According to VHA officials, OPA analyzed feedback from VAMCs on the factors that should be considered in developing national guidelines for staffing, such as facility size and complexity level, and directed its National Strategic Workgroup to develop recommendations for determining the extent to which VAMCs have utilized various patient advocacy program staff, such as service-level advocates, by the spring of 2018. However, OPA expects that these efforts will result in recommendations for consideration, and it is unclear what steps, if any, will be taken based on the recommendations. Until actions to address the weaknesses we found are completed, the lack of guidance for VAMCs on determining the appropriate number and types of staff will put the patient advocacy program at risk of being unable to address veterans’ complaints in a convenient and timely manner. VHA Has Recommended Training for Patient Advocates, but Has Not Developed an Approach to Routinely Assess Their Training Needs or Monitored Training Completion VHA Has Developed a Recommended Training List for Patient Advocates, but Has Not Developed an Approach to Assess Their Training Needs on a Routine Basis VHA has recently developed a list of recommended training for patient advocates. In the spring of 2017, OPCC&CT officials updated a recommended training list for patient advocates developed before 2011, when OPCC&CT began overseeing the patient advocacy program. The training list covers a wide variety of topics, including how to enter and examine trends in PATS data, as well as key responsibilities of patient advocates outlined in VHA’s handbook for the program. OPCC&CT officials stated that they would like to make the trainings required, but have not pursued this because of the lengthy process within VHA to designate required training for a specific group of staff. To update the list in 2017, OPCC&CT convened a workgroup (which included several patient advocates) to determine whether the old training list was sufficient, and the workgroup shared its suggested updates with VISN-level coordinators for distribution to VAMCs in April 2017. We found that OPCC&CT has not developed an approach to routinely assess the training needs of patient advocates. Rather, OPCC&CT officials stated that they relied on VAMC and VISN staff to conduct these assessments. However, VHA’s handbook for the program does not specify that VAMC or VISN officials are responsible for conducting routine assessments of patient advocates’ training needs. None of our selected VAMCs routinely conducted assessments of the training needs of patient advocates, such as assessing whether advocates were adequately trained to carry out their responsibilities. Officials from two VAMCs said they used ad hoc approaches to assess training needs. For example, one patient advocate supervisor stated that training is offered on an “as needed” basis in patient advocate meetings when a training need is identified. The lack of an approach for routinely assessing the training needs of patient advocates is inconsistent with federal standards for internal control related to control activities. 
Under these standards relating to human capital, management ensures that training is aimed at developing and retaining employee knowledge, skills, and abilities to meet changing organizational needs. Management should also continually assess the knowledge, skills, and ability needs of a program so that the program is able to obtain a workforce that has the required knowledge, skills, and abilities to achieve organizational goals. Without an approach for routinely assessing the training needs of patient advocates, VHA may not be able to clearly identify gaps in the knowledge and skills of these staff over time, which, in turn, could put the program at risk of not meeting its goals. For example, if there is a gap in understanding among patient advocates that all complaints should be entered into PATS, addressing veterans’ complaints may be delayed, if addressed at all, and opportunities to analyze complaint data for the purpose of identifying system-wide improvements may be missed. According to VHA officials, OPA analyzed feedback from VAMCs on the training needs of patient advocates, including how to correctly enter data into PATS, and directed its National Strategic Workgroup to develop recommendations for assessing the training needs of patient advocates by the spring of 2018. OPA expects that these efforts will result in recommendations for OPA to consider, but it is unclear what steps, if any, will be taken based on the recommendations. Until actions to address the weaknesses we found are completed, the lack of routine assessments of training needs will continue to put the program at risk of staff not having the requisite skills and knowledge to carry out their duties. VHA Has Not Monitored Training Completion for Patient Advocates VHA has not monitored the completion of training for patient advocates. Specifically, OPCC&CT officials said that they did not monitor the extent to which patient advocates completed the recommended training distributed in April 2017. Instead, these officials relied on patient advocate supervisors to monitor training completion. However, VHA’s handbook for the program does not specify that patient advocate supervisors are responsible for monitoring the completion of training for patient advocates. Half of patient advocate supervisors at our selected VAMCs did not track the completion of patient advocacy training. Patient advocate supervisors said that they are able to track the completion of general VA employee training through VA’s Talent Management System. However, most training specific to patient advocacy was generally not included in this system during the period of our review. Officials from our selected VAMCs who did track patient advocacy training used various methods to record completion, such as keeping attendance lists for the training provided. Taking steps to monitor training completion would be consistent with GAO’s Guide for Assessing Strategic Training and Development Efforts in the Federal Government, which identifies components of the training and development process, including having agencies collect and monitor data corresponding to establishing training objectives. Monitoring training completion would also be consistent with federal standards for internal control related to control activities. Under these standards relating to human capital, management ensures that training is aimed at developing and retaining employee knowledge, skills, and abilities to meet changing organizational needs. 
Management also continually assesses the knowledge, skills, and ability needs of a program so that the program is able to obtain a workforce that has the required knowledge, skills, and abilities to achieve organizational goals—key components for monitoring training completion. If patient advocates are not properly trained in how to use PATS to document and resolve complaints, tracking the status of complaints may be more difficult, which could increase the likelihood that they are not addressed in a timely manner, if at all. Further, CARA specifies that the director of OPA should ensure that patient advocates receive training specific to patient advocacy. According to VHA officials, OPA did not obtain information on whether patient advocates completed recommended training and did not identify an approach for monitoring training completion moving forward. Without monitoring training completion, there is an increased risk that patient advocates have not received the training they need to effectively fulfill their responsibilities, such as advocating on behalf of veterans and consistently using PATS to document and resolve complaints. VHA Has Not Monitored Patient Advocacy Data-Entry Practices or Reviewed Patient Advocacy Data to Assess Program Performance and Identify System-Wide Improvements VHA Has Not Monitored Whether Complaints Were Always Entered into PATS and Issue Codes Assigned Consistently VHA officials have not monitored PATS data-entry practices to ensure complaints were always entered into PATS and issue codes were assigned consistently to ROCs. OPCC&CT officials told us they did not monitor the data-entry practices of patient advocacy program staff to ensure that all complaints were entered into PATS, a key goal according to VHA’s handbook for the program. Rather, they relied on VISN and VAMC officials to ensure that program staff entered all complaints into PATS. Officials from two of the five VISNs we interviewed stated that they did not perform any audits or checks of the data entered into PATS by patient advocacy program staff at VAMCs. We also found inconsistencies in the extent to which VAMC officials entered complaints into PATS, with complaints always entered into PATS at one of our selected VAMCs, while at other VAMCs some complaints were left unrecorded, according to officials. For example, at one VAMC, officials stated that over a third of the complaints received were not entered into PATS due to the competing workload demands of patient advocates. Similarly, at another selected VAMC, almost a quarter of the complaints received were not entered into PATS, according to patient advocates there, who explained that they primarily used a document outside of PATS to record veterans’ feedback. In addition, OPCC&CT officials told us they did not monitor whether patient advocates used a consistent practice to assign issue codes to veterans’ feedback recorded into PATS. Using a consistent data-entry practice is important to ensure that PATS data can be compared across VAMCs to better enable an accurate and comprehensive understanding of veterans’ issues and concerns, a goal of the patient advocacy program. OPCC&CT officials stated that they relied on VISN-level coordinators to monitor coding practices because VHA’s handbook for the program states that these coordinators should develop VISN-wide consistent approaches for entering complaints into PATS. 
VISN-level coordinators from two selected VISNs stated that they created a standard practice for assigning issue codes within a particular VISN; however, the coding practices differed between VISNs, making national-level analysis difficult. We also found inconsistencies in how VAMC officials coded specific veterans’ feedback. For example, patient advocates did not use consistent practices to code issues related to the Veterans Choice Program (Choice Program), one of the most common types of issues patient advocates told us they hear about from veterans. Officials from one of our selected VAMCs said they code feedback related to the Choice Program under a specific “request for information” issue code, regardless of whether the feedback was a request for information, compliment, or complaint. In contrast, officials at another VAMC stated that they typically code feedback related to the Choice Program as a complaint related to billing. (See app. II for additional information on data-entry practices at selected VAMCs.) OPCC&CT’s lack of monitoring of PATS data-entry practices is inconsistent with GAO’s Assessing the Reliability of Computer-Processed Data, which identifies the importance of consistent data-entry practices to ensure that data are reasonably complete and accurate. Further, federal standards for internal control related to information and communications require agencies to use quality information, such as relevant data from reliable sources, to achieve the agency’s objectives. Under internal control standards for control activities, management also is to monitor performance to achieve objectives. Without OPCC&CT monitoring data-entry practices, the patient advocacy program is at risk of not meeting its goal that all complaints are entered into PATS and there is an increased likelihood of VHA not having an accurate understanding of veterans’ complaints across VAMCs. Moving forward, in fall 2017, OPA distributed meeting minutes to all VISN and VAMC directors stating that all veterans’ feedback should be consistently recorded in PATS. OPA officials also updated some of the issue codes in PATS in fall 2017 and added a code specifically for community care issues, such as issues related to the Choice Program. In addition, OPA officials stated that they plan to promote the consistent assignment of issue codes to veterans’ feedback through national training, but have not specified when this training will occur or if OPA staff will monitor patient advocates’ consistent assignment of issue codes or of data-entry practices generally. Until these actions are completed, however, the gaps in monitoring of PATS data-entry practices that we identified will continue to exist, putting the program at risk of incomplete or unreliable data that may not allow an accurate understanding of veterans’ complaints, critical to making system-wide improvements. VHA Has Not Systematically Reviewed PATS Data to Assess Program Performance and Identify Potential System-Wide Improvements VHA officials have not systematically reviewed PATS data to assess program performance and identify potential system-wide improvements, goals of the patient advocacy program. Specifically, OPCC&CT officials stated that they reviewed PATS data in response to inquiries, but did not conduct systematic reviews of the data over time. 
For example, they did not track VAMC performance on responding to complaints in a timely manner or track the most common complaints across VAMCs to identify potential opportunities for system-wide improvements. OPCC&CT officials stated that they did not conduct systematic reviews of PATS data because VISN and VAMC officials were primarily responsible for these analyses. However, according to VHA’s handbook for the patient advocacy program, VHA officials have a responsibility to examine PATS data for trends across VAMCs and identify any areas for system-wide improvement. Officials stated that it was challenging to analyze PATS information included in narrative text, such as descriptions of veterans’ feedback. Not reviewing PATS data is inconsistent with federal standards for internal control for monitoring, which require agencies to establish and operate monitoring activities, such as assessing the quality of performance over time, and evaluate the results. Further, not conducting systematic assessments of PATS data made it difficult for OPCC&CT to determine program performance, such as whether the program was meeting its goal that all complaints are entered into PATS and responded to within 7 days. Officials explained that VHA interprets this goal to mean that complaints are closed in PATS within 7 days. According to VA, between FY 2014 and FY 2017 there were more than 53,000 complaints per year open for greater than 7 days. If OPCC&CT officials had conducted systematic reviews of PATS data, they may have been able to identify that there were a significant number of complaints open for longer than 7 days and consider what actions should be taken, such as providing additional guidance to VAMCs on how to address complaints in a timely manner. Furthermore, without systematically reviewing PATS data across VAMCs to identify potential system-wide improvements, OPCC&CT officials may have been unaware of important care issues across VAMCs. For example, patient advocates from several of our selected VAMCs stated that opioid prescription issues are among the most common complaints they received from veterans. If OPCC&CT officials had systematically reviewed PATS data across VAMCs to determine the prevalence of these types of complaints, they could have identified the need to address them on a national level and consider system-wide policies or guidance in response. According to VHA officials, OPA is in the process of identifying the data it needs to review on a routine basis, and directed its National Strategic Workgroup to identify program data that could be reviewed to assess program performance and identify potential system-wide improvements by the spring of 2018. However, OPA expects that these efforts will result in recommendations for OPA to consider, and it is unclear what steps, if any, will be taken based on the recommendations. Until actions to address the weaknesses we found are completed, the lack of a systematic review of PATS data will persist, putting the program at continued risk of missed opportunities for identifying and addressing weaknesses across VAMCs.
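A systematic review of the kind described above could be as simple as a recurring query against a PATS extract. The sketch below computes two of the metrics this report discusses: complaints open longer than 7 days, by VAMC, and the most common issue codes across VAMCs. The extract layout and column names (vamc, category, issue_code, opened, closed) are hypothetical; they are not the actual PATS schema.

```python
# Illustrative sketch of a recurring PATS review; the extract layout and
# column names are assumptions, not the actual PATS schema.
import pandas as pd

rocs = pd.read_csv("pats_extract.csv", parse_dates=["opened", "closed"])
complaints = rocs[rocs["category"] == "complaint"].copy()

# Treat complaints with no close date as still open as of today.
complaints["closed"] = complaints["closed"].fillna(pd.Timestamp.today())
complaints["days_open"] = (complaints["closed"] - complaints["opened"]).dt.days

# VHA's goal: complaints closed within 7 days of being made.
overdue = complaints[complaints["days_open"] > 7]
print("Complaints open longer than 7 days, by VAMC:")
print(overdue.groupby("vamc").size().sort_values(ascending=False))

# Tally issue codes across VAMCs to surface candidate system-wide issues.
print(complaints["issue_code"].value_counts().head(10))
```

Run on a recurring schedule, checks like these would have surfaced the more than 53,000 complaints per year that remained open for longer than 7 days.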
Conclusions As one of the largest health care delivery systems in the nation, it is critically important for VHA to ensure that each veteran who receives health care services has easy access to an advocate who listens to that veteran’s feedback and responds in a timely manner. This is especially important given concerns about veterans’ ability to receive timely and quality care. However, VHA’s efforts to ensure that the patient advocacy program is meeting its goals—to identify potential system-wide improvements and respond to complaints within 7 days—have fallen short. OPCC&CT did not provide sufficient oversight to the program in the four key areas of governance, staffing, training, and data-entry practices, which has left the program at risk for not meeting its goals. VHA’s newly established OPA has initiated plans to improve the patient advocacy program in these four areas; however, most of these plans center around a workgroup that will make recommendations for OPA to consider, and it is unclear what specific actions, if any, will be taken based on these recommendations. Further, documentation for several of OPA’s planned efforts has not been finalized. Unless specific actions to address the weaknesses we identified are completed expeditiously, the program is at risk of not meeting its goals, including addressing veterans’ complaints in a convenient and timely manner. Furthermore, without addressing the weaknesses we identified, OPA misses opportunities to review PATS data across VAMCs to identify potential system-wide issues that, if addressed, could significantly improve the experience of veterans. Such reviews are critical to ensuring that VHA is taking steps to both meet its goal in its strategic plan to provide veterans with timely and quality health care, and to address recent issues it has faced, such as veterans’ ability to access care in a timely manner. Recommendations for Executive Action We are making the following six recommendations to the VHA Under Secretary for Health: provide updated guidance to VAMCs on the governance of the patient advocacy program, including clear definitions of reporting lines. (Recommendation 1) assess and provide guidance to VAMCs on appropriately staffing the patient advocacy program, including guidance on how to determine the appropriate number and type of staff. (Recommendation 2) develop an approach to routinely assess the training needs of patient advocates. (Recommendation 3) monitor the completion of training for patient advocates. (Recommendation 4) monitor PATS data-entry practices to ensure all complaints are entered into PATS and that veterans’ feedback is coded consistently. (Recommendation 5) systematically review PATS data to assess program performance and identify potential system-wide improvements. (Recommendation 6) Agency Comments and Our Evaluation We provided a draft of this report to VA for comment. In its written comments, which are reproduced in Appendix IV, VA concurred with our recommendations and noted that it recently issued the new directive for patient advocacy that had been in development as we were conducting our review. The directive supersedes the outdated handbook for the patient advocacy program and describes certain aspects of program governance, including certain reporting lines, roles, and responsibilities. Accordingly, VA requested that we close our first recommendation related to governance. We revised our report to reflect the issuance of the new directive. However, we do not believe the directive fully implements our recommendation. While the updated directive specifies that a VAMC’s lead patient advocate should report to the facility director, it does not specify the VAMC department to which other patient advocacy program staff, including patient advocates who are not designated as lead patient advocates and service-level advocates, should report. 
Until VA specifies the reporting lines for these other patient advocacy program staff, our recommendation will remain open. In addition, VA stated in its written comments that OPA has efforts underway related to staffing, training, and PATS data entry and assessment and provided estimated completion dates for these efforts. We will monitor VA's efforts to address our recommendations. VA did not provide technical comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, the Under Secretary for Health, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in Appendix V.

Appendix I: Awareness of the Patient Advocacy Program

Our eight selected Department of Veterans Affairs (VA) medical centers (VAMC) use a variety of methods to make veterans or their representatives aware of the patient advocacy program, including providing brochures on the program, displaying signage, and providing program information on the VAMC's website. (See fig. 2 for examples of patient advocacy program signage at some of the VAMCs we visited.)

Appendix II: Patient Advocate Tracking System (PATS) Data Entry and Management

Patient advocacy program staff, such as patient advocates or service-level advocates who are designated to respond to veterans' feedback, enter feedback from veterans or their representatives in the Veterans Health Administration (VHA) Patient Advocate Tracking System (PATS) by creating a report of contact (ROC). Each ROC includes basic information regarding the individuals involved, a description of the feedback provided by the veteran, and a description of the steps taken to resolve the issue. Patient advocacy program staff assign one or more issue codes that generally describe the nature of the feedback, such as "coordination of care." (See figures 3 and 4.) To organize veterans' feedback, VHA categorizes it as a request for information, a compliment, or a complaint. Within each of these categories, VHA defines specific issue codes for program staff to select from based on the description of the veteran's feedback. (See table 2.) The Comprehensive Addiction and Recovery Act of 2016 (CARA) includes a provision for every VAMC to display the purpose of the program, along with the contact information of a patient advocate at the facility, in as many prominent locations as deemed appropriate to be seen by the largest percentage of veterans. In September 2016, VHA Central Office sent a memo to Veterans Integrated Service Network (VISN) directors explaining this requirement, and an Office of Patient Centered Care and Cultural Transformation (OPCC&CT) official obtained confirmation from all VHA facilities that this requirement was met in October 2016. Nevertheless, officials from two veterans service organizations (VSO) we interviewed stated they often encounter veterans who are not aware of the patient advocacy program. According to VA, in fiscal year (FY) 2017, there were 268,114 veterans associated with ROCs entered in PATS. VA also reported that, in the same year, patient advocacy program staff entered 414,256 unique reports of contact in PATS.
According to VA, from the unique reports of contact in PATS, program staff documented 473,564 issues, which included 112,722 requests for information, 35,839 compliments, and 325,003 complaints. See table 3 for the top five issues that patient advocacy program staff across VAMCs entered in PATS for FY 2017. According to VA, in FY 2017, a total of 1,391 program staff system-wide entered data in PATS. In the same year, according to PATS, veterans, rather than family members or friends, most often provided feedback to patient advocacy program staff. Our eight selected VAMCs varied in the number of patient advocates and service-level advocates who had access to PATS, whether veterans' feedback was recorded outside of PATS, and which issue code or codes were used to record feedback related to the Veterans Choice Program. (See table 4.) Examples of methods that patient advocates and service-level advocates used at selected VAMCs to record veterans' feedback outside of PATS included call logs and tracking spreadsheets. VAMC officials indicated that recording information outside of PATS helped them track their responses to veterans' feedback. Some of the information recorded outside of PATS was additional information that is not required to be entered into PATS, such as requests for information.

Appendix III: Approaches to the Governance and Staffing of the Patient Advocacy Program

The eight Department of Veterans Affairs (VA) medical centers (VAMC) selected for our review used a variety of approaches to govern the patient advocacy program, resulting in differences in the number of positions for patient advocates and service-level advocates and the title of the positions. Service-level advocates, such as nurses or administrative staff, are designated at some VAMCs to respond to veterans' feedback before involving a patient advocate. (See table 5.) Patient advocates reported to a variety of departments among our selected VAMCs. At two of the VAMCs, patient advocates reported to the customer or consumer relations department, while at three, patient advocates reported to the quality management department. In addition, the placement within the VAMC of the department that patient advocates reported to differed. For example, the patient advocate supervisor at one of the selected VAMCs said that patient advocates reported to the customer service manager, who did not report directly to the VAMC director. At another VAMC, the patient advocate reported directly to the VAMC director. In addition to the Veterans Health Administration (VHA) handbook for the patient advocacy program, all eight of our selected VAMCs developed their own policies for the administration of the program, and these policies varied. For example, while almost all of the policies specified the responsibilities of the service chiefs (officials who oversee the administration and operation of service lines, such as primary care) with respect to the patient advocacy program, these responsibilities varied: two of the policies required service chiefs to incorporate veterans' feedback into performance measures used for VAMC staff external to the patient advocacy program, such as physicians, while the other policies did not. We also found variation between our selected VAMCs with respect to whether they had written descriptions of the service-level advocates' roles. Of the six VAMCs that designated service-level advocates, three had written descriptions of their roles, while three did not.
Further, among the VAMCs that had a written description of the role of a service-level advocate, the expectations for these advocates varied. For example, one VAMC's written description specified that service-level advocates are expected to enter veterans' feedback into PATS within 7 days of receiving the feedback. The written descriptions at the other two VAMCs did not specify this expectation.

Appendix IV: Comments from the Department of Veterans Affairs

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Hernán Bozzolo (Assistant Director), Rebecca Rust Williamson (Analyst-in-Charge), Jennie F. Apter, Q. Akbar Husain, and Emily Loriso made key contributions to this report. Also contributing were Julie Flowers, Jacquelyn Hamilton, and Vikki Porter.
Why GAO Did This Study

VHA has designated patient advocates at each VAMC to receive and document feedback from veterans or their representatives, including requests for information, compliments, and complaints. In recent years, the importance of a strong patient advocacy program has taken on new significance given concerns with VHA's ability to provide veterans timely access to health care, among other issues. The Comprehensive Addiction and Recovery Act of 2016 included a provision for GAO to review VHA's patient advocacy program. This report examines the extent to which VHA has (1) provided guidance on the governance of the program; (2) provided guidance on staffing the program; (3) assessed the training needs of patient advocates and monitored training completion; and (4) monitored patient advocacy program data-entry practices and reviewed program data. GAO reviewed VHA and VAMC documents, including summaries of program data. GAO interviewed VHA officials about the program, as well as officials from a non-generalizable selection of eight VAMCs and five VISNs selected based on the volume of veteran complaints and other factors. GAO also compared VHA policies and practices to federal internal control standards.

What GAO Found

The Veterans Health Administration (VHA) provided limited guidance to Department of Veterans Affairs (VA) medical centers (VAMC) on the governance of its patient advocacy program, and its guidance, a program handbook, has been outdated since 2010. VAMCs are still expected to follow the outdated handbook, which does not provide needed details on governance, such as specifying the VAMC department to which patient advocates should report. Officials from most of the VAMCs that GAO reviewed noted that the VAMC department to which patient advocates report can have a direct effect on the ability of staff to resolve veterans' complaints. The lack of updated and complete guidance may impede the patient advocacy program from meeting its expectation to receive and address complaints from veterans in a convenient and timely manner. VHA also has provided limited guidance to VAMCs on staffing the patient advocacy program. VHA's handbook states that every VAMC should have at least one patient advocate and appropriate support staff; however, it did not provide guidance on how to determine the number and type of staff needed. Officials at all but one of the eight VAMCs in GAO's review stated that their patient advocacy program staff had more work to do than they could accomplish. This limited guidance on staffing could impede VAMCs' efforts to ensure that they have the appropriate number and type of staff to address veterans' complaints in a timely manner. Further, VHA has recommended training for patient advocates, but it has not developed an approach to routinely assess their training needs or monitored training completion. VHA officials stated that they relied on VAMC and Veterans Integrated Service Network (VISN) staff to conduct these activities. However, GAO found that for the eight VAMCs in its review, the training needs of patient advocates were not routinely assessed, and training completion was not always monitored. Without conducting these activities, VHA increases its risk that staff may not be adequately trained to advocate on behalf of veterans. Finally, VHA has not monitored patient advocacy program data-entry practices or reviewed the data to assess program performance.
VHA officials stated that they relied on VISN and VAMC officials to ensure that all complaints were consistently entered into VHA's Patient Advocate Tracking System (PATS). However, GAO identified inconsistencies in the extent to which VAMC officials did so. VHA's lack of monitoring may pose a risk that not all complaints are entered into this tracking system—a goal of the program. Additionally, VHA officials stated they did not systematically review data in the system to assess program performance and identify potential system-wide improvements because VHA considered this the responsibility of VAMCs. As a result, VHA officials may miss opportunities to improve veterans' experiences. VHA is beginning to address many of these governance, staffing, training, and data issues, including directing a workgroup to provide recommendations by spring of 2018. However, because the recommendations will be advisory, and because program deadlines have slipped in the past, the nature and timing of the actions needed to resolve these issues remain unclear.

What GAO Recommends

GAO is making six recommendations to improve guidance for and oversight of the patient advocacy program, focusing on governance, staffing, training, and PATS data entry and assessment. VA concurred with GAO's recommendations.
Background

The 2017 Hurricanes and California Wildfires

In 2017, three major hurricanes made landfall in the United States and historic wildfires struck California. According to FEMA, the 2017 hurricanes and wildfires collectively affected 47 million people—nearly 15 percent of the nation's population. See figure 1 for a timeline of these major disasters.

Overview of Federal Disaster Response and Recovery

When disasters hit, state and local entities are typically responsible for disaster response efforts. The Robert T. Stafford Disaster Relief and Emergency Assistance Act established a process by which a state may request a presidential disaster declaration to obtain federal assistance. According to the DHS National Response Framework—a guide to how the federal government, states and localities, and other public and private sector institutions should respond to disasters and emergencies—the Secretary of Homeland Security is responsible for ensuring that federal preparedness actions are coordinated to prevent gaps in the federal government's efforts to respond to all major disasters, among other emergencies. The framework also designates FEMA to lead the coordination of federal disaster response efforts across federal agencies. The National Response Framework identifies 14 emergency support functions that serve as the federal government's primary coordinating structure for building, sustaining, and delivering disaster response efforts across more than 30 federal agencies. Each function addresses a specific need—such as communication, transportation, and energy—and designates a federal department or agency as the coordinating agency. For example, the emergency support function for public works and engineering assists DHS by coordinating engineering and construction services, such as temporary roofing or power, and USACE is the primary agency responsible for these functions during disaster response activities. FEMA coordinates disaster response efforts through mission assignments—work orders that FEMA issues to direct other federal agencies to utilize the authorities and resources granted to them under federal law. Mission assignments are authorized by the Robert T. Stafford Disaster Relief and Emergency Assistance Act and can consist of federal operations support or direct federal assistance, which includes federal contracts. FEMA's contracting efforts are supported by its Office of the Chief Procurement Officer and its contracting workforce. While the majority of FEMA's contracting workforce is located in headquarters, contracting officers are also located in each of FEMA's 10 regional offices. See appendix II for the location of FEMA's 10 regional offices as well as the states each one is responsible for coordinating with to address National Response Framework responsibilities.

PKEMRA Requirements and the Use of Advance Contracts

Congress enacted PKEMRA in 2006 to address various shortcomings identified in the preparation for and response to Hurricane Katrina, which hit the Gulf Coast in 2005 and was one of the largest, most destructive natural disasters in U.S. history. Among the provisions included were requirements for FEMA to identify and establish advance contracts to ensure that goods and services are in place to help FEMA rapidly mobilize resources in immediate response to disasters. Examples of these goods and services are:

Goods: construction supplies and tarps; food and water; cleaning and hygiene supplies; and power equipment and generators.
Services: engineering; information technology and communication support; transportation of goods; and housing and lodging assistance.

As of June 2018, FEMA reported having advance contracts in place for 56 different types of goods and services. Among other contracting requirements, PKEMRA requires FEMA to develop a contracting strategy that maximizes the use of advance contracts to the extent practical and cost-effective; coordinate advance contracts with state and local governments; encourage state and local governments to engage in similar pre-planning and contracting; and submit quarterly reports to the appropriate committees of Congress on each disaster contract entered into by the agency using non-competitive procedures. According to FEMA's advance contracting strategy, the agency will maximize the use of advance contracts to the extent they are practical and cost-effective, which will help preclude the need to procure goods and services under unusual and compelling urgency. When disasters strike, contracting officers may use the unusual and compelling urgency exception to full and open competition to support non-competitive contract awards. FEMA's strategy also states that advance contracts will help to ensure that goods and services are in place to help FEMA rapidly mobilize resources in immediate response to disasters. USACE also has its own advance contracts in place as a preparedness measure. According to USACE officials, they established advance contract initiatives in 2003, two years prior to Hurricane Katrina, to help facilitate their emergency support function under the National Response Framework—public works and engineering. As of September 2018, USACE reported having advance contracts in place for three services—debris removal, temporary roofing, and temporary power. Appendix III provides details on specific advance contracts established by FEMA and USACE. According to FEMA documentation, most of its advance contracts are indefinite delivery contracts, which can facilitate the goal of having contracts available if there is a disaster. One type of indefinite delivery contract—an indefinite delivery, indefinite quantity contract—can be awarded to single or multiple vendors and provides for an indefinite quantity, within stated limits, of supplies or services during a fixed period. Under these contracts, the government places orders for individual requirements. These contracts also require the government to order and the contractor to provide at least a stated minimum quantity of supplies and services. Additionally, the contracting officer should establish a reasonable maximum quantity for the contract based on market research, trends in similar recent contracts, or any other rational basis. Minimum and maximum quantity limits can be stated as the number of units or as dollar values, and may also be referred to by contracting officers as minimum guarantees or contract ceilings, respectively. As part of its overall acquisition strategy, FEMA officials identified other vehicles aside from FEMA's own advance contracts through which the agency obtains goods and services.

DHS strategic sourcing vehicles: When a disaster occurs, FEMA contracting officers are first required to use any available DHS strategic sourcing vehicles—a broader, aggregate approach for procuring goods and services—with limited exceptions.
Blanket purchase agreements: FEMA also relies on blanket purchase agreements, such as those established through the General Services Administration Federal Supply Schedule program, to provide some commercial goods and services needed for disaster response.

Interagency agreements: FEMA may also leverage interagency agreements, by which it obtains needed supplies or services from another agency through an assisted or direct acquisition.

FEMA and other agencies may also award new contracts to support disaster response efforts following a disaster declaration. According to FEMA officials, these post-disaster contract awards may be required, for example, if advance contracts reach their ceilings, or if goods and services that are not suitable for advance contracts are needed.

FAR Requirements

The FAR requires agencies to perform acquisition planning activities for all acquisitions to ensure that the government meets its needs in the most effective, economical, and timely manner possible. Generally, program and contracting officials share responsibility for the majority of acquisition planning activities, which include the following:

Pre-solicitation: The program office identifies a need, and develops key acquisition documents to summarize that need, such as market research, a statement of work defining requirements, cost estimates, and a written acquisition plan. The pre-solicitation process ends when the program office submits these documents, typically referred to as an acquisition package, to the contracting officer to determine what type of contract is appropriate to fulfill the requirements.

Solicitation: The contracting officer develops a solicitation, in consultation with other agency stakeholders, to request bids or proposals from contractors. The acquisition planning process ends once a solicitation is issued.

Contracting for disaster relief and recovery efforts can also present unique circumstances in which to solicit, award, and administer contracts. Under the FAR, agencies are generally required to use full and open competition when soliciting offers and awarding contracts. However, an agency may award contracts noncompetitively when the need for goods or services is of such unusual and compelling urgency that the federal government faces the risk of serious financial or other type of injury. When it becomes evident that a base contract period and any option periods will expire before a subsequent contract to meet the same need can be awarded, contracting officers may, for example, extend the existing contract, or award a short-term stand-alone contract to the incumbent contractor on a non-competitive basis to avoid a lapse in services, along with sufficient justification and approval. These extensions and new sole-source contracts are informally referred to as bridge contracts by some in the acquisition community, and we use that terminology in this report. In October 2015, we established the following definitions related to bridge contracts:

Bridge contract: An extension to an existing contract beyond the period of performance (including base and option years), or a new, short-term contract awarded on a sole-source basis to an incumbent contractor to avoid a lapse in service caused by a delay in awarding a follow-on contract.

Predecessor contract: The contract that was in place prior to the award of a bridge contract.

Follow-on contract: A longer-term contract that follows a bridge contract for the same or similar services.
This contract can be competitively awarded or awarded on a sole-source basis.

Contracts, orders, and extensions (both competitive and non-competitive) are included in our definition of a "bridge contract" because the focus of the definition is on the intent of the contract, order, or extension. However, the FAR does not formally define bridge contracts or require that they be tracked. We recommended that the Office of Federal Procurement Policy amend the FAR to incorporate a definition of bridge contracts. The Office of Federal Procurement Policy agreed with our recommendation to provide guidance to agencies on bridge contracts and has taken steps to develop that guidance, but has not yet implemented our recommendations. If a contracting officer opts to extend the existing contract in place—often referred to as a predecessor contract—the contracting officer may use a number of different mechanisms to do this. One of these is the "option to extend services" clause. If the contract includes this clause, the contracting officer may use it to extend the contract for up to 6 months. While this option may be exercised more than once, the total extension of performance shall not exceed 6 months.

FEMA and USACE Relied on Advance Contracts to Respond to the 2017 Disasters, but FEMA Lacks an Updated Advance Contracting Strategy and Guidance

FEMA and USACE obligations on advance contracts—as of May 31, 2018—accounted for about half of total federal contract obligations for the three hurricanes, and more than three-quarters of the contract obligations identified by those agencies for the California wildfires. However, an outdated strategy and lack of guidance to contracting officers resulted in confusion about whether and how to prioritize and use advance contracts to quickly mobilize resources in response to the three 2017 hurricanes and the California wildfires.

Advance Contracts Accounted for about Half of Government-wide Contract Obligations for the 2017 Hurricanes, and over Three-Quarters of FEMA and USACE's Obligations for the California Wildfires

Government-wide contract obligations for the three hurricanes were about $8.2 billion as of May 31, 2018. FEMA and USACE obligated 46 percent, or about $3.8 billion, of the $8.2 billion spent government-wide on the three hurricanes through advance contracts. Data on government-wide contract obligations for the California wildfires could not be identified because national interest action codes were not established for them in FPDS-NG. However, FEMA and USACE provided information on their contracting activities related to the wildfires. Their use of advance contracts accounted for 86 percent, or about $667 million, of the contract obligations they identified. FEMA and USACE advance contract obligations for the three hurricanes and California wildfires totaled about $4.5 billion, about 56 percent of the total contract obligations made by these agencies for these disasters. See figure 2 for details on FEMA and USACE's advance and post-disaster contract obligations by event. The greatest proportion of FEMA and USACE's obligations on advance contracts supported Hurricane Maria disaster relief efforts—41 percent and 59 percent, respectively. About 39 percent of USACE's obligations on advance contracts were used in support of the California wildfires, compared to less than 1 percent of FEMA's obligations.
FEMA awarded orders against 72 base advance contracts in response to the three 2017 hurricanes and California wildfires, and USACE awarded orders against 15 of its advance contracts. See figure 3 for FEMA and USACE's obligations on advance contracts by event.

Advance Contracts Were Used Primarily for Services

FEMA and USACE procured a variety of goods and services through advance contracts in response to the three hurricanes and wildfires, but about 86 percent of obligations, or $3.8 billion, were used to procure services. For example, all of USACE's $1.7 billion in advance contract obligations were for services, such as debris removal. FEMA obligated about $2.2 billion on services, such as architect and engineering services to rebuild roads and bridges. FEMA's obligations on goods totaled $624 million and included prefabricated buildings, such as manufactured housing units to provide lodging, and food and water. See figure 4 for examples of obligations on goods or services by event.

FEMA Lacks an Updated Strategy and Guidance on the Use of Advance Contracts

FEMA lacks an updated strategy and guidance on advance contract use, despite the PKEMRA requirement to develop a contracting strategy that maximizes their use to the extent practical and cost-effective. As we found in May 2006 following Hurricane Katrina, and reiterated in our September 2015 report, agencies need to have competitively awarded contracts in place before a disaster to be effective in their response. Our current review found that FEMA has established advance contracts for goods and services to enable it to respond following a disaster. However, FEMA's lack of an updated strategy and guidance on advance contract use resulted in confusion about whether and how to maximize their use to the extent cost-effective and practical to facilitate a faster response when providing goods and services to survivors. PKEMRA required the FEMA Administrator to identify specific goods and services that the agency could contract for in advance of a natural disaster in a cost-effective manner. PKEMRA also required the FEMA Administrator to develop a contracting strategy that maximizes the use of advance contracts to the extent practical and cost-effective. Following the enactment of PKEMRA, in 2007 FEMA issued the Advance Contracting of Goods and Services Report to Congress, in part to address the requirement for an advance contracting strategy. In addition to the strategy, FEMA provides information on advance contracts in its Disaster Contracting Officer Desk Guide. The 2007 strategy notes that advance contracts will help to preclude the need to procure goods and services for disaster response under the unusual and compelling urgency exception to full and open competition, and allow FEMA to rapidly mobilize resources in immediate response to disasters. Several contracting officials we spoke with said that it is a requirement to use advance contracts before awarding new contracts. Moreover, a senior FEMA contracting official told us that advance contracts are intended to be used before awarding post-disaster contracts, even if the advance contract is not capable of fulfilling all of the requirements for a needed good or service. However, our review of the strategy found that it does not provide any specific direction on how contracting officers should award or use advance contracts to meet PKEMRA's objectives, or how they should be prioritized in relation to post-disaster contracts.
Further, there is no mention in FEMA's 2017 Disaster Contracting Officer Desk Guide that advance contracts should be considered prior to the award of post-disaster contracts. In September 2015, we found shortfalls with the information available to contracting officers about advance contracts and recommended that FEMA provide new or updated guidance with information on how advance contracts should be used. FEMA agreed with this recommendation and stated that in 2015 it included information on advance contracts and their use in training documentation. However, our review of semi-annual training documentation provided in May 2018 found that it only lists some of the advance contracts that are available, and does not provide guidance on their use. A report by the Senate Committee on Homeland Security and Governmental Affairs identified concerns about FEMA's use of advance contracts for self-help tarps in response to the 2017 hurricanes. Specifically, the report found that while FEMA ordered some tarps through one of its existing advance contracts, that order was placed after a post-disaster contract for tarps was signed, raising questions about whether FEMA's actions were informed by an overall strategy for using its advance contracts, in this case, for tarps. Our current review identified similar concerns, and found that the lack of an updated strategy and guidance on the use of advance contracts contributed to challenges in using these contracts to respond to the 2017 disasters. In our review of advance contracts for meals and tarps, we found the following:

Meals: Prior to the 2017 disasters, FEMA had advance contracts in place to provide meals with specific nutritional requirements. According to FEMA contracting officials, the advance contract vendors were at capacity for these specific meals following the response to Hurricane Harvey, requiring FEMA to issue a new post-disaster competitive solicitation and award new contracts with less specific nutritional requirements following Hurricane Maria. Based on our review of contract documentation, two of the existing advance contract vendors were awarded these new post-disaster contracts, but at different prices than those negotiated through their advance contracts. FEMA officials told us that contracting officers will negotiate to ensure the price of the contract is fair and reasonable and may utilize historical information or current contract prices to inform this determination. Normally, adequate price competition establishes a fair and reasonable price. According to a contracting officer involved with the award, FEMA relied on competition and historical prices, but not the existing advance contract prices, to determine that the new post-disaster contract meal prices were fair and reasonable. Guidance on the extent to which advance contract prices should be considered when comparing proposed prices to historical prices paid could help to further inform contracting officers' decision-making during a disaster.

Tarps: Our review of FEMA's use of contracts for tarps is another example of how FEMA lacked an updated advance contracting strategy and guidance to provide goods and services to facilitate a faster response to the 2017 disasters. For example, in September 2014, FEMA awarded multiple-award indefinite delivery, indefinite quantity advance contracts to three small businesses for self-help tarps, which are used to cover small areas of roof damage.
In November 2014, these contracts were modified by the contracting officer to include delivery requirements for providing tarps to replenish FEMA's stock during steady-state operations or during emergency response operations, such as a natural disaster. The contract modification added that during an emergency response, vendors would be expected to deliver up to 150,000 tarps within 96 hours of being issued a task order. However, these small businesses were not required to meet the emergency response delivery timeframes and amounts since they would not be expected to store tarps on FEMA's behalf, limiting the use of FEMA's advance tarp contracts for immediate disaster response needs. According to a contracting officer involved with these contracts, the tarp advance contracts are typically used only to replenish tarp stockpiles in FEMA's distribution centers. However, the contracting officer also noted that not being able to fully use the existing advance contracts for tarps to respond to the three 2017 hurricanes was a challenge and required FEMA to award post-disaster contracts to meet tarp requirements. Furthermore, we found that FEMA awarded post-disaster contracts for tarps before utilizing its advance contracts with the small businesses. Contract file documentation for the post-disaster contracts stated that FEMA's advance contract holders for tarps had reached their capacity, and that market research had confirmed that it would be difficult for small businesses to meet the urgent delivery timeframes for tarps. Yet, after the award of the post-disaster tarp contracts, FEMA awarded task orders to one of the advance contractors to provide tarps in response to Hurricane Maria. Another small business advance contractor, which according to FEMA's post-disaster contract documentation had reached its capacity, also submitted a proposal as part of the post-disaster contract solicitation. According to FEMA, neither of the post-disaster contract holders ultimately provided the required tarps. The timing and use of the existing tarp advance contracts raises questions about their ability to provide tarps immediately following a disaster, and whether an updated advance contracting strategy would have enabled FEMA to more quickly provide the needed tarps to survivors, considering the additional time and staff resources needed to award new post-disaster contracts.

FEMA established advance contracts to provide critical goods, like meals and tarps, following a disaster; however, FEMA's 2007 contracting strategy does not provide direction on the objectives of advance contracts or how to maximize their use to the extent practical and cost-effective, as required by PKEMRA. According to FEMA officials, they had not considered updating the 2007 advance contracting strategy because they believed the use of advance contracts following PKEMRA had been incorporated into their disaster contracting practices. FEMA has also not communicated specific guidance to program and contracting officials on whether and how advance contracts should be prioritized before issuing new post-disaster solicitations and awarding contracts for the same or similar requirements, or how to maximize their use to the extent practical and cost-effective following a disaster, as required by PKEMRA. FEMA officials also acknowledged that additional guidance regarding advance contracts, including their availability and use during a disaster, could be useful.
Without an updated strategy—and clear guidance that is incorporated into training—on the use of advance contracts and how they should be prioritized and used in relation to new post-disaster contract awards, FEMA lacks reasonable assurance that it is maximizing the use of advance contracts to quickly and cost-effectively provide goods and services following a disaster. This places FEMA at risk of continued challenges in quickly responding to subsequent disasters.

Improvements Needed in FEMA's Planning, Management, and Reporting of Advance Contracts

While FEMA used a variety of advance contracts to respond to the 2017 disasters, we found weaknesses in the process of awarding and overseeing selected advance contracts in our review. These weaknesses were: (1) challenges in FEMA's acquisition planning; (2) limited record keeping or management of certain FEMA contracts; and (3) incomplete reporting on FEMA's advance contract actions to certain congressional committees. Related to USACE, we did not identify any planning or management challenges based on our review of its four selected contracts, and USACE is not required to report on its advance contract actions to the congressional committees.

Challenges in FEMA's Acquisition Planning Resulted in Bridge Contracts

FEMA has taken some steps since 2016 to improve competition and develop processes and guidance on the acquisition process for advance contracts, but shortfalls in acquisition planning have resulted in a number of bridge contracts. Bridge contracts can be a useful tool in certain circumstances to avoid a gap in providing products and services. We have previously reported that when non-competitive bridge contracts are used frequently or for prolonged periods, the government is at risk of paying more than it should for products and services. Based on our analysis, 63 of FEMA's 72 advance contracts used in response to the 2017 disasters were initially competed. All 15 of USACE's advance contracts used in responding to the three hurricanes and California wildfires in 2017 were initially competed. We found that at least 10 of FEMA's advance contracts used in 2017 were bridge contracts. Of the 10 FEMA advance contracts we identified as bridge contracts, 6 were part of our selected case studies. The six advance contracts with subsequent bridges in our review obligated roughly $778 million in response to the three hurricanes and California wildfires in 2017. These bridge contracts included five that are associated with two of FEMA's largest programs used in 2017—the Individual Assistance Program and Public Assistance Program—and one that is associated with a telecommunications program. Three of the six bridge advance contracts we reviewed were awarded to support FEMA's Individual Assistance Program, which provides mass care services such as food and water, as well as financial and direct assistance, among other services, to survivors whose property has been damaged or destroyed and whose losses are not covered by insurance. In 2017, this assistance was supported through the Individual Assistance-Technical Assistance Contract (IA-TAC), known as IA-TAC III. The IA-TAC III predecessor contracts had an original period of performance from a base year starting in May 2009 with four 1-year options that ended in May 2014. However, FEMA program and contracting officials were unable to implement changes to the requirements—recommended by FEMA senior leadership in 2010—prior to expiration.
According to FEMA officials, staffing shortfalls, operational tempo, and unrealistic contract requirements led to acquisition planning delays. These challenges, in turn, led to a series of extensions from May 2014 to November 2016 and a new non-competitive bridge contract (base with options) from November 2016 to May 2018. At that point new, competitive follow-on indefinite delivery, indefinite quantity contracts—the Individual Assistance Support Contract (IASC) and Logistics Housing Operations Unit Installation, Maintenance, and Deactivation (LOGHOUSE)—were awarded. See figure 5.

Two of our six selected advance contracts that were bridge contracts were awarded to support FEMA's Public Assistance Program, which provides supplemental federal assistance to state, tribal, territorial, and local governments for debris removal, life-saving emergency protective measures, and the repair, replacement, or restoration of damaged facilities. The predecessor Public Assistance-Technical Assistance Contract (PA-TAC) used in 2017, known as PA-TAC III, was awarded with an original period of performance from a base year in February 2012 with four 1-year options that ended in February 2017. FEMA officials noted that changes to the PA-TAC III contract requirements and acquisition strategy were identified in 2015. Yet due to the time needed to incorporate these changes, FEMA was unable to complete required acquisition planning activities, such as finalizing the acquisition plan, prior to the expiration of PA-TAC III. Following 11 months of extensions to complete these activities, FEMA competitively awarded new contracts in December 2017. These awards were protested to GAO; the protests were denied, and the matter is currently under review at the Court of Federal Claims. According to FEMA officials, these events required PA-TAC III to be extended until January 2019, as shown in figure 6.

The remaining bridge contract in our sample is associated with the Wireline Services Program, a telecommunication program that provides FEMA employees deployed to respond to a disaster with local and long-distance telephone, high-speed data, and cable television services. The 5-year wireline predecessor contract was awarded in 2003 and again in 2008, but FEMA was unable to award a competed contract when the 2008 contract expired in December 2013 due to the time it took to update program requirements. FEMA contracting officials extended the contract for 6 months before letting it expire altogether. Due to high staff turnover and inconsistent record keeping, at the time of our review FEMA officials were unable to determine the cause for the lapse of service that occurred after the contract's expiration in June 2014. Starting in January 2015, FEMA contracting officials used a series of bridge contracts over more than three years to address changing contract requirements and delays in completing acquisition planning documentation, as shown in figure 7. FEMA contracting officials anticipated awarding a competitive contract by the end of fiscal year 2018, but the award has been delayed and the existing contract extended through January 2019.

In one of the bridge contracts included in our review, FEMA improperly used FAR clause 52.217-8. Under that clause, an agency may extend a contract's period of performance for up to 6 months; the clause is generally used in the event of circumstances outside of the contracting officer's control that prevent a new contract award, such as a bid protest.
This clause may be used multiple times to extend the contract so long as the total extension of performance does not exceed 6 months. Our analysis found that FEMA used the clause for a total of 14 months to justify two 6-month extensions and one 2-month extension to the second bridge contract. The FEMA contracting official associated with the advance contract reported uncertainty over the proper use of this clause and what other authorities should have been used instead to extend the contract. FEMA's Office of Chief Counsel and contracting officials acknowledged this error. While not all bridge contracts that we identified during our review were non-competitive, FEMA officials acknowledged that the use of non-competitive bridge contracts is not an ideal practice as they cannot ensure the government is paying what it should for products and services.

In October 2015, we identified delays in the completion of acquisition planning documentation as one of the leading causes of awarding bridge contracts. In an effort to decrease the need for non-competitive bridge contracts and provide ample time for acquisition planning, FEMA began implementing a 5-Year Master Acquisition Planning Schedule (MAPS) in 2016. MAPS is a tracking tool that monitors the status of and provides acquisition planning timeframes for certain FEMA acquisitions over $5 million, as well as for all advance contracts and any acquisition deemed by the agency to be mission critical, regardless of dollar value. As we previously noted, acquisition planning includes both the pre-solicitation and solicitation phases. Based on our review of MAPS documentation, the tool generates a timeline of discretionary acquisition milestones across these two phases, based on certain considerations like the type of acquisition and whether it will be competed. Using this timeline, MAPS sends email alerts to program and contracting staff when certain acquisition milestones should occur. Specific to the solicitation phase, FEMA's Office of the Chief Procurement Officer has developed annual lead time guidance for how long contracting officers should be given to award new contracts following the completion of the acquisition package, which is then conveyed through MAPS. For example, for acquisitions of $150,000 and under, FEMA's 2018 lead time guidance states contracting officers should be given 60 days to award the contract following completion of the acquisition package. FEMA officials we spoke with acknowledged that these discretionary timeframes are frequently shortened when program office officials are delayed in completing acquisition packages. While FEMA has lead time guidance establishing timeframes for completing the solicitation phase, FEMA currently has no guidance establishing timeframes for the pre-solicitation phase, when program offices complete the acquisition packages. Figure 8 provides an example timeline of the major milestones tracked in MAPS. In its analysis of 12 fiscal year 2017 contracts tracked in MAPS that were awarded late, FEMA found that half were late because contracting officials were not given enough lead time to award a new contract following the program office's completion of the acquisition package. Not adhering to suggested timeframes can place a burden on contracting officers and increase the likelihood of not awarding the contract on schedule, requiring FEMA to non-competitively extend the existing contract.
According to FEMA's lead time guidance, based on the contract values for the bridge contracts in our review, contracting officers should have been given between 240 and 300 days to award a new contract once the acquisition package was completed. However, as we mentioned earlier, due to delays from changing program requirements and acquisition strategies, we found that the acquisition plans for the follow-on contracts related to these bridge contracts were not completed until after the predecessor contract had already expired, as shown in figure 9 below. Timely completion of the acquisition package was a key challenge identified in the contracts we reviewed. However, according to officials from the Office of the Chief Procurement Officer, they do not have the authority to establish guidance for FEMA program officials on completing pre-solicitation phase activities.

In August 2011, we identified challenges with acquisition planning across DHS. Specifically, we found that DHS and other agencies did not measure or incorporate into guidance the amount of time it takes to develop and obtain approvals of the acquisition planning documents required during the pre-solicitation phase. We recommended that DHS procurement offices collect information about the timeframes needed for the acquisition planning process to establish timeframes for when program officials should begin acquisition planning. DHS did not concur with this recommendation, stating that its acquisition manual already encourages early planning, and has not implemented the recommendation. At the time, we maintained that program officials needed more guidance to have a better understanding of how much time to allow for completing acquisition planning steps, and that the component procurement offices are best positioned to provide guidance on how long these planning processes may take. Given the current challenges we identified with FEMA's ability to complete acquisition planning activities in a timely manner and the resulting delays in awarding new contracts for critical advance contract goods and services, additional information and guidance on acquisition planning timeframes remains important.

Additionally, while MAPS has been in place since 2016 and FEMA officials have instituted training to communicate the system's intent, program and contracting officials we spoke with varied in their familiarity with it. For example, officials responsible for MAPS stated that by March 2016, 90 percent of FEMA's contracting staff had attended an hour-long training session, and additional training sessions were held for all program office staff at various points in 2016 and 2017. However, most of the program office and contracting officials responsible for the bridge contracts in our review reported limited familiarity with MAPS. While FEMA has taken some positive steps to institute training and has guidance on timeframes for part of the acquisition planning process, program and contracting staff we spoke with were still uncertain how best to utilize MAPS to identify the time needed to effectively complete acquisition planning activities. According to federal internal control standards, agency management should internally communicate the necessary quality information to achieve their objectives.
Given FEMA's emphasis on planning before a disaster and using advance contracts to help reduce the need to award non-competitive contracts during a disaster, establishing clear guidance on the factors that can affect acquisition planning activities, and requiring officials to follow the timeframes needed to complete them to meet the goal of awarding competitive contracts, is essential. Until FEMA provides detailed guidance about timeframes and considerations that affect the entire acquisition planning process—both the pre-solicitation and solicitation phases—to all officials responsible for acquisition planning, and clearly communicates the intent of MAPS, it cannot ensure that MAPS will be effective at reducing the number of non-competitively awarded bridge contracts, as is FEMA's intent.

Current Record-Keeping Practices Limit Visibility into Advance Contract Management

While FEMA has procedures regarding the documentation required for its contract files, current practices limited visibility into the advance contracts in our review. Specifically, we found that acquisition plans and some other contract documents could not be located in certain cases. Acquisition plans provide the program and contract history as well as other information on which acquisition decisions, such as the type of contract required, are based. FEMA contracting officials were unable to locate acquisition plans for 4 of our 10 selected FEMA advance contracts despite FAR and DHS acquisition guidance requiring plans for these particular contracts to be completed and stored in the contract file. Three of these acquisition plans are associated with the IA-TAC bridge contract which, as previously noted, was associated with one of FEMA's largest programs used in 2017. FEMA contracting officials were also unable to locate the acquisition plans completed for the prior iteration of IA-TAC because they were not in the hard copy contract file or contract writing system, meaning that no acquisition plan guiding the IA-TACs since before the 2009 award could be found. In 2011, the DHS Office of the Inspector General conducted a review of FEMA's IA-TAC and identified, among other things, incomplete contract files as a problem. Not being able to locate acquisition plans can result in the loss of contract knowledge and lessons learned from prior awards. Additionally, we found instances of contract documentation for advance contracts related to our case studies that contracting officials could not locate. For instance, FEMA was unable to confirm whether or not an option year included in the last competed Wireline contract was exercised, due to a lack of documentation. To obtain this answer, FEMA officials had to reach out to the vendor for its records. Moreover, the modification exercising the first option year for one of the IA-TAC III predecessor contracts was missing, as were the determination and findings documents exercising the first option year for all three of the predecessor IA-TAC III contracts that were associated with the advance contracts in our review. After we made FEMA officials aware of the missing documentation, they subsequently added clarifying memos to the contract files. FEMA standard operating procedures state that the acquisition documents in the official contract file will be sufficient to constitute a complete history of the entire transaction for the purpose of providing a complete background, and as a basis for informed decisions at each step in the acquisition process.
Additionally, these procedures require headquarters staff to place modifications to contracts and orders and associated supporting documentation in the contract file within 5 business days of awarding a contract or issuing an order. FEMA officials stated they are required to follow these procedures until DHS has fully transitioned to an electronic filing system. According to DHS officials, that system is currently in the testing phase and a timeframe for implementation has not yet been finalized. Furthermore, according to these officials, DHS has not yet decided which, if any, existing contracts will be required to be retroactively entered into the system. Until this decision has been made and implementation occurs, FEMA's official file of record for its advance contracts consists of a hardcopy file, which contracting officers at FEMA headquarters are required to add completed contract documentation to, per the standard operating procedures. A FEMA official told us that some documentation, including some of the missing documentation we identified, has been lost due to staff turnover and an office move in 2016. FEMA officials anticipate some of the challenges associated with managing the hard copy advance contract files will be alleviated after implementation of the Electronic Contract File System. However, DHS officials have not decided whether components will be required to retroactively enter contract information for any contract awarded prior to the implementation date. This would require FEMA and other DHS components to continue to maintain hardcopy files for some contracts—including large strategic sourcing vehicles and advance contracts—for the foreseeable future. For example, FEMA's $2.7 billion LOGHOUSE and $14 million IASC advance contracts were awarded in 2018 and have a period of performance lasting until 2023. Until FEMA adheres to existing contract file management requirements, whether the contract files are transferred into the electronic system or remain in hard copy format, it is at continued risk of having incomplete contract files and a loss of institutional knowledge regarding these advance contracts.

Information on Advance Contracts in FEMA's Disaster Contract Quarterly Reports to Congressional Committees Is Incomplete

Since December 2007, FEMA has submitted quarterly reports to congressional committees that list all disaster contracting actions in the preceding three months. These quarterly reports also include details on contracts awarded by non-competitive means, as required by PKEMRA. However, our analysis shows that some reports from fiscal years 2017 and 2018 have been incomplete. In September 2015, we found that FEMA's quarterly reports to congressional committees in fiscal years 2013 and 2014 did not capture all of FEMA's non-competitive orders. At that time, FEMA attributed this to an error in data compilation prior to mid-2013 and explained that it had updated its process for collecting these data and strengthened the review process, resulting in accurate reports starting in the fourth quarter of fiscal year 2013. Despite this change in the data collection process, our current analysis found that 29 contract actions associated with the 10 selected advance contracts in our review were not reported across FEMA's fourth quarter fiscal year 2017 and first quarter fiscal year 2018 reports.
For example, FEMA’s fourth quarter fiscal year 2017 report did not include 13 contract actions equaling about $83 million, or 15 percent, of the $558 million in total obligations associated with the 10 selected advance contracts in our review. Similarly, FEMA’s first quarter fiscal year 2018 report did not include 16 contract actions equaling about $122 million, or 23 percent, of the $532 million in total obligations associated with the 10 selected advance contracts in our review. Figure 10 provides a breakdown of the total contract action obligations by extent of competition. To compile the quarterly reports, FEMA officials told us that their methodology is to pull contract action data that is documented in their contract writing system and FPDS-NG roughly one week after the end of each fiscal quarter. Once the data are pulled from these two sources, officials said they compare the data to ensure all reported actions are captured. However, according to officials, the data may not include all contract actions. Specifically, during disaster response efforts like those in 2017, FEMA policy allows contracting officers to execute what it refers to as “notice to proceed”, which is a notice to a construction contractor to begin work under certain circumstances. FEMA officials responsible for the quarterly reports stated that if notice to proceed documentation is used, information on some contract actions that were issued during the fiscal quarter, but not entered into the systems until after the quarter ended, may be missed during the data compilation process. FEMA policy requires that contracting officers who execute the notice to proceed documentation complete the contract award documentation in the contract writing system within three days of when the contracting officer receivers the contractor’s acceptance of the notice. However, a FEMA policy official acknowledged that during disaster response, this does not always occur. Further, FEMA officials responsible for compiling the reports stated that it is not part of their methodology to review data from prior fiscal quarters to see whether any contract actions have been entered that were not previously reported. By not adhering to FEMA policy that establishes timeframes for entering data in a disaster response scenario, FEMA risks reporting incomplete information. Moreover, without taking steps to ensure its reporting methodology provides complete information on all competed and not competed disaster contract actions, FEMA cannot be certain it is providing the congressional committees with visibility into all of its overall disaster contract awards or the extent of non- competitive contract obligations over time. No Challenges Identified with the Planning and Management of Selected USACE Advance Contracts The four selected USACE advance contracts in our review—one supporting USACE’s temporary power mission and three supporting its debris removal mission—were awarded in 2014 with a period of performance lasting until 2019. Since these contracts have not reached the end of their period of performance, we were unable to assess the effectiveness of USACE planning activities. According to contracting officials, USACE is performing acquisition planning activities for both the temporary power and debris removal advance contracts and anticipates awarding the new contracts prior to the current contracts’ expiration. Additionally, USACE was able to provide the acquisition plans for each of the four advance contracts in our review. 
Unlike FEMA, which retains hard copy files of its contract documentation, USACE uses three official systems of record to store contract file documentation electronically. Officials acknowledged that while moving between the three official systems to find documents can be time-consuming, contract documents can typically be located.

FEMA and USACE Identified Lessons Learned from the Use of Advance Contracts in 2017, but Reported Challenges with State and Local Coordination Remain

Both FEMA and USACE have processes for identifying and assessing lessons learned following a disaster. Contracting officials from these agencies identified several lessons learned from the 2017 major hurricanes and the California wildfires that directly affected their use of advance contracts. These include the need for (1) additional advance contracts for certain goods and services; (2) flexibility to increase contract ceilings; (3) use of USACE's debris removal advance contracts to respond to the California wildfires; and (4) federal coordination and information sharing with state and local governments on advance contracts. While officials identified some lessons learned, they also identified challenges related to FEMA's outreach to state and local governments on advance contracting efforts.

FEMA and USACE Have Identified Lessons Learned and Actions to Address Them

FEMA and USACE have processes for identifying and assessing lessons learned through after-action reviews and reports following major disasters. According to FEMA and USACE officials, they routinely perform these reviews and then compile after-action reports to identify lessons learned and proposed actions to address them. Due to the concurrent nature of hurricanes Harvey, Irma, and Maria, FEMA headquarters completed one combined after-action review for all three hurricanes in July 2018. The resulting report identified 18 strategic-level key findings across five focus areas, along with recommendations for improvement. These recommendations included some that were specific to advance contracts, such as the need for additional advance contracts to support future disaster response efforts and improved state and local coordination to support state and local contracting and logistics operations. In addition, USACE officials performed after-action reviews following disasters and have a process in place to discuss challenges and recommendations for improvement on their use of advance contracts for temporary power, temporary roofing, and debris removal. While the scope of FEMA's and USACE's after-action reports is broader than advance contracts alone, we identified, based on our review of reports and interviews with FEMA and USACE officials, several lessons learned related to advance contracts following the 2017 hurricanes and California wildfires, as shown in table 1.

Challenges in Coordinating with and Providing Information to State and Local Governments on the Use of Advance Contracts Continued

We also found that while FEMA has updated its guidance to reflect some requirements for state and local coordination over the use of advance contracts, inconsistencies in FEMA's outreach and in the information on the use of advance contracts remain a challenge. PKEMRA required that FEMA encourage state and local governments to establish their own advance contracts with vendors for goods and services in advance of natural disasters.
In September 2015, we found that FEMA's outreach to state and local governments to encourage the establishment of advance contracts can result in more efficient contracting after a disaster. PKEMRA also required that FEMA establish a process to ensure that federal advance contracts are coordinated with state and local governments, as appropriate. In our September 2015 report, we also found that these efforts can ensure that states are aware of and can access certain federal advance contracts, such as General Services Administration schedule contracts. However, in the same report, we found that inconsistencies in whether and how the regions perform state and local outreach limited FEMA's ability to support advance contracting efforts. We recommended that FEMA provide new or updated guidance to ensure that all contracting officers are aware of requirements concerning the need to conduct outreach to state and local governments to support their use of advance contracts. DHS concurred with this recommendation, and in 2017 FEMA updated its Disaster Contracting Desk Guide to state that contracting officers should inform their state and local counterparts of the availability and use of federal advance contracts established by FEMA. Our review of the guide found that it does remind contracting officers to coordinate with states and localities over the use of federal advance contracts, but it does not provide any details on how often or what types of advance contract information should be shared with states and localities, nor does it provide any instructions to contracting officers on PKEMRA's requirement to encourage states and localities to establish their own advance contracts for the types of goods and services needed during a disaster. Our current review also found inconsistencies in FEMA's efforts to encourage states and localities to establish their own advance contracts with vendors and to coordinate with them on the use of federal advance contracts. For example, some regional FEMA officials explained that they regularly perform outreach, which can assist states and localities with establishing advance contracts for goods and services commonly needed during a disaster, such as security, transportation, and office supplies. Regional officials we spoke with said more frequent coordination allows them to avoid overlap across state and federal contracting efforts and to know what resources the states have in place and how long states are capable of providing these resources following a disaster. However, other regional officials reported having less frequent coordination with state and local governments. For example, a FEMA official stated that one of the regions has less frequent meetings with state and local governments because the region is geographically dispersed and has fewer disasters. According to another regional official, coordination between some regional offices and state and local officials over advance contracting was minimal prior to Hurricane Harvey and in some cases only occurred when FEMA and state and local officials were co-located during a disaster. Officials from some state and local governments and USACE reported examples where increased coordination between FEMA, states, and localities could have improved the use of advance contracts in 2017. For example, in September 2018 we found that some localities were relying on the same contractors to perform debris removal activities following Hurricanes Harvey in Texas and Irma in Florida.
As a result, we reported that some contractors that were removing debris in Texas did not honor existing contracts in Florida, leading to delays in debris removal. Additional communication and coordination between FEMA and contracting officials in these states and localities about which contractors they had established advance contracts with could have helped to prevent this overlap and the subsequent delay in removing debris. During our current review, USACE and California officials also reported miscommunications about state and local expectations for USACE's debris removal contracts following the wildfires. Specifically, USACE and state and local officials reported differing expectations about the work to be performed under USACE's debris removal contracts, such as what structures would be removed from private property and acceptable soil contamination levels. According to USACE officials, they relied on FEMA, as the lead for coordinating federal disaster response, to manage communication with states and localities and to identify and manage expectations about the scope of work to be performed using their advance debris removal contracts. While state and local officials we met with in California reported working closely with some FEMA officials not responsible for regional contracting during the response to the wildfires, FEMA regional contracting officials said that they had no direct coordination with California officials. We also identified inconsistencies in the information available to FEMA's contracting officials on existing advance contracts, which can be used to facilitate coordination with states and localities on the establishment and use of advance contracts. Our review of FEMA's advance contract list found that it does not include all of the advance contracts that FEMA has in place, and contracting officers we spoke with cited other resources they also use to identify advance contracts, such as biannual training documentation provided to contracting staff. For example, while FEMA officials told us the advance contract list is updated on a monthly basis, our analysis found that 58 advance contracts identified on the June 2018 advance contract list were not included in the May 2018 biannual training documentation, including contracts for telecommunications services, generators, and manufactured housing units. Further, 26 of the contracts included in the May training documentation were not included on the June advance contract list, including contracts for foreign language interpretation services, hygiene items, and short shelf-life meals. Some contracting officers we spoke with said they referred to the advance contract list as the primary resource for identifying advance contracts, while others referenced the biannual training as their primary resource. FEMA has recognized some shortcomings in how it coordinated and communicated with state and local governments over the use of advance contracts following the 2017 disasters and has identified some actions to address these issues moving forward. In the 2017 Hurricane Season FEMA After-Action Report, FEMA identified the need to expand its capabilities to support state, local, tribal, and territorial governments in improving their capabilities for advance contracting, among other issues. The report recommends that FEMA continue efforts to develop a toolkit that will provide state and local governments with recommendations for advance contracts, emergency acquisition guidance, and solicitation templates.
According to FEMA contracting officials, the development of the toolkit has been prioritized by FEMA's Administrator to help better prepare states and localities and decrease their reliance on FEMA for assistance following a disaster. However, as of August 2018, the specific contents of the toolkit were still being decided. For example, officials familiar with the development of the toolkit originally said they intended for it to include FEMA's advance contract list, to provide states with recommendations on the types of advance contracts that may be useful. But in subsequent discussions, these officials told us they did not plan to provide states and localities with a full list of advance contracts, to avoid being overly prescriptive and because not all of the contracts on the list are relevant for the types of disasters some states experience. Officials further stated that since it is the responsibility of the federal coordinator in each region to communicate available federal advance contracts to states and localities, providing a full list of advance contracts is unnecessary. Federal internal control standards state that agency management should use quality information to achieve its objectives and should communicate that information internally and externally to achieve those objectives. However, FEMA's guidance does not clearly communicate its objectives and requirements for contracting officers to encourage states and localities to enter into their own advance contracts, nor is there a consolidated resource listing available advance contracts that states and localities can use to inform their advance contracting efforts. According to FEMA officials, information on advance contracts is fluid, as new contracts are established and old contracts expire. Officials also told us that the advance contract list is updated monthly, yet as mentioned earlier, contracts identified in the May training documentation were not reflected in the list that was updated as of June. Ensuring that advance contract information is complete and updated regularly is important because differences across FEMA's resources listing advance contracts could result in FEMA's contracting officers not being aware of the availability of certain contracts during a disaster and in states not receiving recommendations on what advance contracts may be helpful for them to establish. Without clear guidance on FEMA's expectations for coordination with states and localities on advance contracting efforts, and a centralized resource listing up-to-date information on FEMA's advance contracts, FEMA contracting officers and their state and local counterparts lack reasonable assurance that they will have the tools needed to effectively communicate about advance contracts and use them to respond to future disasters. Moreover, given FEMA's recent emphasis on the importance of states and localities having the capability to provide their own life-saving goods and services in the immediate aftermath of a disaster, clearly communicating consistent and up-to-date information on the availability and limitations of federal advance contracts, through the toolkit or other means, is critical to informing state and local disaster response efforts.

Conclusions

Contracting during a disaster can pose a unique set of challenges, as officials face significant pressure to provide life-sustaining goods and services to survivors as quickly as possible.
Advance contracts are a tool that FEMA and others within the federal government can leverage to rapidly and cost-effectively mobilize resources, while also helping to preclude the need to procure critical goods and services non-competitively after a disaster. Given the circumstances surrounding the 2017 disasters and the importance of preparedness for future disasters, it is critical to ensure that the federal government is positioned to maximize its advance contracts to the extent practical and cost-effective to provide immediate disaster response. Although FEMA has identified advance contracts for use during a disaster, without an updated strategy—and guidance that is incorporated into training—on how to maximize their use during a disaster, as well as clear guidance on acquisition planning timeframes, FEMA is at risk of these contracts not being effectively planned and used. Furthermore, FEMA officials have not always maintained complete information on the advance contracts available for them to quickly respond to disasters, or completely reported competitively and non-competitively awarded advance contract information to better help congressional committees evaluate spending over time. Finally, without continued efforts to improve outreach to states and localities and centralize information on available advance contracts, FEMA's contracting officers and their state and local counterparts may not have the information needed to efficiently respond to a disaster.

Recommendations for Executive Action

We are making nine recommendations to FEMA.

FEMA's Administrator should update the strategy identified in its 2007 Advance Contracting of Goods and Services Report to Congress to clearly define the objectives of advance contracts, how they contribute to FEMA's disaster response operations, and whether and how they should be prioritized in relation to new post-disaster contract awards. (Recommendation 1)

FEMA's Administrator should ensure the Head of the Contracting Activity updates the Disaster Contracting Desk Guide to include guidance for whether and under what circumstances contracting officers should consider using existing advance contracts prior to making new post-disaster contract awards, and include this guidance in existing semi-annual training given to contracting officers. (Recommendation 2)

FEMA's Administrator should update and implement existing guidance for program office and contracting officer personnel to identify acquisition planning timeframes and considerations across the entire acquisition planning process, and clearly communicate the purpose and use of MAPS. (Recommendation 3)

FEMA's Administrator should ensure the Head of the Contracting Activity adheres to current hard copy contract file management requirements to ensure advance contract files are complete and up to date, whether they will be transferred into the new Electronic Contract Filing System or remain in hard copy format. (Recommendation 4)

FEMA's Administrator should ensure the Head of the Contracting Activity reminds contracting officers of the three-day timeframe for entering completed award documentation into the contract writing system when executing notice to proceed documentation. (Recommendation 5)

FEMA's Administrator should ensure the Head of the Contracting Activity revises its reporting methodology to ensure that all disaster contracts are included in its quarterly reports to congressional committees on disaster contract actions.
(Recommendation 6)

FEMA's Administrator should ensure the Head of the Contracting Activity revises the Disaster Contracting Desk Guide to provide specific guidance for contracting officers to perform outreach to state and local governments on the use and establishment of advance contracts. (Recommendation 7)

FEMA's Administrator should ensure the Head of the Contracting Activity identifies a single centralized resource listing its advance contracts and ensures that resource is updated regularly to include all available advance contracts. (Recommendation 8)

FEMA's Administrator should ensure the Head of the Contracting Activity communicates information on available advance contracts through the centralized resource to states and localities to inform their advance contracting efforts. (Recommendation 9)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD, DHS, and FEMA for review and comment. DOD did not provide any comments on the draft report. In comments reprinted in appendix IV, DHS and FEMA concurred with our nine recommendations. DHS and FEMA also provided technical comments, which we incorporated as appropriate. In its written comments, FEMA agreed to take actions to address our recommendations, such as updating guidance on advance contract use and management, adding an addendum to its quarterly report that captures the contract actions that were previously unreported, and better communicating information on advance contracts to states and localities. In its concurrence with two of our recommendations, FEMA requested that we consider these recommendations resolved and closed as implemented based on actions it had previously taken. For example, in its response to our third recommendation, FEMA agreed to update and implement existing guidance to identify acquisition timeframes and the purpose and use of its 5-Year MAPS program. In its response, FEMA reiterated that it has conducted training sessions for its contracting and program staff on the 5-Year MAPS program and provides notice to program managers when acquisition planning is set to begin, which the agency believes satisfies this recommendation. We acknowledge FEMA's training in this report; however, we noted that not all program and contracting staff we spoke with were familiar with 5-Year MAPS, and there is no formal guidance on timeframes for the entire acquisition planning process. We continue to believe this recommendation remains open and encourage FEMA to formalize guidance on the timeframes and considerations for planning various types of acquisitions across the entire acquisition planning process, and to document the purpose and use of the 5-Year MAPS program to ensure a uniform understanding of the program. Further, in its concurrence with our eighth recommendation, FEMA stated that it believes its current advance contract list satisfies our recommendation for internally communicating available advance contracts. We acknowledge in this report that the advance contract list is updated monthly; however, we found inconsistencies between the advance contract list and other documentation identifying advance contracts, which could result in FEMA's contracting officers not having full visibility into available advance contracts. We continue to believe the recommendation remains open and encourage FEMA to identify a centralized resource with all available advance contracts and ensure that it is regularly updated for contracting staff.
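The list discrepancies discussed above are, at bottom, a reconciliation problem between two resources that should identify the same contracts. The sketch below illustrates the kind of cross-check that would surface such discrepancies; the contract numbers and descriptions are hypothetical, and FEMA's actual data formats are not described in this report.

```python
# Hypothetical illustration: reconciling two resources that both purport to
# identify FEMA's available advance contracts. The contract entries below are
# invented for illustration; they are not actual FEMA contract numbers.

advance_contract_list = {  # e.g., the monthly advance contract list
    "HSFE70-17-D-0001": "telecommunications services",
    "HSFE70-17-D-0002": "generators",
    "HSFE70-18-D-0003": "manufactured housing units",
}

training_documentation = {  # e.g., the biannual training documentation
    "HSFE70-17-D-0002": "generators",
    "HSFE70-16-D-0009": "foreign language interpretation services",
    "HSFE70-16-D-0010": "hygiene items",
}

# Contracts that appear in one resource but are missing from the other.
missing_from_training = advance_contract_list.keys() - training_documentation.keys()
missing_from_list = training_documentation.keys() - advance_contract_list.keys()

for contract in sorted(missing_from_training):
    print(f"On advance contract list but not in training docs: {contract} "
          f"({advance_contract_list[contract]})")
for contract in sorted(missing_from_list):
    print(f"In training docs but not on advance contract list: {contract} "
          f"({training_documentation[contract]})")
```

Run against FEMA's June 2018 list and May 2018 training documentation, a check of this kind would have flagged the 58 and 26 mismatched contracts noted earlier.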
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the U.S. Army Corps of Engineers Director of Contracting, the Secretary of Homeland Security, the Administrator of the Federal Emergency Management Agency, and the Federal Emergency Management Agency's Chief Procurement Officer. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

This report reviews the federal government's contracting for preparedness, response, and recovery efforts related to the three 2017 hurricanes and California wildfires. This report specifically addresses the use of advance contracts, assessing the extent to which (1) the Federal Emergency Management Agency (FEMA) and the U.S. Army Corps of Engineers (USACE) used advance contracts, (2) the planning, management, and reporting of selected FEMA and USACE advance contracts met certain contracting requirements, and (3) FEMA and USACE identified any lessons learned and challenges with their use of these contracts. We also have an ongoing review on post-disaster contracting that is expected to be completed in early 2019. To identify the extent to which FEMA and USACE used advance contracts, we reviewed data on contract obligations for the 2017 disasters from the Federal Procurement Data System-Next Generation (FPDS-NG) through May 31, 2018. We identified hurricane obligations using the national interest code, as well as the contract description. Data on obligations for the California wildfires are limited to those contracts that FEMA and USACE identified as being used to respond to those events because no national interest code was established in FPDS-NG. To determine which obligations were made through the use of advance contracts, we reviewed documentation provided by FEMA and USACE identifying the advance contracts they have in place and that were used in support of the 2017 disasters. We analyzed the FPDS-NG data to identify FEMA and USACE advance contract obligations compared to overall contract obligations by disaster, competition procedures used, and the types of goods and services procured. We assessed the reliability of FPDS-NG data by reviewing existing information about the FPDS-NG system and the data it collects—specifically, the data dictionary and data validation rules—and performing electronic testing. We determined the FPDS-NG data were sufficiently reliable for the purposes of this report. To assess the extent to which FEMA used its advance contracts, we reviewed FEMA contracting policies and guidance, such as FEMA's 2017 Disaster Contracting Desk Guide and FEMA's Advance Contracting of Goods and Services Report to Congress, to identify available guidance on the use and intent of advance contracts. Based on our review of documentation, we identified examples of goods—tarps and meals—that FEMA had advance contracts in place for but experienced challenges using in response to the 2017 disasters. We reviewed FPDS-NG data to determine whether these goods were procured through post-disaster contracts rather than advance contracts, and selected advance and post-disaster contracts for further review.
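To illustrate the FPDS-NG screening described above, the following minimal sketch filters contract actions to a disaster using a national interest code and tallies advance contract obligations against overall obligations. The field names, code value, and records are hypothetical stand-ins, not the actual FPDS-NG schema.

```python
# Illustrative sketch of the FPDS-NG screening approach described in this
# appendix: filter contract actions to a disaster using the national interest
# code (for the California wildfires, an agency-supplied contract list would
# be used instead), then tally advance contract obligations against overall
# obligations. All field names and records are invented.

records = [
    {"piid": "C-001", "national_interest_code": "H17X", "obligated": 5_000_000, "advance": True},
    {"piid": "C-002", "national_interest_code": "H17X", "obligated": 2_500_000, "advance": False},
    {"piid": "C-003", "national_interest_code": "NONE", "obligated": 1_000_000, "advance": False},
]

HURRICANE_CODES = {"H17X"}  # hypothetical national interest code

hurricane_actions = [r for r in records if r["national_interest_code"] in HURRICANE_CODES]
total = sum(r["obligated"] for r in hurricane_actions)
advance = sum(r["obligated"] for r in hurricane_actions if r["advance"])

print(f"Hurricane obligations: ${total:,}; via advance contracts: ${advance:,} "
      f"({advance / total:.0%})")
```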
To identify limitations that affected the use of tarp and meal advance contracts, we gathered and reviewed advance and post-disaster contract documentation and interviewed contracting officials involved in the award and use of the contracts in 2017. To assess the extent to which the planning, management, and reporting of advance contracts used in response to the three hurricanes and California wildfires in 2017 met selected applicable contracting requirements, we reviewed relevant documentation, including the Post-Katrina Emergency Management Reform Act (PKEMRA), the Federal Acquisition Regulation (FAR), and Department of Homeland Security (DHS), FEMA, and USACE contracting policies. We identified a non-generalizable sample of advance contracts based on advance contract obligation data from FPDS-NG as of March 31, 2018. We analyzed the data to identify 10 competed and four non-competed contracts. To obtain a range of competed contracts, we identified contracts used for goods and services with obligations above $50 million. All of the non-competed contracts were for FEMA services; to obtain a range of non-competed contracts, we identified contracts with obligations above $10 million. Our selected advance contracts included 10 from FEMA and four from USACE. Findings based on information collected from the 14 contracts cannot be generalized to all advance contracts. Additional details on our selected contracts can be found in table 2. To review our selected FEMA and USACE advance contracts, we developed a data collection instrument to gather selected contract information, such as period of performance, contract type, estimated contract value, and the presence of key contract documents, among other information. To assess FEMA's and USACE's planning of selected advance contracts, we reviewed information from our data collection instrument on advance contract award date and period of performance, and determined that six of FEMA's contracts met GAO's definition of a bridge contract. To identify any planning challenges that contributed to these extensions, we reviewed FEMA acquisition planning policies and timeframes and relevant contract file documentation, such as written acquisition strategies and justification and approval documents, to determine whether acquisition planning activities for the selected advance contracts were completed according to guidance. We interviewed FEMA officials associated with these contracts on acquisition planning efforts and factors that affected their ability to award new contracts. We also reviewed documentation and interviewed officials on FEMA's acquisition planning system—the 5-Year Master Acquisition Planning Schedule (MAPS). To assess FEMA's and USACE's management of selected advance contracts, we reviewed information gathered from our data collection instrument on the presence in the contract file of selected acquisition documents, such as acquisition strategies and contract modifications, that typically provide the history of a contract. We reviewed relevant procurement regulations, the DHS Acquisition Manual, and other FEMA and USACE policies to identify acquisition documentation requirements and record-keeping processes. For contracts where documentation was not found in the contract file or system of record, we requested the missing documentation from FEMA and USACE officials to determine whether it had been completed.
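The document-presence portion of the data collection instrument can be thought of as a simple completeness check against a list of key documents. The sketch below is a hypothetical illustration of that logic; the document names and file contents are invented, not the instrument actually used to score the FEMA and USACE contract files.

```python
# Hypothetical sketch of a document-presence check: for each selected
# contract, flag key acquisition documents that could not be found in the
# contract file. Contract names, required documents, and file contents are
# illustrative only.

REQUIRED_DOCUMENTS = [
    "acquisition plan",
    "justification and approval",
    "determination and findings",
    "contract modifications",
]

contract_files = {
    "Contract A": {"justification and approval", "contract modifications"},
    "Contract B": {"acquisition plan", "justification and approval",
                   "determination and findings", "contract modifications"},
}

for contract, documents_found in contract_files.items():
    missing = [d for d in REQUIRED_DOCUMENTS if d not in documents_found]
    status = "complete" if not missing else f"missing: {', '.join(missing)}"
    print(f"{contract}: {status}")
```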
We also interviewed FEMA and USACE headquarters officials to supplement our understanding of FEMA's and USACE's record-keeping policies, practices, and challenges. To assess the reporting of selected advance contracts, we compared advance contract action data identified in FPDS-NG to data reported in FEMA's Disaster Contracts Quarterly Report Fourth Quarter, Fiscal Year 2017 and Disaster Contracts Quarterly Report First Quarter, Fiscal Year 2018 to congressional committees on disaster contracting to identify any unreported actions. We interviewed FEMA officials to discuss the methodology and data sources for the congressional committee reports and any limitations to the accuracy of the data reported. To assess what challenges and lessons learned FEMA and USACE identified with the use of advance contracts in 2017, we reviewed PKEMRA advance contract requirements; FEMA and USACE documentation on the use of advance contracts; after-action reports from 2017 and prior years, including the Hurricane Sandy FEMA After-Action Report and the 2017 Hurricane Season FEMA After-Action Report; and federal internal control standards for information and communications. As part of our review, we identified FEMA's and USACE's processes for documenting lessons learned following a disaster, lessons learned specific to advance contracts, and any recommendations or actions planned by the agencies to address them. We interviewed FEMA and USACE headquarters officials on reported lessons learned, any other challenges related to the use of advance contracts, and ongoing or completed actions to address them. To describe challenges related to coordination with state and local officials on the use of advance contracts, we interviewed FEMA and USACE regional staff. To obtain perspectives and examples from state and local government officials involved in disaster response efforts, we interviewed officials in California on advance contracting efforts. The information gathered from these officials is not generalizable to all officials. We also analyzed information on available advance contracts from FEMA's June 2018 advance contract list and FEMA's May 2018 training documentation identifying advance contracts to identify any differences in the information available to FEMA regional contracting officers and their state and local contracting counterparts. We conducted this performance audit from March 2018 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Federal Emergency Management Agency (FEMA) Regional Offices

Appendix III: Federal Emergency Management Agency (FEMA) and U.S. Army Corps of Engineers (USACE)-Identified Advance Contracts

Appendix IV: Comments from the Department of Homeland Security

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Katherine Trimble (Assistant Director), Meghan Perez (Analyst in Charge), Erin Butkowski, and Suzanne Sterling were principal contributors.
In addition, the following people made contributions to this report: Sonja Bensen, Emily Bond, Lorraine Ettaro, Suellen Foth, Julia Kennon, Elisha Matvay, Carol Petersen, Sylvia Schatz, Alyssa Weir, and Robin Wilson.
Why GAO Did This Study

Following Hurricane Katrina, Congress required FEMA to establish advance contracts for goods and services to enable the government to quickly and effectively mobilize resources in the aftermath of a disaster, like those that affected the United States in 2017. GAO was asked to review the federal government's response to the three 2017 hurricanes and California wildfires. This report assesses, among other things, (1) FEMA's and USACE's use of advance contracts, (2) FEMA's planning and reporting of selected advance contracts, and (3) challenges, if any, with FEMA's use of these contracts. GAO analyzed data from the Federal Procurement Data System-Next Generation through May 31, 2018; selected a non-generalizable sample of 14 FEMA and USACE advance contracts that were competed and obligated over $50 million, or non-competed and obligated over $10 million, in response to the 2017 disasters; and interviewed FEMA and USACE officials.

What GAO Found

In response to Hurricanes Harvey, Irma, and Maria, as well as the 2017 California wildfires, the Federal Emergency Management Agency (FEMA) and U.S. Army Corps of Engineers (USACE) relied heavily on advance contracts. As of May 31, 2018, FEMA and USACE had obligated about $4.5 billion for various goods and services through these contracts, as shown in the figure below. GAO found limitations in FEMA's use of some advance contracts that provided critical goods and services to survivors, including an outdated strategy and unclear guidance on how contracting officers should use advance contracts during a disaster, and challenges performing acquisition planning. FEMA also did not always provide complete information in its reports to congressional committees. Specifically, GAO found 29 advance contract actions that were not included in recent reports due to shortcomings in FEMA's reporting methodology, limiting visibility into its disaster contract spending. FEMA identified challenges with advance contracts in 2017, including federal coordination with states and localities on their use. FEMA is required to coordinate with states and localities and encourage them to establish their own advance contracts with vendors. However, GAO found inconsistencies in that coordination and in the information FEMA uses to coordinate with states and localities on advance contracts. Without consistent information and coordination with FEMA, states and localities may not have the tools needed to establish their own advance contracts for critical goods and services and quickly respond to future disasters.

What GAO Recommends

GAO is making nine recommendations to FEMA, including that it update its strategy and guidance to clarify the use of advance contracts, improve the timeliness of its acquisition planning activities, revise its methodology for reporting disaster contracting actions to congressional committees, and provide more consistent guidance and information to contracting officers to coordinate with and encourage states and localities to establish advance contracts. FEMA concurred with GAO's recommendations.
Background

Forest Service Mission and Structure

The Forest Service's mission includes sustaining the nation's forests and grasslands; managing the productivity of those lands for the benefit of citizens; conserving open space; enhancing outdoor recreation opportunities; and conducting research and development in the biological, physical, and social sciences. The agency carries out its responsibilities in three main program areas: (1) managing public lands, known collectively as the National Forest System, through nine regional offices, 154 national forests, 20 national grasslands, and over 600 ranger districts; (2) conducting research through its network of seven research stations, multiple associated research laboratories, and 81 experimental forests and ranges; and (3) working with state and local governments, forest industries, and private landowners and forest users in the management, protection, and development of forest land in nonfederal ownership, largely through its nine regional offices. According to the Forest Service, it employs a workforce of over 30,000 employees across the country. However, this number grows by thousands in the summer months, when the agency hires seasonal employees to conduct fieldwork, respond to wildland fires, and meet the visiting public's needs. The Office of the Chief of the Forest Service is located in Washington, D.C., with 27 offices reporting directly to the Office of the Chief, as illustrated in figure 1. The nine national forest regions, each led by a regional forester, oversee the national forests and grasslands located in their respective regions. Each national forest or grassland is headed by a supervisor, the seven research stations are each led by a station director, and a state and private forestry area is headed by an area director. The Forest Service collectively refers to its forest regions, research stations, and area as RSAs. The RSAs are organized differently according to their operations, and comparable operations within the RSAs, such as collections from reimbursable agreements, may be processed differently in the various regions and stations, resulting in highly decentralized operations. In addition, the offices of the Chief Financial Officer (CFO); Deputy Chief of Business Operations (includes the budget office); and eight other offices located in the Washington, D.C., headquarters also report directly to the Office of the Chief of the Forest Service.

Forest Service Budget and Control Activities

The Forest Service receives appropriations for its various programs and for specific purposes to meet its mission goals. Prior to fiscal year 2017, the Forest Service's budgetary resources consisted primarily of no-year funds. Its budget office in Washington, D.C., initiates apportionment requests and monitors the receipt of Department of the Treasury (Treasury) warrants. Upon receipt of the warrant, the apportionment is recorded in the financial system and then the budget office develops an allocation summary detailing the allocation of its budget authority by fund, programs within the funds, and distribution of funds at the regional, station, and area levels. The Forest Service may also transfer funds from other appropriations to the appropriations account that funds its fire suppression activities when available funds appropriated for fire suppression and the Federal Land Assistance, Management, and Enhancement (FLAME) fund will be exhausted within 30 days.
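The transfer authority described above turns on a straightforward projection: whether available fire suppression and FLAME funds will be exhausted within 30 days. The following minimal sketch illustrates that test with invented balances and a flat projected spending rate; the Forest Service's actual forecasting is more sophisticated.

```python
# Minimal sketch of the 30-day exhaustion test that triggers the Forest
# Service's fire transfer authority: funds may be transferred from other
# appropriations only when available fire suppression and FLAME funds are
# projected to run out within 30 days. All figures are invented.

suppression_balance = 120_000_000   # remaining fire suppression funds ($)
flame_balance = 60_000_000          # remaining FLAME fund balance ($)
projected_daily_spend = 7_000_000   # forecast daily suppression cost ($)

days_remaining = (suppression_balance + flame_balance) / projected_daily_spend

if days_remaining <= 30:
    print(f"Funds projected to be exhausted in {days_remaining:.0f} days: "
          "transfer authority may be exercised.")
else:
    print(f"Funds projected to last {days_remaining:.0f} days: "
          "transfer authority not yet triggered.")
```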
The Forest Service’s administrative policies, practices, and procedures are issued in its Directive System, which provides a unified system for issuing, storing, and retrieving internal direction that governs Forest Service programs and activities. The Directive System consists of the Forest Service’s manuals and handbooks. The manuals contain management objectives, policies, and responsibilities and provide general direction to Forest Service line officers and staff directors for planning and executing their assigned programs and activities. The handbooks provide detailed direction to employees and are the principal source of specialized guidance and instruction for carrying out directions issued in the manuals. Line officers at the national and RSA levels have authority to issue directives in the manuals and handbooks under their respective jurisdictions. The Forest Service’s policy states that the Directive System is the only place where Forest Service policy and procedures are issued. In addition to the Directive System, Forest Service staff have also developed standard operating procedures (SOP) and desk guides to supplement guidance provided in directives. However, the SOPs and desk guides are not part of the Forest Service Directive System and therefore are not official policy and procedures. Forest Service Did Not Properly Design Control Activities for Its Allotments Process, Administrative Control of Funds, and Fund Transfers While the Forest Service had documented processes for allotting its budgetary resources, it did not have an adequate process and related control activities for reasonably assuring that (1) amounts designated in appropriations acts for specific purposes are used as designated and (2) unobligated no-year appropriation balances from prior years were reviewed for their continuing need. In addition, the Forest Service did not have a properly designed and documented system for administrative control of funds. Finally, the Forest Service had not properly designed control activities for fund transfers for fire suppression activities under its Wildland Fire Management program. Forest Service Does Not Have an Adequate Process and Related Control Activities for Reasonably Assuring That Appropriated Amounts Are Used for the Purposes Designated While the Forest Service had documented processes for allotting its budgetary resources, it did not have an adequate process and related control activities to reasonably assure that amounts designated in appropriations acts for specific purposes are used as designated—as required by the purpose statute, which states that “appropriations shall be applied only to the objects for which the appropriations were made except as otherwise provided by law.” We reviewed Forest Service documents about its budget authority processes, which included control objectives, related control activities, and processes over the allotment of its budgetary resources. We found that these documents, including manuals and handbooks, did not include an adequate process and related control activities for assuring that appropriated amounts are used for the purposes designated. For example, such a process would include the Forest Service allotting appropriated funds for specific programs or objects as provided in the applicable appropriation act, by either using specific budget line items already defined in the Forest Service’s financial system or creating new budget line items, as needed. 
Standards for Internal Control in the Federal Government states that management should define objectives clearly to enable the identification of risks and design appropriate control activities to achieve objectives and respond to the risks identified. As a result of the Forest Service not having an adequate process and related control activities for assuring that appropriated amounts are used for the purposes designated, the Forest Service did not properly allocate certain funds for specific purposes detailed in the appropriations acts for fiscal years 2015 and 2016. For example, in fiscal year 2015, the Forest Service did not set aside in its financial system the $65 million specified in the fiscal year 2015 appropriations act for acquiring aircraft for the next-generation airtanker fleet. According to Forest Service documents, as of January 6, 2016, $35 million of the designated funds was used for other purposes. In February 2017, we issued a legal opinion related to the Forest Service's use of the $65 million, which concluded that the Forest Service had failed to comply with the purpose statute. According to USDA's Office of General Counsel, "this lack of any separate apportionment or account for the next-generation airtanker fleet was due to the fact that it was a new item, not included in the agency's budget request, and added late in the appropriations process." Similarly, in fiscal year 2016, the Forest Service did not create new budget line items to reserve in its financial system the $75 million for the Forest Inventory and Analysis Program specified in the fiscal year 2016 appropriations act. Rather than creating a new budget line item for the program specified in the appropriations act, the Forest Service combined the funds with an existing budget line item, making it difficult to track related budget amounts and actual expenditures. The lack of an adequate process and related control activities to reasonably assure that appropriated amounts are used for the purposes designated also increases the risk that the Forest Service may violate the Antideficiency Act.

Forest Service Lacked a Process and Related Control Activities over the Review of Unobligated No-Year Funds from Prior Years

The Forest Service did not have a process and related control activities to reasonably assure that unobligated no-year funds from prior years were reviewed for continuing need. We reviewed the Forest Service's budget authority process document and related manuals and handbooks, which documented control objectives and procedures over its budgetary resources and the guidance for administrative control of funds. We found that these documents did not include a process for reviewing the Forest Service's unobligated no-year funds from prior years and related control activities to reasonably assure that such funds were reviewed for continuing need. Such reviews, if performed, may identify unneeded funds that could be reallocated to other programs needing additional budgetary resources, if consistent with the purposes designated in appropriations acts. The USDA Budget Manual states as a department policy that "agencies of the Department have a responsibility to review their programs continually and recommend, when appropriate, deferrals or rescissions." The USDA Budget Manual further states the following: "Agency officials should remain alert to this responsibility since the establishment of reserves is an important phase of budgetary administration.
If it becomes evident during the fiscal year that any amount of funds available will not be needed to carry out foreseeable program requirements, it is in the interest of good management to recommend appropriate actions, thereby maintaining a realistic relationship between apportionments, allotments, and obligations." However, the Forest Service did not develop a directive addressing the control objectives, related risks, and control activities for implementing this USDA policy. Up until fiscal year 2017, Forest Service budgetary resources consisted primarily of no-year funds. At the beginning of each fiscal year, unobligated balances of no-year funds are carried forward and reapportioned to become part of budget authority available for obligation in the new fiscal year. Unobligated balances can increase during the fiscal year due to deobligation of prior years' unliquidated obligations that the Forest Service determines it no longer needs. These resources are immediately available to the Forest Service to the extent authorized by law without further legislation or action from the Office of Management and Budget (OMB) unless the apportionment states otherwise. According to Forest Service officials, unobligated funds reported in the Forest Service's September 30, 2016, Statement of Budgetary Resources included $351 million in discretionary unobligated no-year funds, appropriated as far back as fiscal year 1999. The Forest Service did not identify and define a process and control objectives related to its review of unobligated no-year funds from prior years for continuing need. As a result, the Forest Service did not have reasonable assurance that prior no-year unobligated balances were properly managed and considered in its annual budget requests. This increased the risk that the Forest Service may make budget requests in excess of its needs. Additionally, the Forest Service could miss opportunities to use its prior year unobligated no-year funds more timely and effectively, for example, by using these funds for other Forest Service program needs, if consistent with the purposes designated in appropriations acts. During our work, we brought this issue to management's attention, and in response, Forest Service officials stated that the Forest Service is planning to develop a quarterly process to review available balances and, as needed, redirect funds to agency priorities. However, as of July 2017, the Forest Service had not yet developed this review process. Further, for fiscal year 2017, Congress rescinded about $18 million of the Forest Service's prior year unobligated balances, required it to report unobligated balances within 30 days after the close of each quarter, and appropriated multi-year funds instead of no-year funds.

Forest Service Did Not Have a Properly Designed Comprehensive System for Administrative Control of Funds

The Forest Service issued guidance related to administrative control of funds in manuals and handbooks, which USDA did not review and approve prior to their issuance. Based on our review of these documents, we found that the processes and related control activities over the administrative control of funds were dispersed in numerous manuals and handbooks, which may hamper a clear understanding of the overall system. Further, the system lacked key elements that would allow it to serve as an adequate system of administrative control of funds.
For example, in its manuals and handbooks the Forest Service did not identify, by title or office, those officials with the authority and responsibility for obligating the service's appropriated funds, such as funds for contracts, travel, and training. As a result, the responsibility for obligating funds was not clearly described and properly assigned in Forest Service policy, as required by the USDA Budget Manual and OMB Circular No. A-11. OMB Circular No. A-11 states that the Antideficiency Act requires the agency head to prescribe, by regulation, a system of administrative control of funds, and OMB provided a checklist in appendix H to the circular that agencies can use for drafting their fund control regulations. This requirement is consistent with those in the USDA Budget Manual, which prescribes budgetary administration through a system of administrative controls for its component agencies, including the Forest Service. The USDA Budget Manual states that, to the extent necessary for effective administration, (1) the heads of USDA component agencies may delegate to subordinate officials responsibilities in connection with the administrative distribution of funds within apportionments and allotments and the monitoring, control, and reporting of the occurrence of obligations and expenditures under apportionments and allotments and (2) the chain of such responsibility shall be clearly defined. In addition, USDA requires its component agencies to promulgate and maintain an administrative control of funds regulation and to send such regulation to USDA's Office of Program and Budget Analysis for review and approval prior to issuance. Because the Forest Service has not developed and issued a comprehensive system for administrative control of funds that considers all aspects of the budget execution processes, it cannot reasonably assure that (1) programs will achieve their intended results; (2) the use of resources is consistent with the agency's mission; (3) programs and resources are protected from waste, fraud, and mismanagement; and (4) laws and regulations are followed. We also found that the Forest Service had not reviewed and updated most of its administrative control of funds guidance in the manuals and handbooks for over 5 years. The USDA Budget Manual requires each component to periodically review its funds control system for overall effectiveness and to assure that it is consistent with its agency programs and organizational structures. Further, Forest Service policy also requires routine review, every 5 years, of policies and procedures in its Directive System. According to Forest Service officials, when directives are up for review and update, a staff member from the Office of Regulatory and Management Services (ORMS) sends an e-mail reminder to notify responsible personnel that updates to applicable directives are needed. However, we found that the Forest Service does not have adequate controls in place to monitor the reviews and any updates of the manuals and handbooks in its Directive System to reasonably assure that these efforts result in timely updates. As a result, the Forest Service is at risk that guidance for its system for administrative control of funds may lose relevance as processes change over time and control activities may become inadequate.
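A monitoring control of the kind found lacking here could be as simple as periodically flagging directives whose guidance has gone more than 5 years without review, rather than relying on ad hoc e-mail reminders. The sketch below illustrates this with invented directive titles and review dates.

```python
# Hypothetical sketch of a monitoring control over directive currency:
# flag manuals and handbooks that have gone more than 5 years without
# review. Directive titles and dates are invented for illustration.

from datetime import date

REVIEW_INTERVAL_YEARS = 5
today = date(2017, 7, 1)

directives = {
    "Example fiscal manual chapter": date(2010, 3, 15),
    "Example appropriation use handbook": date(2014, 9, 1),
}

for title, last_reviewed in directives.items():
    age_years = (today - last_reviewed).days / 365.25
    if age_years > REVIEW_INTERVAL_YEARS:
        print(f"Overdue for review ({age_years:.1f} years): {title}")
    else:
        print(f"Current ({age_years:.1f} years): {title}")
```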
Forest Service Control Activities for Wildland Fire Suppression Related Fund Transfers Were Not Properly Designed

The Forest Service did not have properly designed control activities over its process for fund transfers related to wildland fire suppression activities. The Forest Service receives appropriations for necessary expenses for (1) fire suppression activities on National Forest System lands, (2) emergency fire suppression on or adjacent to such lands or other lands under fire protection agreement, (3) hazardous fuels management on or adjacent to such lands, and (4) state and volunteer fire assistance. Transfer of funds from other Forest Service programs to its fire suppression activities occurs when the Forest Service has exhausted all available funds appropriated for the purpose of fire suppression and the FLAME fund. A key aspect of this process is assessing the FLAME forecast, which the Forest Service uses to predict the costs of fighting wildland fires for a given season, and developing a strategy to identify specific programs and the amounts that may be transferred to pay for fire suppression activities when needed. The process for reviewing the FLAME forecast and strategizing the fund transfers was documented in the Basic Budget Desk Guide created by staff in the Forest Service's Strategic Planning and Budget Analysis Office. However, the desk guide did not contain evidence of review by responsible officials. As a result, the Forest Service lacked reasonable assurance that the desk guide was complete and appropriate for its use. The Basic Budget Desk Guide included a listing of actions to be performed by the analyst for reviewing the FLAME forecast report and developing a strategy for fund transfers from other programs. However, the desk guide did not specify the factors to be considered when developing the strategy. For example, it did not call for documentation addressing the rationale for the strategy or an assessment of the risk that the fund transfer could have on the programs from which the funds would be transferred. The desk guide also did not describe the review and approval of the strategy by a responsible official or officials prior to the fund transfer request being sent to the Chief of the Forest Service. According to Standards for Internal Control in the Federal Government, management should design control activities to achieve objectives and respond to risks, and such control activities should be designed at the appropriate levels in the organizational structure. Further, management may design a variety of transaction control activities for operational processes, which may include verifications, authorizations and approvals, and supervisory control activities. The lack of properly designed control activities for supervisory review of the desk guide and the strategy to identify the amounts for fund transfers does not provide the Forest Service reasonable assurance that the objectives of the fund transfers—including mitigating the risk of a shortfall of funding for other critical Forest Service program activities, such as payroll or other day-to-day operating costs—will be efficiently and effectively achieved.

Forest Service Did Not Have Properly Designed Processes and Related Control Activities for Reimbursable Receivables and Collections

The Forest Service enters into various reimbursable agreements with agencies within USDA, other federal agencies, state and local government agencies, and nongovernment entities to carry out its mission for public benefit.
The reimbursable agreements may be for the Forest Service to provide goods and services to a third party or to receive goods and services from a third party, or may be a partnership agreement with a third party for a common goal. According to Forest Service officials, the two distinct types of Forest Service reimbursable agreements are (1) fire incident cooperative agreements and (2) reimbursable and advanced collection agreements (RACA). The Forest Service did not have documented processes and related control activities for its fire incident cooperative agreements to reasonably assure the effectiveness and efficiency of its related fire incident operations. In addition, processes and related control activities applicable to RACAs were not adequately described in applicable manuals and handbooks in the Directive System to reasonably assure that control activities could be performed consistently and effectively. Further, certain RACA processes in the Directive System had not been timely reviewed by management and did not reflect current processes. Moreover, as previously discussed, SOPs and desk guides developed in field offices related to RACA processes were not in the Forest Service's Directive System. Finally, the Forest Service lacked control activities segregating incompatible duties performed by line officers and program managers in creating reimbursable agreements and determining the final disposition of related receivables. Forest Service Did Not Have Documented Processes and Related Control Activities for Fire Incident Cooperative Agreements The Forest Service did not have documented processes and related control activities for its fire incident cooperative agreements to reasonably assure the effectiveness and efficiency of its related fire incident operations and reliable reporting internally and externally. As part of the service's mission objective to suppress wildland fires, Forest Service officials stated that they enter into 5-year agreements referred to as master cooperative agreements with federal, state, and other entities. These agreements document the framework for commitment and support efficient and effective coordination and cooperation among the parties in suppressing fires, when they occur. The master cooperative agreements do not require specific funding commitments as amounts are not yet known. These agreements vary from region to region because of the differing laws and regulations pertaining to the participating states and other entities. These variations can also result in different billing and collection processes between regions. When a fire occurs, supplemental agreements, which are based on the framework established in the applicable master cooperative agreements, are signed by relevant parties for each fire incident. These agreements establish the Forest Service's share of fire suppression costs and the amounts attributable to entities that benefited from those fire suppression efforts. These supplemental agreements require commitment and obligation of funds. As indicated in figure 2, the Forest Service's obligations for fire suppression activities ranged from $412 million to $1.4 billion over the 10-year period from fiscal years 2007 through 2016. In response to our request for documentation of processes and related control activities over its fire incident cooperative agreements, Forest Service officials stated that processes and related control activities over reimbursable agreements were applicable to both fire incident cooperative agreements and RACAs.
However, based on our review of the Forest Service's processes and related control activities over its reimbursable agreements, we found that the unique features of fire incident cooperative agreements (as compared to features of RACAs) were not addressed in the processes and related controls for reimbursable agreements. For example, there were no processes and related control activities over the negotiation and review of (1) a fire incident master cooperative agreement, which is developed before a fire occurs, and (2) supplemental agreements, which are signed by all relevant parties after the start of a fire incident. These supplemental agreements detail, among other things, the terms for (1) fire department resource use, (2) financial arrangements, and (3) specific cost-sharing agreements. Another unique feature of fire incident cooperative agreements, which was not covered in process documents for its reimbursable agreements, was the preparation of the Cost Settlement Package. The preparation of this package does not start until after the fire has ended and the Forest Service has received and paid all bills. According to Forest Service officials, a fire incident is deemed to have ended when there are no more resources (firefighters and equipment) on the ground putting out the fire. However, this definition was not documented in the Forest Service's manuals and handbooks in the Directive System. Based on our review of documentation that the Forest Service provided for four fire incidents, we found that for these incidents the Cost Settlement Packages and the billings took from several months to several years to complete after the fire incident. According to Forest Service officials, delays in preparing the Cost Settlement Package in many cases were due to parties involved in suppressing the fires taking a long time to submit their invoices to the Forest Service for payment. Because the preparation of Cost Settlement Packages was not included in the process documents, the Forest Service did not have a defined time frame for when, in relation to the end of the fire, the Cost Settlement Package must be completed. For example, in one case we reviewed, the bill for a cost settlement was sent 9 months after the fire occurred, and in another case, settlement occurred approximately 2 years after the fire occurred. For both fire incidents, based on the reports we reviewed, the fires were contained within a week or two, but the Forest Service does not have a policy for documenting the date when the fire incident is deemed to have ended. Because of the complexity of negotiating and determining the reimbursable amounts from all the costs that the Forest Service pays for a fire incident, these amounts may take time to negotiate, and subsequent billing to and collection from parties may take much longer. Forest Service officials stated that some receivables that will not be collected before the financial system's aging process deems them uncollectible bad debts are tracked in a spreadsheet outside the financial system. We found that the Forest Service did not have a documented process and related control activities to reasonably assure that its Budget Office was informed of these older receivables being tracked in a spreadsheet and the related progress of collection activities that local program managers and line officers perform, which could affect the reliability of the reported reimbursable receivable amounts.
According to Standards for Internal Control in the Federal Government, management should internally communicate the necessary quality information to achieve the entity's objectives. Without proper communication, important information, such as amounts that the Forest Service will receive from fire incident cost settlement negotiations, may not be considered in the Forest Service's strategy for the effective and efficient management of fund transfers for fire suppression activities. Forest Service Manuals and Handbooks for RACAs Did Not Adequately Describe the Processes and Related Control Activities and Were Not Timely Reviewed Processes and related control activities applicable to RACAs were not adequately described in Forest Service manuals and handbooks in its Directive System. RACAs, which may be for research or other nonemergency purposes, are billed and collected based on previously agreed upon billing and collection terms. In accordance with the Forest Service's Directive System, policies related to business processes, such as RACAs, are documented in its manuals while procedures for performing specialized activities are documented in its handbooks. We found that the manuals and handbooks in the Directive System did not adequately describe the processes and related control activities over the RACA processes to enable efficient and effective performance of the work by appropriate and responsible personnel. The manuals and handbooks related to RACAs state that a manager is to review the documentation to ensure that the funding supports the objective of the agreement, the agreement is the correct instrument for funding the project, all relevant terms and conditions have been included in the agreement, the entity's financial strength and capability are acceptable, and all applicable regulations and OMB circulars have been addressed. However, there was no discussion in the manuals and handbooks about when the manager needs to perform the reviews and how these reviews are to be documented. Further, in response to our inquiry regarding the procedures performed to assess whether an entity's financial strength and capability are acceptable before a RACA is signed, Forest Service officials stated that there is currently no formal process for determining financial capability for RACAs. For reimbursable agreements, the Forest Service's process documented in its handbook consisted of completing a creditworthiness checklist. However, the handbook did not describe procedures for (1) completing the checklist and (2) documenting responsible personnel's review and approval of an entity's acceptable financial capability. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. Management's design of internal control establishes and communicates the who, what, when, where, and why of internal control execution to personnel. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. Further, the standards also explain that management should clearly document internal control in a manner that allows the documentation to be readily available and properly managed and maintained. In addition, the manuals and handbooks applicable to the RACAs had not been timely reviewed by management and had not been updated to reflect current processes.
For example, the document that serves as direction for Forest Service personnel on how to enter into RACAs referred to an outdated financial system that was replaced in fiscal year 2013. Further, the manuals and handbooks for the RACA processes had no indication that they had been reviewed within the past 5 years. Forest Service policy requires routine review, every 5 years, of policies and procedures in its Directive System. According to Forest Service officials, a staff member from ORMS sends an e-mail to officials responsible for updating these policies and procedures. However, appropriate control activities have not been designed to reasonably assure that updates are made, reviewed, approved, and issued as needed for continued relevance and effectiveness. Without adequate descriptions of processes and related control activities in its manuals and handbooks over RACAs, the Forest Service is at risk that processes and related control activities may not be properly, consistently, and timely performed. Further, because it lacks a process and related controls for monitoring and reviewing the updates of the guidance and various process documents in the Directive System, the Forest Service is at risk that its policies and procedures may not provide appropriate agency-wide direction in achieving control objectives, particularly when financial systems change and old processes may no longer be applicable. Forest Service Standard Operating Procedures and Desk Guides for RACA Processes Were Not in the Directive System and Lacked Sufficient Details SOPs and desk guides related to RACA processes were not in the Directive System and are not considered official Forest Service policy and procedures. Forest Service field staff responsible for various processes generally developed SOPs and desk guides to document day-to-day procedures for employees carrying out RACA processes, supplementing the manuals and handbooks. However, the SOPs and desk guides did not reference the applicable manuals and handbooks they supplemented. Further, the SOPs and desk guides did not provide descriptions of (1) review procedures for authorization, completeness, and validity of RACAs and related receivables; (2) detailed review procedures to be performed and by whom; (3) timing of review procedures; and (4) how to document the completion of the review procedures. Finally, SOPs and desk guides did not have evidence that responsible officials reviewed and approved them to authorize their use. These SOPs and desk guides are only available in the field office where they were developed, and if similar SOPs and desk guides were developed in other field offices, control activities and how they are performed could vary. We also noted that these SOPs and desk guides were not timely updated to reflect processes and systems currently in use. For example, there were many instances where the SOPs and desk guides referred to systems that the Forest Service no longer used. Standards for Internal Control in the Federal Government states that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity's objectives. Effective documentation assists in management's design of internal control by establishing and communicating the who, what, when, where, and why of internal control execution to personnel.
Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel and to achieve the entity's objectives. Management assigns responsibility and delegates authority to key roles throughout the entity. As a result of the issues discussed above, the Forest Service is at risk that control activities may not be properly and consistently performed and its related control objectives may not be achieved efficiently and effectively. In addition, the Forest Service is at risk that knowledge for performing the control activities may be limited to a few personnel or lost altogether in the event of employee turnover. Forest Service Lacked Adequate Segregation of Duties over Reimbursable Agreements The Forest Service lacked control activities over the segregation of incompatible duties performed by line officers and program managers for reimbursable agreements and any adjustments affecting the final disposition of related receivables. Field offices manage the majority of Forest Service projects, including authorizing the agreements and monitoring related collection. The Forest Service line officer for fire incident cooperative agreements and program managers for RACAs at the RSA, unit, or field levels initiate and develop the terms of the agreements and are also responsible for any subsequent negotiation of the agreements. In the process of negotiating and settling costs, the line officer or program manager has the authority to cancel or change related receivables that they deem uncollectible. For example, in a fire incident, the line officer at the region or field level is involved in both developing a Cost Share Agreement and, after the fire incident has ended, negotiating the Cost Settlement Package with parties involved in the agreement to determine the final settlement amount that the Forest Service will be reimbursed for expenses paid in suppressing the fire incident. Therefore, the line officer is responsible for initiating the Cost Share Agreement, modifying the Cost Settlement Package, and changing or canceling the related receivable, which represent conflicting duties. We also found that the Forest Service did not have any compensating controls, such as independent approval of any adjustments affecting the final disposition of receivables, to mitigate the risk of these incompatible duties. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. Segregation of duties contributes to the design, implementation, and operating effectiveness of control activities. To achieve segregation of key functions, management can divide responsibilities among different people to reduce the risk of error, misuse, or fraud. This may include separating the responsibilities for authorizing or approving transactions, processing and recording them, and reviewing the transactions so that no one individual controls all key aspects of a transaction or event, as illustrated in the sketch below. Forest Service officials stated they did not consider segregating the conflicting duties related to reimbursable agreements because these line officers and program managers were most familiar with the terms of the agreement and the activities performed. However, a lack of adequate segregation of conflicting duties or proper monitoring and review of conflicting duties for receivables from reimbursable agreements could result in receivables not being collected and an increased risk of fraud.
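The separation-of-responsibilities principle described above can be made concrete with a minimal sketch. This is not the Forest Service's system; all class, role, and function names are hypothetical. It simply shows the kind of rule-based check that would keep the official who initiated an agreement from also adjusting or canceling the related receivable.

```python
# Minimal sketch of a segregation-of-duties check (hypothetical names).
# Rule enforced: the official who initiated an agreement may not also
# change or cancel its related receivable; an independent official must.

class SegregationOfDutiesError(Exception):
    """Raised when one official would control incompatible duties."""

class ReimbursableAgreement:
    def __init__(self, agreement_id, initiated_by):
        self.agreement_id = agreement_id
        self.initiated_by = initiated_by  # e.g., a line officer or program manager
        self.adjustments = []             # (official, action, amount)

    def adjust_receivable(self, official, action, amount):
        # Incompatible-duty check before recording the adjustment.
        if official == self.initiated_by:
            raise SegregationOfDutiesError(
                f"{official} initiated {self.agreement_id} and cannot "
                f"also {action} its receivable")
        self.adjustments.append((official, action, amount))

agreement = ReimbursableAgreement("FS-2016-001", initiated_by="line_officer_A")
agreement.adjust_receivable("independent_reviewer_B", "reduce", 25_000)  # allowed
try:
    agreement.adjust_receivable("line_officer_A", "cancel", 100_000)
except SegregationOfDutiesError as err:
    print(err)  # blocked: one official would control all key aspects
```

Even where one official must remain involved for familiarity with the agreement, a compensating control like the independent-approval check above would address the risk the standards describe.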
Forest Service Did Not Have Properly Designed Processes and Related Control Activities for Reviewing Unliquidated Obligations The Forest Service's processes and related control activities over review of unliquidated obligations were not properly designed to reasonably assure optimum utilization of funds and were inconsistent with USDA and Forest Service policy. Further, Forest Service manuals and handbooks related to the review of unliquidated obligations did not clearly describe control activities and were not timely reviewed by management. The Forest Service reported unliquidated obligations of approximately $2.6 billion and $2.5 billion in its financial statements as of September 30, 2015, and 2016, respectively. In fiscal year 2016, the Forest Service deobligated about $319 million of its unliquidated obligations from prior years. Forest Service Processes and Control Activities for Review and Certification of Unliquidated Obligations Were Not Properly Designed The Forest Service's procedures related to the review of unliquidated obligations were not properly designed and were inconsistent with USDA and Forest Service policy. In accordance with USDA Departmental Regulation (Regulation 2230-001) and related Forest Service policy, the Forest Service identifies and reviews unliquidated obligations that have been inactive for at least 12 months to determine whether delivery or performance of goods or services is still expected to occur. Once a determination has been made that an unliquidated obligation can be deobligated, program or procurement personnel are to notify finance personnel, in writing, within 5 days of the determination to process the deobligation. Within 15 days of receipt of the written notification, the unliquidated obligations are to be adjusted in the financial management system. The Forest Service CFO is then to be notified in writing that the deobligation was processed. Within 1 month of the close of each quarter, the Forest Service CFO is to submit to USDA's Associate CFO for Financial Operations a certification stating that the Forest Service has performed reviews of its unliquidated obligations and taken appropriate actions, such as promptly deobligating an unliquidated obligation that is no longer needed. However, the Forest Service's quarterly certifications are inconsistent with USDA and Forest Service policy because the months included in each quarterly review do not line up with the months outlined in policy. For example, as shown in table 1 and in the sketch below, based on policy, the certification due on October 31 covers the months July through September. However, in practice, the certification that the Forest Service prepared for October 31 covers May through July. As a result, the review and certification for August and September would be delayed an entire quarter. According to Forest Service officials, it takes considerable time to produce accurate unliquidated obligations reports from USDA's financial system and then distribute them to field offices. Therefore, there is not sufficient time for the field offices to review and deobligate amounts not needed from the unliquidated obligations balances to meet USDA's certification timing and requirements. However, the Forest Service has not developed other processes and control activities that could help meet USDA and Forest Service policy and reasonably assure that unliquidated obligations are reviewed timely and appropriate actions are taken.
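The timing mismatch in table 1 amounts to a fixed two-month lag, which the short sketch below works through. The helper function is hypothetical (it is an illustration of GAO's table, not Forest Service code), but the month arithmetic matches the example above.

```python
# Sketch of the certification-timing mismatch shown in table 1
# (hypothetical helper; illustrative only).
import calendar

def covered_months(due_month, lag_months=0):
    """Names of the three months a quarterly certification covers.

    Per USDA and Forest Service policy, a certification due within one
    month of quarter close covers the quarter just ended (lag 0). The
    certifications prepared in practice lagged policy by two months.
    """
    start = due_month - 3 - lag_months
    return [calendar.month_name[((start + i - 1) % 12) + 1] for i in range(3)]

# Certification due October 31:
print(covered_months(10))                # policy:   ['July', 'August', 'September']
print(covered_months(10, lag_months=2))  # practice: ['May', 'June', 'July']
```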
As a result, there is an increased risk that the Forest Service may not achieve its control objectives of optimum utilization of funds and timely adjustments of obligated balances. Forest Service Processes and Control Activities for Reviewing Unliquidated Obligations in Manuals and Handbooks Were Not Adequately Described and Timely Reviewed The Forest Service's process and related control activities over its review of unliquidated obligations and resulting certifications were not adequately described in manuals and handbooks in its Directive System. Further, the manuals and handbooks were not timely reviewed and updated to reflect processes and systems currently in use. In accordance with the Forest Service's Directive System, policies are documented in its manuals while procedures for performing specialized activities are documented in its handbooks. However, we found that the Forest Service's processes and related control activities for reviewing unliquidated obligations were not adequately described and documented in such manuals and handbooks. Although parts of the applicable section of the handbook referred to procedures, there were no detailed descriptions of the processes, and only references to objectives of the procedures for reviewing unliquidated obligations were listed. For example, in identifying unliquidated obligations for review, the narrative description of the procedures in the handbook states that the responsible obligating official must review each selected unliquidated obligation to determine whether (1) delivery or performance of goods or services has occurred or is expected to occur and (2) accounting corrections to the obligation data in the accounting system are necessary. The handbook also refers to an unliquidated obligations report listing the unliquidated obligations that must be reviewed. The narrative does not describe the detailed procedures that obligating officials or responsible personnel need to perform, how to perform those procedures, or how those control activities are to be documented. The guidance in the handbook was supplemented by two desk guides. However, the desk guides are outside the Forest Service's Directive System and, as previously noted, the Directive System is the only place where the Forest Service's policy and procedures are issued. In addition, these desk guides did not reference the applicable guidance in the Directive System that they were supplementing. Further, the process and related control activities for adjusting unliquidated obligations within 15 days of receipt of written notification, as stated in USDA's policy, were not described in either the handbooks or the desk guides. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks as part of an effective internal control system. Management's design of internal control establishes and communicates the who, what, when, where, and why of internal control execution to personnel. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. Further, the standards also explain that management should clearly document internal control in a manner that allows the documentation to be readily available and properly managed and maintained.
In addition, manuals and handbooks for processes related to review and certification of unliquidated obligations showed no evidence of having been reviewed within the past 5 years for ongoing relevance and effectiveness. According to a Forest Service manual, all service-wide directives, except interim directives, shall be reviewed at least once every 5 years. The Forest Service does not have an effective process in place to monitor the reviews and any updates of the manuals and handbooks in its Directive System. As previously discussed, while ORMS sends an e-mail requesting that the applicable officials review and update the guidance in the manuals and handbooks, there is no follow-up process to help ensure that documents are reviewed and updated as needed. Because the Forest Service's process and related control activities over its review and certification of unliquidated obligations were not adequately described in its manuals and handbooks, the Forest Service is at risk that its control activities may not reasonably assure that it achieves its control objectives: (1) optimum utilization of funds and (2) efficient and effective deobligation of unliquidated obligations that are no longer needed so that funds are made available for other program needs. Conclusions Adequate processes and related control activities over the Forest Service's budgetary resources are critical in reasonably assuring that these resources are timely and effectively available for its mission operations, including fire suppression. However, we identified deficiencies in the Forest Service's processes and related controls over allotments, unobligated no-year funds from prior years, administrative control of funds, fund transfers, reimbursable agreements, and available funds from deobligation of unliquidated obligations. Deficiencies ranged from a lack of processes to control activities that were not properly designed, resulting in an increased risk that Forest Service funds may not be effectively and efficiently monitored and used. In addition, the Forest Service's manuals and handbooks, which provide the directives for the areas we reviewed, had not been reviewed by management in accordance with the Forest Service's 5-year review policy. Further, Forest Service staff prepared SOPs and desk guides that documented control activities, but they were not issued as official policy and had not been reviewed and approved by responsible officials. As a result, the Forest Service is at increased risk that the control activities may not be consistently performed across the agency and that the guidance in the SOPs and desk guides may not comply with agency policy in the Directive System. Recommendations for Executive Action To improve internal controls over the Forest Service's budget execution processes, we are making the following 11 recommendations: The Chief of the Forest Service should (1) revise its process and (2) design, document, and implement related control activities to reasonably assure that amounts designated in appropriations acts for specific purposes are properly used for the purposes specifically designated. (Recommendation 1) The Chief of the Forest Service should (1) develop a process and (2) design, document, and implement related control activities to reasonably assure that unobligated no-year funds from prior years are reviewed for continuing need.
(Recommendation 2) The Chief of the Forest Service should (1) design, document, and implement a comprehensive system for administrative control of funds and (2) submit it for review and approval by USDA before issuance, as required by the USDA Budget Manual. (Recommendation 3) The Chief of the Forest Service should design, document, and implement control activities over the preparation and approval of a fire suppression fund transfers strategy, to specify all appropriate factors to be considered in developing and documenting the strategy, and incorporate these control activities into the Directive System. (Recommendation 4) The Chief of the Forest Service should design, document, and implement processes and related control activities for its fire incident cooperative agreements to reasonably assure efficient and effective operations and timely and reliable reporting of reimbursable receivables related to fire incident cooperative agreements, and incorporate them in the Directive System. (Recommendation 5) The Chief of the Forest Service should update the RACA manuals and handbooks to adequately describe the processes and related control activities applicable to RACAs to reasonably assure that staff will know (1) how and when to perform processes and control activities and (2) how to document their performance. (Recommendation 6) The Chief of the Forest Service should design, document, and implement segregation of duties or mitigating control activities over reimbursable agreements and any adjustments affecting the final disposition of related receivables. (Recommendation 7) The Chief of the Forest Service should modify, document, and implement control activities consistent with USDA and Forest Service policy to reasonably assure that unliquidated obligations are reviewed timely and appropriate actions are taken. (Recommendation 8) The Chief of the Forest Service should adequately describe the processes and related control activities for unliquidated obligations review and certification processes in manuals and handbooks within the Directive System. (Recommendation 9) The Chief of the Forest Service should develop, document, and implement a process and related control activities to reasonably assure that manuals and handbooks for allotments, reimbursable agreements, and review of unliquidated obligations are reviewed and updated every 5 years, consistent with Forest Service policy. (Recommendation 10) The Chief of the Forest Service should develop, document, and implement a process and related control activities to reasonably assure that SOPs and desk guides (1) clearly refer to guidance in the Directive System for allotments, reimbursable agreements, and review of unliquidated obligations and (2) are reviewed and approved by responsible officials prior to use. (Recommendation 11) Agency Comments We provided a draft of this report to USDA for comment. In its comments, reproduced in appendix III, the Forest Service stated that it generally agreed with the report and that it has made significant progress to address the report’s findings. Specifically, the Forest Service stated that its financial policies concerning budget execution have been revised to address our concerns with allotments, unliquidated obligations, commitments, and administrative control of funds as prescribed by OMB Circular No. A-11. Further, the Forest Service stated that it has undertaken an in-depth review of its unliquidated obligations and modified the certification process to comply with the USDA requirement. 
We are sending copies of this report to the appropriate congressional committees and to the Secretary of Agriculture and the Chief of the Forest Service. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to determine the extent to which the Forest Service properly designed control activities over (1) allotments of budgetary resources, its system for administrative control of funds, and any fund transfers between Forest Service appropriations; (2) reimbursables and related collections; and (3) unliquidated obligations. We reviewed the Forest Service's process documents and control activities, policies and procedures from its manuals and handbooks in its Directive System, and other guidance in the form of standard operating procedures (SOP) and desk guides to obtain an understanding of internal controls at the Forest Service related to our three objectives. We reviewed the control activities that the Forest Service identified to determine whether the activities would achieve the control objectives that the service identified and whether the activities were consistent with Standards for Internal Control in the Federal Government. We also reviewed recent relevant GAO and U.S. Department of Agriculture (USDA) Office of Inspector General reports to obtain background information related to the Forest Service's budget execution processes. We evaluated the design of the Forest Service's control activities based on data for fiscal year 2016. To address our first objective, we reviewed Forest Service process documents related to allotments and budget authority to obtain an understanding of control activities over the allotments of budgetary resources, its system for administrative control of funds, and any related fund transfers between Forest Service appropriations. The process documents included a list of control objectives and related control activities that the Forest Service had used to assess its internal controls. We also reviewed the related guidance in appendix H to Office of Management and Budget Circular No. A-11, Preparation, Submission, and Execution of the Budget for Administrative Control of Funds, to identify requirements that agencies must meet to ascertain whether their controls over funds management are properly designed. We interviewed key officials from the Forest Service's Strategic Planning, Budget and Accountability Office to gain an understanding of their processes for allotments of budgetary resources, its system for administrative control of funds, and fund transfers between Forest Service appropriations for wildland fire suppression activities, including how each of their risk assessments was performed and their plans to mitigate the risks. We reviewed and analyzed the processes documented in the manuals and handbooks collectively referred to as directives to determine whether the processes and control activities were designed to achieve the Forest Service's stated objectives.
Specifically, we examined the Forest Service's control activities to determine whether they sufficiently communicated the procedures to be performed and the documentation to be prepared. We also reviewed the USDA Budget Manual to determine whether Forest Service guidance was consistent with USDA's requirements for all of its component agencies, specifically requirements related to the administrative control of funds. To address our second objective, we reviewed the Forest Service's policies, procedures, and other documentation and interviewed agency officials to develop an understanding of its processes related to reimbursable agreements and related collection activities. We first identified, through interviews with Forest Service officials, the different kinds of reimbursable agreements that the Forest Service enters into with other USDA components, other federal agencies, state and local government agencies, and nongovernment entities to carry out its mission for the benefit of the public. The two distinct types of reimbursable agreements are (1) fire incident cooperative agreements and (2) reimbursable and advanced collection agreements. We reviewed the Forest Service process documents and templates provided to us related to these two types of reimbursable agreements to obtain an understanding of control activities over reimbursable processes. We reviewed the list of control objectives and related control activities that the Forest Service identified to determine whether the control activities were designed to achieve the applicable control objectives. To address our third objective, we reviewed the Forest Service's policies, procedures, and other documentation related to unliquidated obligations and interviewed agency officials to develop an understanding of the Forest Service's review and certification processes for unliquidated obligation balances. We reviewed the Forest Service's control activities related to its process for reviewing unliquidated obligations to obtain an understanding of those control activities and to determine whether they were designed to achieve the applicable control objectives. Based on the results of our evaluation of the Forest Service's design of internal control activities over the budget execution processes, we did not evaluate the implementation of the control activities or whether they were operating as designed. While our audit objectives focused on certain control activities related to (1) allotments of budgetary resources, the Forest Service's system for administrative control of funds, and related fund transfers; (2) reimbursables and related collections for reimbursable agreements; and (3) unliquidated obligations, we did not evaluate all control activities and other components of internal control. If we had done so, additional deficiencies may or may not have been identified that could impair the effectiveness of the control activities evaluated as part of this audit. We conducted this performance audit from August 2016 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Standards for Internal Control in the Federal Government Standards for Internal Control in the Federal Government provides the overall framework for establishing and maintaining internal control. Internal control represents an agency's plans, methods, policies, and procedures used to fulfill its mission, strategic plan, goals, and objectives. Internal control is a process effected by an entity's oversight body, management, and other personnel to provide reasonable assurance that the objectives of the entity will be achieved. When properly designed, implemented, and operating effectively, it provides reasonable assurance that the following objectives are achieved: (1) effectiveness and efficiency of operations, (2) reliability of internal and external reporting, and (3) compliance with applicable laws and regulations. Internal control is not one event, but a series of actions that occur throughout an entity's operations. The five components of internal control are as follows: Control Environment - The foundation for an internal control system that provides the discipline and structure to help an entity achieve its objectives. Risk Assessment - Assesses the risks facing the entity as it seeks to achieve its objectives and provides the basis for developing appropriate risk responses. Control Activities - The actions management establishes through policies and procedures to achieve objectives and respond to risks in the internal control system, which includes the entity's information system. Information and Communication - The quality information management and personnel communicate and use to support the internal control system. Monitoring - Activities management establishes and operates to assess the quality of performance over time and promptly resolve the findings of audits and other reviews. An effective internal control system has each of the five components of internal control effectively designed, implemented, and operating, with the components operating together in an integrated manner. In this audit, we assessed the design of control activities at the Forest Service related to its (1) allotments of budgetary resources and any related fund transfers between Forest Service appropriations, (2) reimbursables and related collections, and (3) review of unliquidated obligations. Appendix III: Comments from the U.S. Department of Agriculture Appendix IV: GAO Contact and Staff Acknowledgments In addition to the contact named above, the following individuals made key contributions to this report: Roger Stoltz (Assistant Director), Meafelia P. Gusukuma (Auditor-in-Charge), Tulsi Bhojwani, Cory Mazer, Sabrina Rivera, and Randy Voorhees.
Why GAO Did This Study The Forest Service, an agency within USDA, performs a variety of tasks as steward of 193 million acres of public forests and grasslands. Its budget execution process for carrying out its mission includes (1) allotments, which are authorizations by an agency to incur obligations within a specified amount, and (2) unliquidated obligations, which represent budgetary resources that have been committed but not yet paid. Deobligation refers to an agency's cancellation or downward adjustments of previously incurred obligations, which may make funds available for reobligation. GAO was asked to review the Forest Service's internal controls over its budget execution processes. This report examines the extent to which the Forest Service properly designed control activities over (1) allotments of budgetary resources, its system for administrative control of funds, and any fund transfers between Forest Service appropriations; (2) reimbursables and related collections; and (3) review and certification of unliquidated obligations. GAO reviewed the Forest Service's policies, procedures, and other documentation and interviewed agency officials. What GAO Found In fiscal years 2015 and 2016, the Forest Service received discretionary no-year appropriations of $5.1 billion and $5.7 billion, respectively. It is critical for the Forest Service to manage its budgetary resources efficiently and effectively. While the Forest Service had processes over certain of its budget execution activities, GAO found the following internal control deficiencies: Budgetary resources. The purpose statute requires that amounts designated in appropriations acts for specific purposes be used as designated. The Forest Service did not have an adequate process and related control activities to reasonably assure that amounts were used as designated. In fiscal year 2017, GAO issued a legal opinion that the Forest Service had failed to comply with the purpose statute with regard to a $65 million line-item appropriation specifically provided for the purpose of acquiring aircraft for the next-generation airtanker fleet. Further, the Forest Service lacked a process and related control activities to reasonably assure that unobligated no-year appropriation balances from prior years were reviewed for their continuing need; did not have a properly designed system for administrative control of funds, which keeps obligations and expenditures from exceeding limits authorized by law; and had not properly designed control activities for fund transfers to its Wildland Fire Management program. These deficiencies increase the risk that the Forest Service may make budget requests in excess of its needs. Reimbursable agreements. To carry out its mission, the Forest Service enters into reimbursable agreements with agencies within the U.S. Department of Agriculture (USDA), other federal agencies, state and local government agencies, and nongovernment entities. The Forest Service (1) did not have adequately described processes and related control activities in manuals and handbooks for its reimbursable agreement processes and (2) lacked control activities related to segregating incompatible duties performed by line officers and program managers. For example, line officers may be responsible for initiating cost sharing agreements, modifying cost settlement packages, and changing or canceling the related receivable, which represent incompatible duties.
As a result, programs and resources may not be protected from waste, fraud, and mismanagement. Unliquidated obligations. The Forest Service's processes and control activities over the review and certification of unliquidated obligations were not properly designed to reasonably assure the best use of funds and that unliquidated obligations would be efficiently and effectively deobligated and made available for other program needs. Further, the current process, as designed, was inconsistent with USDA and Forest Service policy. In addition, the Forest Service's manuals and handbooks, which provide directives for the areas that GAO reviewed, had not been reviewed by management in accordance with the Forest Service's 5-year review policy. Further, standard operating procedures and desk guides prepared by staff to supplement the manuals and handbooks were not issued as directives and therefore were not considered official policy. This increases the risk that control activities may not be consistently performed across the agency. What GAO Recommends GAO is making 11 recommendations to improve processes and related internal control activities over the management of the Forest Service's budgetary resources, reimbursable receivables and collections, and its process for reviewing unliquidated obligations. The Forest Service generally agreed with the report and stated that it has made significant progress to address the report's findings.
Background Medicare Payment for Individual and Panel Tests before PAMA's Implementation in 2018 Medicare pays for laboratory tests that are performed individually or in a group. For individual tests, laboratories submit claims to Medicare for each test they perform that is on the CLFS; tests are identified using a billing code. Prior to the implementation of PAMA in 2018, the payment rates on the CLFS were based on rates charged for laboratory tests in 1984 through 1985, adjusted for inflation. Additionally, 57 geographic jurisdictions had their own fee schedules for laboratory tests. CMS used the 57 separate fee schedules to calculate a national limitation amount, which served as the maximum payment for individual laboratory tests. Thus, the payment rate for an individual test was the lesser of the amount claimed by the laboratory, the local fee for a geographic area, or the national limitation amount for a particular test. Medicare pays bundled payment rates for certain laboratory tests that are performed as a group, called panel tests. Panel tests can be divided into two categories: those without billing codes and those with billing codes. Panel tests without billing codes are composed of at least 2 of 23 distinct component tests. Additionally, there are 7 specific combinations of these 23 component tests that are commonly used and have their own billing code. Prior to 2018, Medicare paid for both types of panel tests (those without or with a billing code) using a bundled rate based on the number of tests performed, with modest payment increases for each additional test conducted. For example, in 2017, Medicare paid $7.15 for panel tests with two component tests and $9.12 for panel tests with three component tests, with a maximum bundled payment rate of $16.64 for all 23 component tests. Prior to 2018, the Medicare Administrative Contractors would count the number of tests performed before determining the appropriate bundled payment rate. For those panel tests with a billing code, the payment rate was the same whether laboratories used the associated billing code for the panel test or listed each of the component tests separately. Medicare Payment for Individual and Panel Tests after PAMA's Implementation in 2018 After PAMA's implementation in 2018, the 57 separate fee schedules for individual laboratory tests were replaced with a single national fee schedule. The payment rates for this single national fee schedule were based on private-payer rates for laboratory tests paid from January 1, 2016, through June 30, 2016. Specifically, the payment rate for an individual test was generally based on the median of private-payer rates for a given test, weighted by test volume. Payment for panel tests also changed in 2018. For panel tests without billing codes, Medicare Administrative Contractors no longer counted the number of component tests performed to determine the bundled payment rate; instead, Medicare paid the separate rate for each component test in the panel. For panel tests with a billing code, the payment rate depended on how the laboratory submitted the claim. If a laboratory used the billing code associated with the panel test, Medicare paid the bundled payment rate for that billing code. If a laboratory submitted a claim for the panel test, but listed each of the component tests separately instead of using the panel test's billing code, Medicare paid the individual payment rate for each component test. Table 1 below summarizes the changes to payment rates before and after 2018.
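The billing logic summarized in table 1 can be sketched in a few lines. In the sketch below, the bundled rates for 2, 3, and 23 component tests are the 2017 figures cited above; all other tiers are omitted because they are not listed in the text, and the billing codes, component rates, and function names are hypothetical placeholders.

```python
# Sketch of panel-test payment logic before and after 2018 (illustrative
# only; partial 2017 bundled schedule, hypothetical codes and rates).

BUNDLED_RATES_2017 = {2: 7.15, 3: 9.12, 23: 16.64}  # tiers for 4-22 omitted

def pre_2018_payment(component_tests):
    """Pre-2018: the contractor counts the tests performed and pays the
    bundled rate for that count, however the claim was submitted."""
    return BUNDLED_RATES_2017[len(component_tests)]

def post_2018_payment(claim_code, component_tests, panel_rates, component_rates):
    """Post-2018: a claim under a panel billing code gets that code's
    bundled rate; components listed separately are paid individually."""
    if claim_code in panel_rates:
        return panel_rates[claim_code]
    return sum(component_rates[code] for code in component_tests)

# Hypothetical two-test panel:
component_rates = {"TEST_A": 4.00, "TEST_B": 4.50}
panel_rates = {"PANEL_AB": 7.00}
tests = ["TEST_A", "TEST_B"]
print(pre_2018_payment(tests))                                             # 7.15 either way
print(post_2018_payment("PANEL_AB", tests, panel_rates, component_rates))  # 7.00
print(post_2018_payment(None, tests, panel_rates, component_rates))        # 8.50
```

As the last two calls show, after 2018 the way a laboratory submits the claim, and not just what was performed, can change the amount Medicare pays.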
Types of Clinical Laboratories Multiple types of laboratories receive payment under Medicare. The three laboratory types that received the most revenue from the CLFS in 2016 were independent laboratories, hospital-outreach laboratories, and physician-office laboratories. (See table 2.) Estimates of the size of the total U.S. laboratory market vary. For example, the Healthcare Fraud Prevention Partnership estimated that the laboratory industry received $87 billion in revenue in 2017, while another market report estimated the laboratory industry received $75 billion in revenue in 2016. Similar to Medicare, the three laboratory types that generally receive the most revenue overall are independent laboratories, hospital-outreach laboratories, and physician-office laboratories, when laboratory tests performed in hospital inpatient and outpatient settings are excluded. Estimates of revenue received by these laboratories also vary. For example, in recent years, estimates of the share of laboratory industry revenue generated by independent laboratories ranged from 37 percent to 54 percent. Additionally, estimates of revenue generated by hospital-outreach laboratories recently ranged from 21 to 35 percent, and physician-office laboratories ranged from 4 to 11 percent of total laboratory industry revenue. Private-Payer Rates for Laboratory Tests Generally Vary by Laboratory Type and Other Characteristics Private-payer rates for laboratory tests conducted by the three largest laboratory types generally vary by type and other characteristics, according to market reports and the laboratory industry officials we interviewed. Independent laboratories. These laboratories generally receive lower private-payer rates than other types of laboratories, according to industry officials we interviewed. Market reports we reviewed noted that about half of the independent laboratory market is dominated by two national laboratories and that these national laboratories provide more competitive pricing by performing a large volume of tests at one time. Medicare accounted for a smaller proportion of the revenue earned by these two national laboratories (12 percent), compared to other laboratories, according to another market report we reviewed. In contrast, a different market report noted that smaller independent laboratories tend to earn more of their revenue from Medicare (34 percent). Hospital-outreach laboratories. These hospital-affiliated laboratories typically receive relatively higher private-payer rates, according to industry officials we interviewed. Although hospital-outreach laboratories perform tests similar to other laboratories, they can obtain above-average payment rates by leveraging the market power of their affiliated hospital when negotiating rates with private payers, according to industry officials and market reports. Hospital-outreach laboratories generally receive about 25 to 30 percent of their revenue from the Medicare CLFS. Physician-office laboratories. Physician-office laboratories typically receive higher private-payer rates than independent laboratories, according to a recent analysis by a laboratory industry association. This industry association also noted that the cost structure to operate in a setting such as a physician-office laboratory is different from that in large independent laboratories, as the physician-office laboratory is unable to conduct a large number of tests at one time.
Officials from another industry association we interviewed said that payment rates for these laboratories are generally dependent on the size of the physician practice group. These same officials told us that larger physician groups (e.g., 10 or more physicians) typically negotiate higher rates from private payers than smaller physician groups. Most physician-office laboratories received less than $25,000 in revenue per year from Medicare, according to CMS. Additionally, in 2013, the Department of Health and Human Services Office of Inspector General found that Medicare's payment rates on the CLFS were higher than rates paid by some private health insurance plans. Specifically, it found that Medicare rates for laboratory tests were 18 percent to 30 percent higher than rates paid by certain insurers under health benefits plans for federal employees. CMS Analyzed Private-Payer Data to Develop New Payment Rates Definition of Applicable Laboratories Required to Report Private-Payer Data to CMS CMS defined applicable laboratories as those meeting four criteria: (1) they met the definition of laboratory under regulations implementing the Clinical Laboratory Improvement Amendments of 1988; (2) they billed Medicare Part B under their own Medicare billing number, also called the national provider identifier; (3) more than 50 percent of their total Medicare revenues came from the Clinical Laboratory Fee Schedule (CLFS) and/or the Physician Fee Schedule; and (4) they received at least $12,500 in Medicare revenue from the CLFS from January 1, 2016, through June 30, 2016. CMS analyzed private-payer data it collected from about 2,000 laboratories to develop new payment rates for individual laboratory tests on the CLFS. PAMA defined laboratories required to report private-payer data, called applicable laboratories, as laboratories that meet certain criteria. (See sidebar.) Applicable laboratories with their own specific billing number, the NPI, submitted these data to CMS. If one organization operated multiple applicable laboratories, each with its own NPI, then the organization could report data to CMS for multiple applicable laboratories. CMS collected data from applicable laboratories on payments they received from private payers during the first half of 2016. Specifically, CMS collected data on (1) the unique billing code associated with a laboratory test; (2) the private-payer rate for each laboratory test for which final payment was made during the data collection period (January 1, 2016, through June 30, 2016); and (3) the volume of tests performed for each unique billing code at that private-payer rate. For the data CMS collected between January 1, 2017, and May 30, 2017, CMS relied on the reporting entities to attest to the completeness and accuracy of the data they submitted. CMS relied on each laboratory to identify whether it was an applicable laboratory and took steps to assist laboratories in meeting reporting requirements. According to CMS officials, they relied on laboratories to self-identify as applicable laboratories because they were unable to accurately identify the number of laboratories required to report. To assist laboratories, CMS issued multiple guidance documents to the industry outlining the criteria for being an applicable laboratory and describing the type of data CMS intended to collect. CMS also conducted educational calls when the proposed and final rules were issued and prior to the data collection period.
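The sidebar's four criteria amount to a simple determination rule, sketched below. The field and function names are hypothetical; in practice CMS relied on each laboratory to apply the criteria to itself rather than running such a check centrally.

```python
# Sketch of the four "applicable laboratory" criteria from the sidebar
# (hypothetical field names; illustrative only).
from dataclasses import dataclass

@dataclass
class Laboratory:
    meets_clia_lab_definition: bool   # (1) CLIA definition of a laboratory
    bills_under_own_npi: bool         # (2) bills Medicare Part B under own NPI
    total_medicare_revenue: float
    clfs_pfs_medicare_revenue: float  # Medicare revenue from CLFS and/or PFS
    clfs_revenue_h1_2016: float       # CLFS revenue, Jan. 1 - June 30, 2016

def is_applicable_laboratory(lab: Laboratory) -> bool:
    majority_from_clfs_pfs = (
        lab.total_medicare_revenue > 0
        and lab.clfs_pfs_medicare_revenue / lab.total_medicare_revenue > 0.5)
    return (lab.meets_clia_lab_definition            # criterion 1
            and lab.bills_under_own_npi              # criterion 2
            and majority_from_clfs_pfs               # criterion 3 (> 50 percent)
            and lab.clfs_revenue_h1_2016 >= 12_500)  # criterion 4 (threshold)

lab = Laboratory(True, True, total_medicare_revenue=100_000,
                 clfs_pfs_medicare_revenue=80_000, clfs_revenue_h1_2016=20_000)
print(is_applicable_laboratory(lab))  # True: required to report private-payer data
```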
CMS officials told us they conducted additional outreach activities, including holding conference calls with national laboratory associations and attending professional conferences. Officials said they used these outreach activities, in addition to the issued guidance, to inform laboratories of the reporting requirements for applicable laboratories. In addition, CMS established a revenue threshold of $12,500 in an effort to reduce the reporting burden for entities that receive a relatively small amount of revenue under the CLFS. In its final rule, CMS noted that it expected that many of the laboratories that would be below this revenue threshold, and thus exempt from reporting data to CMS, would be physician-office laboratories. In its proposed rule, CMS had suggested using an alternative identification number to the NPI; however, in the final rule, CMS chose to use the NPI in its definition of applicable laboratory to allow hospital-outreach laboratories that bill using their own NPI to submit private-payer data to the agency. According to CMS, at the end of the 5-month submission period, the agency had received data from approximately 2,000 applicable laboratories, representing a volume of almost 248 million laboratory tests; these data accounted for about $31 billion in revenue from private payers. CMS reported that the data it collected included private-payer rates for 96 percent of the 1,347 eligible billing codes on the CLFS. CMS used these data to calculate a median private-payer rate for each test, weighted by volume, and phased in this change by limiting payment-rate reductions to 10 percent per year. Beginning in 2018, these new payment rates served as the single national payment rate for individual laboratory tests. These payment rates were also used for the individual component tests that make up panel tests and were used when laboratories billed Medicare for panel tests by listing the component tests separately. In general, the median payment rates, weighted by volume, that CMS calculated were lower than Medicare's previous payment rates for most laboratory tests. According to our analysis, these median payment rates were lower than the corresponding 2017 CLFS national limitation amounts (the maximum that CMS would pay for laboratory tests) for approximately 88 percent of tests. Figure 1 below describes the percentage difference between these median payment rates and Medicare's 2017 national limitation amounts for laboratory tests. The final payment rates that CMS calculated, which reflected the phased-in payment-rate reductions of up to 10 percent per year, will remain in effect until December 31, 2020; PAMA requires CMS to calculate new payment rates for the CLFS every 3 years. Reporting entities will next be required to submit data on private-payer rates to CMS in early 2020, for final payments made from January 1, 2019, through June 30, 2019. PAMA capped any reductions for the second 3-year cycle after implementation at a maximum of 15 percent per year.
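Because the volume-weighted median is the core of the new rate-setting method, a short sketch may help. The data are hypothetical, and the convention applied when cumulative volume lands exactly at half of the total may differ from CMS's implementation.

```python
# Sketch of a volume-weighted median for one billing code (hypothetical
# data; illustrative only).

def weighted_median(rate_volume_pairs):
    """Return the reported rate at which cumulative test volume first
    reaches half of the total volume for a billing code."""
    pairs = sorted(rate_volume_pairs)                   # sort by rate
    total_volume = sum(volume for _, volume in pairs)
    cumulative = 0
    for rate, volume in pairs:
        cumulative += volume
        if cumulative * 2 >= total_volume:
            return rate
    raise ValueError("no private-payer data reported for this code")

# Three laboratories report different rates for the same test:
reported = [(9.00, 1_000), (7.00, 5_000), (12.00, 2_000)]
print(weighted_median(reported))  # 7.0 -- the high-volume rate dominates
```

Weighting by volume means that high-volume, low-rate laboratories (such as the large national independent laboratories) pull the resulting Medicare rate downward, which is consistent with most new rates falling below the 2017 national limitation amounts.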
PAMA’s Provisions and CMS’s Actions May Have Mitigated Some Challenges Related to Incomplete and Inaccurate Private-Payer Data, but Future Challenges Remain Incomplete Data Likely Had a Limited Effect from 2018 through 2020 but Could Affect Future Rates CMS did not collect private-payer data from all laboratories required to report this information and did not estimate how much data was not reported by these laboratories, according to agency officials. CMS relied on laboratories to determine whether they met data reporting requirements and submit data accordingly. CMS emphasized the importance of receiving data from all laboratories required to report by stating that it is critical that CMS collect complete data on private-payer rates in order to set accurate Medicare rates. However, agency officials told us that CMS did not receive data from all laboratories required to report. They also told us that CMS did not have the information available to estimate how much data was missing because not all laboratories reported or the extent to which the data collected were representative of all of the data that laboratories were required to report. Prior to collecting private-payer data, CMS estimated that laboratories subject to reporting requirements would receive more than 90 percent of CLFS expenditures to physician-office laboratories and independent laboratories. Specifically, based on its analysis of 2013 Medicare expenditures, CMS estimated that reporting requirements would apply to the laboratories that received 92 percent of CLFS payments to physician- office laboratories and 99 percent of CLFS payments to independent laboratories. After laboratories reported private-payer data, we analyzed the share of CLFS expenditures received by the laboratories that reported. Our analysis found that CMS collected data from laboratories that received the majority of CLFS payments to physician-office, independent, and other non-hospital laboratories in 2016. However, the laboratories that reported private-payer data received less than 70 percent of CLFS expenditures to physician-office, independent, and other non-hospital laboratories. Specifically, using Medicare claims data, we calculated that CMS collected data from laboratories that received 68 percent of 2016 CLFS payments to physician-office, independent, and other non-hospital laboratories. Although it did not collect complete data, CMS concluded that it collected sufficient private-payer data to set Medicare payment rates and that collecting more data from additional laboratories that were required to report would not significantly affect Medicare expenditures. This conclusion was based, in part, on a sensitivity analyses that CMS conducted of the effects that collecting certain types and amounts of additional data would have on weighted median private-payer rates and the effects those rates could have on Medicare payment rates and, thus, expenditures. Results from these analyses showed that Medicare expenditures based on the CLFS would have changed by 2 percent or less after collecting more data from the various types of laboratories. For example, CMS estimated that doubling the amount of private-payer data from physician-office laboratories would increase expenditures by 2 percent and collecting ten times as much data from hospital outreach laboratories would increase expenditures by 1 percent. (See fig. 2.) 
PAMA’s 10-percent limit on annual payment-rate reductions likely reduced the effect that incomplete private-payer data could have on the CLFS because this limit applied to most Medicare payment rates for laboratory tests. As demonstrated in figure 1, while 59 percent of tests had median private-payer rates that were at least 30 percent less than their respective 2017 national limitation amounts, CMS published Medicare rates for these tests for 2018 through 2020 that were reduced by only 10 percent per year as a result of this limit. For example, a hypothetical laboratory test with a 2017 CLFS national limitation amount of $10.00 and a median private-payer rate of $7.00 would result in CLFS rates of $9.00 in 2018, $8.10 in 2019, and $7.29 in 2020. Changes to median private-payer rates due to collecting more complete data or eliminating inaccurate data would have no effect on Medicare payment rates from 2018 through 2020 for this hypothetical test if they resulted in new median rates of $7.29 or less. Our analysis of the potential effects that collecting data from additional laboratories could have had on Medicare payment rates and expenditures found that the effect of CMS not collecting complete data would likely have been greater absent PAMA’s limits on annual reductions to Medicare payment rates. As a result, CMS may face challenges setting accurate Medicare rates if it does not collect complete data from all laboratories required to report in the future when PAMA allows for greater annual payment-rate reductions. To conduct this analysis, we used the private-payer data CMS collected to analyze the range of effects that collecting additional data could have on Medicare expenditures, assuming 2016 utilization rates remain constant. The extent of these effects depends on the amount of additional data CMS would need to collect to obtain complete data and whether the payment rates in these additional data would have been greater or less than the medians of the rates reported. For example, we estimated that if CMS needed to collect 20 percent more data for its collection to be complete, doing so could increase Medicare CLFS expenditures from 2018 through 2020 by as much as 3 percent or reduce them by as much as 3 percent depending on the payment rates in these additional data. However, if annual limits to Medicare payment-rate reductions were not applied, collecting these additional data could increase CLFS expenditures by as much as 9 percent or reduce them by as much as 9 percent. (See fig. 3 and app. II for additional information about these estimates.) As demonstrated in figure 2, CMS did analyze how collecting certain types and amounts of data from additional laboratories would affect Medicare expenditures. However, without valid estimates of how much more data these additional laboratories were required to report and how much these data would change median payment rates, it remains unknown whether CMS’s analyses estimate the actual risk of setting Medicare payment rates that do not reflect private-payer rates from all applicable laboratories, as mandated by PAMA. CMS could have compared the data it collected with independent information on the payment rates laboratories were required to report, for example. The independent information could be estimated by auditing a random sample of laboratories or could be estimated using data from third-party vendors, if these vendors could supply relevant and reliable information. 
CMS Mitigated Challenges of Setting Accurate Medicare Payment Rates by Identifying and Excluding Inaccurate Private-Payer Data that Could Have Led to Paying More than Necessary

We found that CMS mitigated challenges to setting accurate Medicare payment rates by identifying, analyzing, and responding to potentially inaccurate private-payer data. CMS addressed potentially inaccurate private-payer data and other data that CMS determined did not meet reporting requirements. CMS removed or replaced data from four reporting entities that appeared to have or confirmed having reported revenue—which is the payment rate multiplied by the volume of tests paid at that rate—instead of payment rates. We estimated that if CMS had included these data, CLFS expenditures from 2018 through 2020 would have increased by 7 percent. CMS also removed data it determined were reported in error, including duplicate submissions and submissions with payment rates of $0.00. We estimated that removing these data will change CLFS expenditures from 2018 through 2020 by less than 1 percent.

CMS identified four other types of potentially inaccurate data that it determined would not significantly affect Medicare payment rates or expenditures and did not exclude them from calculations of median private-payer rates. CMS considered the following potentially inaccurate data to have met its reporting requirements:

1. data from 57 entities that reported particularly high rates in at least 60 percent of their data,
2. data from 12 entities that reported particularly low rates in at least 50 percent of their data,
3. data with payment rates that were 10 times greater than the 2017 national limitation amounts or 10 times less than these amounts, and
4. data from laboratories that may not have met the $12,500 low-expenditure threshold or that reported data from a hospital NPI instead of a laboratory NPI.

We found that each of these four types of potentially inaccurate data would have changed estimated Medicare CLFS expenditures from 2018 through 2020 by 1 percent or less if CMS had instead excluded the data. To conduct this analysis, we recalculated Medicare rates after excluding each type of data and estimated Medicare expenditures assuming 2016 rates of utilization.

CMS's Implementation of New Payment Rates Could Lead to Medicare Paying Billions More than Necessary for Some Tests

CMS's Approach to Phase In Reductions to Payment Rates Temporarily Increased Some Rates and Contributed to Estimated Increases in Medicare Expenditures for Certain Laboratory Tests

Although weighted median private-payer rates were lower than Medicare's 2017 national limitation amounts for 88 percent of tests, we estimated the total Medicare expenditures based on the 2018 CLFS would likely increase by 3 percent ($225 million overall) compared to 2016 expenditures, assuming test utilization remained at 2016 levels. This increase in estimated expenditures is due, in part, to CMS's use of above-average payment rates as a baseline to calculate payment rates for those laboratory tests affected by PAMA's annual payment-rate reduction limit of 10 percent. (See fig. 4.) When applying the 10-percent payment-rate reduction limit, CMS used as its starting point the 2017 national limitation amounts in order to set a single, national payment rate for each laboratory test. Thus, the Medicare payment rate for a test in 2018 could not be less than 90 percent of the test's 2017 national limitation amount.
However, prior to 2018, some payment rates were commonly lower than the national limitation amounts because they were based on the lesser of (1) the amount billed on claims, (2) the local fee for a geographic area, or (3) a national limitation amount, and because panel tests had different bundled payment rates. As a result, by reducing payment rates from national limitation amounts, CMS did not always reduce rates from what Medicare actually paid. Panel tests, in particular, frequently received bundled payment rates that differed substantially from national limitation amounts associated with their billing codes prior to 2018. We compared national limitation amounts, which represent maximum Medicare payment rates for tests, with the average amounts Medicare allowed for payment in 2016, which reflect actual Medicare payment rates. For example, figure 5 below shows that the 2017 national limitation amount for comprehensive metabolic panel tests ($14.49) was substantially higher than both the average amount Medicare allowed for payment in 2016 ($11.45) and the median payment rate laboratories reported receiving from private payers ($9.08). As a result, using the 2017 national limitation amount as a basis for payment reductions caused Medicare's payment rate to increase from an average allowed amount of $11.45 in 2016, to a payment rate of $13.04 in 2018, instead of decreasing towards a lower median private-payer rate of $9.08.

By increasing average payment rates rather than phasing in reductions to rates, CMS's implementation may lead to paying more than necessary for some tests. Federal standards for internal control for information and communications require agency management to use quality information to achieve its objectives. Basing reductions on national limitation amounts rather than more relevant information on how much Medicare actually paid—such as the average allowable amounts in 2016, for example—could result in Medicare paying more than necessary by $733 million from 2018 through 2020, according to our estimates.

CMS's Changes to Payment Rates for Panel Tests Could Lead Medicare to Pay Billions of Dollars More than Is Necessary

In implementing PAMA, CMS eliminated bundled rates for panel tests that lack billing codes and started paying separately for each component test instead. CMS also implemented the 2018 CLFS in a manner that could lead to unbundling payment rates for panel tests with billing codes. If payment rates for all panel tests were unbundled, we estimated that Medicare expenditures could increase by $218 million for panel tests that lack billing codes and by as much as $10.1 billion for panel tests with billing codes from 2018 through 2020. CMS also estimated that there could be significant risks of paying more than necessary associated with unbundling and has taken initial steps to monitor these risks and explore possible responses, but had not yet responded to these risks as of July 2018.

CMS Unbundled Payment Rates for Panel Tests without Billing Codes

Beginning in 2018, CMS no longer uses bundled payment rates for panel tests without billing codes and instead pays laboratories individual payments for each component test that comprises these panel tests. However, CMS staff and members of its advisory panel discussed concerns with this approach. At an advisory panel meeting in 2016, CMS staff relayed concerns from stakeholders that CMS would not be able to collect valid data on private-payer rates for these panel tests.
According to agency staff, stakeholders had informed CMS that private payers commonly use bundled payment rates for these panel tests, but laboratories would only be able to report unbundled payment rates for individual component tests. We estimated that unbundling these payment rates would increase Medicare expenditures from 2018 through 2020 by $218 million in comparison to the estimated Medicare expenditures over the same time period based on Medicare's 2016 utilization and allowable amounts. For example, under the 2016 CLFS, Medicare paid approximately 435,000 claims for panel tests that included the laboratory tests assay of creatinine (HCPCS code 82565) and assay of urea nitrogen (HCPCS code 84520) at an average bundled payment rate of $6.82. In contrast, under the 2018 CLFS, these two component tests are reimbursed individually at $6.33 and $4.88, respectively, or $11.21 combined—a 63 percent increase.

Despite concerns about the validity of available private-payer data on component tests for panel tests without billing codes, CMS used these data to set payment rates for component tests. CMS officials told us that they stopped using bundled payment rates for these panel tests because it is not clear that CMS has the authority to combine the individual component tests into groups for bundled payment as it did before 2018, due to PAMA's reference to payments for each test. However, in July 2018, CMS officials told us the agency was reviewing its authority regarding this issue. CMS officials told us they were exploring alternative approaches that could limit increases to Medicare expenditures but had not yet determined what additional legal authority would be needed, if any, and did not know when CMS would make this determination.

Agency officials told us that CMS has taken initial steps to monitor unbundling and explore possible responses, including the following:

Monitoring unbundling: CMS has begun monitoring changes in panel test utilization, payment rates, and expenditures associated with its implementation of PAMA, according to officials. For example, CMS officials told us that preliminary data indicated that Medicare payments for individual component tests of panel tests have increased substantially in 2018, but, as of July 2018, it was too early to draw conclusions from these data because laboratories have up to one year to submit claims for tests.

Collecting input on alternatives: In 2016, a subcommittee of an advisory panel that CMS established reviewed Medicare's use of bundled payment rates for panel tests and published different approaches for CMS to consider adopting in combination with other changes implementing PAMA.

CMS's Implementation of PAMA May Have Allowed Unbundling of Payment Rates for Panel Tests with Billing Codes

Beginning in 2018, laboratories that submit claims for any of the seven panel tests with billing codes by using the billing codes for the individual component tests now receive the payment rate for each component test, rather than the bundled rate. Prior to 2018, laboratories could submit claims for these panel tests either by using the specific codes for panel tests or by billing separately for each of the component tests, and, regardless of how laboratories submitted claims, Medicare Administrative Contractors would pay bundled payment rates based on how many of the 23 component tests were conducted.
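The per-claim arithmetic of unbundling can be illustrated with the creatinine and urea nitrogen example above. This is a minimal sketch: the rates are the rounded figures cited in this report, so the computed increase (about 64 percent) differs slightly from the 63 percent cited, which likely reflects unrounded rates; the claim volume is the approximate 2016 figure given above.

```python
# Sketch of the per-claim and aggregate effect of unbundling a panel test,
# using the creatinine/urea nitrogen example above. Figures are the rounded
# rates cited in this report, so the computed percentage (~64 percent)
# differs slightly from the 63 percent cited, likely due to rounding.

bundled_rate = 6.82                     # 2016 average bundled payment
component_rates = [6.33, 4.88]          # 2018 CLFS rates, paid individually
claims = 435_000                        # approximate 2016 claim volume

unbundled_rate = sum(component_rates)                       # 11.21
increase = (unbundled_rate - bundled_rate) / bundled_rate   # ~0.64

print(f"per claim: ${bundled_rate:.2f} -> ${unbundled_rate:.2f} "
      f"({increase:.0%} increase)")
print(f"added cost at 2016 volume: "
      f"${claims * (unbundled_rate - bundled_rate):,.0f}")
```

Scaled across all affected panel tests, this per-claim arithmetic underlies the estimates in this section of $218 million in added expenditures for panel tests without billing codes and up to $10.1 billion for panel tests with billing codes.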
However, CMS instructed Medicare Administrative Contractors to stop bundling payment rates for tests that are billed individually on claims rather than billed on claims using codes for panel tests, beginning in 2018. CMS did so because it was not clear that CMS had the authority to combine the individual component tests into groups for bundled payment as it did before 2018, due to PAMA's reference to payments for individual tests, according to agency officials. This change could potentially have a large effect on Medicare spending. For example, if a laboratory submitted a claim individually for the 14 component tests that comprise a comprehensive metabolic panel, it would receive a payment of $81.91, a 528 percent increase from the 2018 Medicare bundled payment rate of $13.04 for this panel test. (See fig. 6.)

Improving how reductions to payment rates for panel tests are phased in could mitigate, but not completely counteract, the effect of unbundling these payment rates. For example, for the comprehensive metabolic panel test described in figure 6, basing maximum reductions on 2016 average allowable amounts would result in a 2018 Medicare bundled payment rate of $10.31 instead of $13.04 and individual payment rates for the 14 component tests that total $56.06—a 32 percent decrease from the $81.91 that Medicare would otherwise pay.

If the payment rate for each panel test with a billing code were unbundled, we estimated that Medicare expenditures for these tests from 2018 through 2020 could reach $13.5 billion, a $10.1 billion increase from the $3.3 billion we estimated Medicare would spend using the bundled payment rates in the CLFS. Similarly, prior to implementing PAMA, CMS estimated that Medicare expenditures to physician-office, independent, and other non-hospital laboratories could potentially increase by as much as $2.5 billion in 2018 alone, if it paid for the same number of panel tests with billing codes as it did in 2016 but paid for each component test individually. These estimates represent an upper limit on the increased expenditures that could occur if every laboratory stopped using panel test billing codes and instead used the billing codes for individual component tests. We do not know the extent to which laboratories will stop filing claims using panel test billing codes.

CMS officials also told us that they were aware of the risks associated with paying for the individual component tests instead of the bundled payment rate for a panel test with a billing code. However, CMS guidance, which was effective in 2018, continued to allow laboratories to use the billing codes for individual component tests rather than the billing code for the panel. CMS officials explained that this was due to PAMA's reference to payments for individual tests, similar to CMS's decision to stop paying bundled rates for panel tests without billing codes. At the time we did our work, CMS had not implemented a response to these risks but had taken some initial steps to monitor unbundling and consider alternative approaches to Medicare payment rates for these tests. HHS provided additional information on planned activities to address these risks in its written comments on a draft of this report. (See app. III.)

Conclusions

CMS collected data on private-payer rates from laboratories that were required to report these data, but not all laboratories complied with the reporting requirement, and the extent of noncompliance remains unclear.
PAMA’s provision directing CMS to phase in payment-rate reductions to Medicare payment rates likely moderates the potential adverse effects of incomplete private-payer data. However, in the future, failing to collect complete data could substantially affect Medicare payment rates because private-payer rates alone will determine Medicare payment rates. In addition, we estimated that Medicare expenditures on laboratory tests will be $733 million higher from 2018 through 2020, because CMS started phasing in payment-rate reductions from national limitation amounts instead of more relevant data on actual payment rates, such as average allowable amounts. Finally, changes to payment rates, billing practices, and testing practices could increase Medicare expenditures by as much as $10.3 billion from 2018 through 2020, if CMS does not address the risks associated with unbundling payment rates for panel tests. Agency officials indicated that it was unclear if PAMA limited CMS’s ability to combine individual component tests into groups for bundled payment, and, as of July 2018, CMS was reviewing this matter but did not know when it would make a determination. Recommendations for Executive Action We are making the following three recommendations to CMS: The Administrator of CMS should take steps to collect all of the data from all laboratories that are required to report. If only partial data can be collected, CMS should estimate how incomplete data would affect Medicare payment rates and address any significant challenges to setting accurate Medicare rates. (Recommendation 1) The Administrator of CMS should phase in payment-rate reductions that start from the actual payment rates Medicare paid prior to 2018 rather than the national limitation amounts. CMS should revise these rates as soon as practicable to prevent paying more than necessary. (Recommendation 2) The Administrator of CMS should use bundled rates for panel tests, consistent with its practice prior to 2018, rather than paying for them individually; if necessary, the Administrator of CMS should seek legislative authority to do so. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to HHS for review and comment. HHs provided written comments, which are reproduced in appendix III. HHS also provided technical comments, which we incorporated as appropriate. HHS concurred with our first recommendation to take steps to collect all data from laboratories required to report and commented that it is evaluating ways to increase reporting. In particular, in a November 2018 final rule, HHS changed the definition of an applicable laboratory, which it expects will increase the number of laboratories required to report data on private-payer rates to the agency. HHS neither agreed nor disagreed with our second recommendation to phase in payment-rate reductions that start from the actual payment rates Medicare paid prior to 2018. HHS noted that any changes to the phasing in of payment-rate reductions would need to be implemented through rulemaking. We estimated that by using the national limitation amounts as a starting point for these reductions, Medicare expenditures would increase by $733 million from 2018 through 2020. For this reason, we continue to believe CMS should revise these rates as soon as practicable and through whatever mechanism CMS determines appropriate. HHS neither agreed nor disagreed with our third recommendation to use bundled rates for panel tests. 
However, HHS commented that it is taking steps to address this issue. More specifically, for panel tests with billing codes, HHS is working to implement an automated process to identify claims for panel tests that should receive bundled payments, similar to the process used to bundle payment rates for these panel tests prior to PAMA's implementation, and anticipates implementing this change by the summer of 2019. In addition, HHS posted guidance on November 14, 2018, stating that, for panel tests with billing codes, laboratories should submit claims using the corresponding panel code rather than the codes for the separate component tests, beginning in 2019. To reduce the potential of paying more than necessary, we believe it is important that CMS implement its proposed automated process to allow for these payments as soon as possible. In contrast, for panel tests without billing codes, HHS commented that it is continuing to review its authority and considering other approaches to payment for these panel tests, such as adding codes to the CLFS. We estimate that unbundling the payment for these panel tests could increase Medicare expenditures by $218 million from 2018 through 2020 compared to expenditures based on Medicare's 2016 utilization, and the actual amount could be higher if utilization increases. For this reason, we believe CMS should implement bundled payment rates for these panel tests to avoid excess payments.

We are sending copies of this report to the appropriate congressional committees and the Administrator of CMS. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Table of Key Dates Related to Developing the New Payment Rates for the 2018 Clinical Laboratory Fee Schedule

Event
Centers for Medicare and Medicaid Services (CMS) issued the CLFS proposed rule.
CMS issued responses to frequently asked questions regarding the CLFS proposed rule.
CMS issued the CLFS final rule.
CMS issued responses to frequently asked questions regarding the CLFS final rule.
CMS held the joint Annual Laboratory Public Meeting and Medicare Advisory Panel on Clinical Diagnostic Laboratory Tests meeting.
CMS issued laboratory billing codes subject to data collection and reporting.
CMS issued guidance to laboratories for collecting and reporting data.
CMS held a Medicare Advisory Panel on Clinical Diagnostic Laboratory Tests meeting.
CMS issued the CLFS data reporting template.
CMS collected data on (1) the billing code associated with a laboratory test; (2) the private-payer rate for each laboratory test for which final payment was made during the data collection period (i.e., January 1, 2016, through June 30, 2016); and (3) the volume of tests performed for each billing code at that private-payer rate.
CMS issued additional guidance for laboratories as the data collection period began.
CMS issued the CLFS fee-for-service data collection user's manual.
CMS issued revised guidance to laboratories for collecting and reporting data.
CMS held a Medicare Advisory Panel on Clinical Diagnostic Laboratory Tests meeting.
CMS released the proposed CLFS rates.
CMS held a Medicare Advisory Panel on Clinical Diagnostic Laboratory Tests meeting.
Deadline for stakeholders to submit comments on the proposed CLFS rates to CMS.
CMS issued the final CLFS rates.
New CLFS rates became effective.

Appendix II: Estimated Effects on Medicare Expenditures from Collecting Additional Data

Table 4 below demonstrates the challenges the Centers for Medicare & Medicaid Services (CMS) faces in setting accurate Medicare payment rates to the extent it does not collect complete data from laboratories on private-payer rates. Specifically, the table shows the potential effect that collecting additional data for each laboratory test could have on Medicare expenditures and how this effect could vary depending on (1) the amount of additional data collected, (2) payment rates in the additional data, and (3) limits to annual reductions in Medicare payment rates. These limits are in place from 2018 through 2023 to phase in changes to payment rates.

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Martin T. Gahart, Assistant Director; Gay Hee Lee, Analyst-in-Charge; Kaitlin Farquharson, Sandra George, Dan Lee, Elizabeth T. Morrison, Laurie Pachter, Vikki Porter, and Russell Voth made key contributions to this report.
Why GAO Did This Study

Medicare paid $7.1 billion for 433 million laboratory tests in 2017. These tests help health care providers prevent, diagnose, and treat diseases. PAMA included a provision for GAO to review CMS's implementation of new payment rates for these tests. This report addresses, among other objectives, (1) how CMS developed the new payment rates; (2) challenges CMS faced in setting accurate payment rates and what factors may have mitigated these challenges; and (3) the potential effect of the new payment rates on Medicare expenditures. GAO analyzed 2016 Medicare claims data (the most recent data available when GAO started its work and the year on which new payment rates were based) and private-payer data CMS collected. GAO also interviewed CMS and industry officials.

What GAO Found

The Centers for Medicare & Medicaid Services (CMS) within the Department of Health and Human Services (HHS) revised the Clinical Laboratory Fee Schedule (CLFS) for 2018, establishing new Medicare payment rates for laboratory services. Prior to 2018, these rates were based on historical laboratory fees and were typically higher than the rates paid by private payers. The Protecting Access to Medicare Act of 2014 (PAMA) required CMS to develop a national fee schedule for laboratory tests based on private-payer data. To revise the rates, CMS collected data on private-payer rates from approximately 2,000 laboratories and calculated median payment rates, weighted by volume. GAO found that the median private-payer rates were lower than Medicare's maximum payment rates in 2017 for 88 percent of tests. CMS is gradually phasing in reductions to Medicare payment rates, limited to 10 percent annually over a 3-year period (2018 through 2020), as outlined in PAMA.

CMS relied on laboratories to determine whether they met data reporting requirements, but agency officials told GAO that CMS did not receive data from all laboratories required to report. CMS did not estimate the amount of data it should have received from laboratories that were required to report but did not. CMS took steps to exclude inaccurate private-payer data and estimated how collecting certain types and amounts of additional private-payer data could affect Medicare expenditures. However, it is not known whether CMS's estimates reflect the actual risk of incomplete data resulting in inaccurate Medicare payment rates. GAO found that PAMA's phased-in reductions to new Medicare payment rates likely mitigated this risk of inaccurate Medicare payment rates from 2018 through 2020. However, GAO found that collecting incomplete data could have a larger effect on the accuracy of Medicare payment rates in future years, when PAMA allows for greater payment-rate reductions.

CMS's implementation of the new payment rates could lead Medicare to pay billions of dollars more than is necessary and result in CLFS expenditures increasing from what Medicare paid prior to 2018 for two reasons. First, CMS used the maximum Medicare payment rates in 2017 as a baseline to start the phase in of payment-rate reductions instead of using actual Medicare payment rates. This resulted in excess payments for some laboratory tests and, in some cases, higher payment rates than those Medicare previously paid, on average. GAO estimated that Medicare expenditures from 2018 through 2020 may be $733 million more than if CMS had phased in payment-rate reductions based on the average payment rates in 2016.
Second, CMS stopped paying a bundled payment rate for certain panel tests (groups of laboratory tests generally performed together), as was its practice prior to 2018, because CMS had not yet clarified its authority to do so under PAMA, according to officials. CMS is currently reviewing whether it has the authority to bundle payment rates for panel tests to reflect the efficiency of conducting a group of tests. GAO estimated that if the payment rate for each panel test were unbundled, Medicare expenditures could increase by as much as $10.3 billion from 2018 through 2020 compared to estimated Medicare expenditures using lower bundled payment rates for panel tests.

What GAO Recommends

GAO recommends that the Administrator of CMS (1) collect complete private-payer data from all laboratories required to report or address the estimated effects of incomplete data, (2) phase in payment-rate reductions that start from the actual payment rates rather than the maximum payment rates Medicare paid prior to 2018, and (3) use bundled rates for panel tests. HHS concurred with GAO's first recommendation, neither agreed nor disagreed with the other two, and has since issued guidance to help address the third. GAO believes CMS should fully address these recommendations to prevent Medicare from paying more than is necessary.
Background

Skin Cancer and Sunscreen

How Sunscreen Works
Most sunscreen products work by absorbing, reflecting, or scattering sunlight. Sunscreen contains chemicals that interact with the skin to protect it from ultraviolet (UV) rays. UV rays are an invisible form of radiation from the sun, tanning beds, and sunlamps that can penetrate the skin and change skin cells.

The most common kinds of skin cancer, including the deadliest kind of skin cancer (melanoma), are associated with exposure to ultraviolet (UV) light. Sunscreen is one of the most common methods of protection against UV exposure used by Americans. To lower the risk of skin cancer, the Centers for Disease Control and Prevention and FDA recommend that consumers use broad spectrum sunscreens with a sun protection factor (SPF) of 15 or more as directed and in conjunction with other sun-protective measures, such as seeking shade and wearing protective clothing, hats, and sunglasses. Current recommendations also state that sunscreen should be reapplied every 2 hours and after swimming, sweating, and toweling off. When used incorrectly, sunscreen may provide a false sense of protection, which can ultimately lead to increased UV exposure.

FDA Regulation of Sunscreens and Other OTC Drugs

Because sunscreens are intended to help prevent sunburn and, in some cases, decrease the risks of skin cancer and early skin aging caused by the sun, these products are considered drugs under the Federal Food, Drug, and Cosmetic Act. Sunscreens are regulated as OTC (i.e., nonprescription) drugs, which are drugs considered to be safe for use by consumers without the intervention of a health care professional, such as a physician.

Broad Spectrum Sunscreen and Sun Protection Factor (SPF)
There are two types of ultraviolet (UV) radiation from which one needs protection—UVA and UVB. UVA radiation penetrates the skin more deeply and can cause skin cancer and other skin damage. UVB radiation can cause sunburn and result in skin damage. Broad spectrum sunscreens provide protection against both UVA and UVB rays. Products labeled as "broad spectrum" have been tested for both UVA and UVB protection. Sunscreens are made in a wide range of SPFs. The SPF value indicates the level of sunburn protection provided by the sunscreen product. Higher SPF values (up to 50) provide greater sunburn protection. Because SPF values are determined from a test that measures protection against sunburn, SPF values primarily indicate a sunscreen's UVB protection.

Most OTC drugs, including nearly all sunscreen products, are marketed in the United States by following the OTC monograph process. An OTC monograph is a regulation that specifies the active ingredients that may be used to treat certain diseases or conditions without a prescription, and the appropriate dose and labeling for use, among other things. OTC drugs that meet a monograph's requirements may be marketed without FDA's prior approval, assuming compliance with all other applicable regulations. FDA regulations designate categories of OTC drugs, including antacids, cough and cold products, and sunscreens, to be covered by OTC monographs. OTC drug products that do not fit under an existing monograph must be approved under an NDA to be marketed, which is an application also used for new prescription drugs. See table 1 for a summary of the differences between marketing an OTC drug product, such as a sunscreen product, under the OTC monograph process compared to under an NDA.
According to FDA officials, more than 100,000 OTC drugs are marketed under the OTC monograph process, and about 400 are approved to be marketed under NDAs. The sunscreen monograph currently includes 16 active ingredients. The last active ingredients (avobenzone and zinc oxide) were added to the sunscreen monograph in the late 1990s. FDA issued a final sunscreen OTC monograph in 1999; before it could go into effect, however, FDA stayed its effective date indefinitely, because the agency had not yet established UVA/broad spectrum testing and labeling requirements for sunscreen products. To date, the sunscreen monograph is not in effect. While the sunscreen monograph's effective date is stayed, FDA has indicated that it will not take enforcement action against the marketing of sunscreens using the 16 active ingredients included in the stayed final monograph or some combination thereof, provided the products are marketed in compliance with other applicable regulations and consistent with FDA's 2011 draft guidance.

TEA Process

In 2002, FDA created a two-part process, referred to as the TEA process, by which an active ingredient that was not included in OTC drugs marketed in the United States prior to the beginning of the monograph process in the 1970s can be considered for marketing under the OTC monograph process by receiving a GRASE determination.

Part 1: Eligibility determination. To be eligible for review under the TEA process, the sponsor must submit an application showing that the active ingredient has been marketed in OTC drugs for a material time and to a material extent, as shown by, for example,
a minimum of 5 continuous years in the same country, or multiple countries outside the United States, or in an OTC product with an approved NDA in the United States; and
a sufficient quantity as measured by the total number of dosage units or weight of active ingredient sold, and in a population reasonably extrapolated to the population of the United States.
For ingredients found to meet the eligibility requirements, FDA publicly posts this determination in the Federal Register and requests safety and effectiveness data to be submitted for the agency's review.

Part 2: GRASE determination. FDA reviews the safety and effectiveness data submitted by sponsors and other interested parties to determine whether the ingredient is generally recognized as safe and effective for OTC use. Standards for GRASE determinations are established in FDA regulations. General recognition is based upon published studies, which may be corroborated by unpublished studies and other data. Safety means a low incidence of adverse reactions or significant side effects under adequate directions for use and warnings against unsafe use, as well as low potential for harm, which may result from abuse that can occur when the drug is widely available. Effectiveness means a reasonable expectation that, in a significant proportion of the target population, the pharmacological effect of the drug, when used under adequate directions for use and warnings against unsafe use, will provide clinically significant relief of the type claimed. Based on its review, FDA may initially determine that the active ingredient is GRASE or not GRASE for OTC use; a not GRASE determination could result from FDA's determination that the safety and effectiveness data submitted are insufficient. FDA issues its initial GRASE determination in the Federal Register and provides a period of time for public comments.
The agency then reviews any comments received and issues its final GRASE determination in the Federal Register.

SIA altered the process FDA is required to use for its review of sunscreen active ingredients and established time frames for the agency's review. It also established a process for convening the agency's Nonprescription Drugs Advisory Committee to review and provide recommendations regarding sunscreen applications at certain points in the process, and created a mechanism for sponsors to request FDA's Office of the Commissioner to review sunscreen applications. At the time SIA was enacted in November 2014, FDA had received TEAs for eight sunscreen active ingredients. For all eight of these ingredients, FDA had deemed the applications eligible for review under the TEA process (that is, the sponsors demonstrated that the ingredients had been marketed for a material time and to a material extent), and the agency had requested data to demonstrate safety and effectiveness.

FDA Implemented SIA Requirements for Reviewing Applications for Sunscreen Active Ingredients within Mandated Time Frames

FDA implemented requirements for reviewing applications for sunscreen active ingredients within the time frames required by SIA. For example, by November 2016, FDA issued final guidance for applications for sunscreen active ingredients, such as guidance on safety and effectiveness testing standards and on convening the Nonprescription Drugs Advisory Committee to discuss sunscreen active ingredients. In May 2016, FDA also issued its first required report to Congress on specific performance metrics, such as the number of sunscreen applications with pending GRASE determinations. In addition to requiring FDA to issue two additional reports to Congress in 2018 and 2020, SIA requires FDA to finalize the sunscreen monograph by November 26, 2019. See table 2 for the status of FDA's implementation of SIA requirements and corresponding time frames.

FDA also implemented changes to the process for reviewing sunscreen applications as required by SIA.

Administrative orders. SIA changed the process for issuing initial and final GRASE determinations for sunscreen applications to administrative orders. FDA officials stated that this approach is more efficient than rulemaking. Agency officials noted that administrative orders are not subject to multiple-stage rulemaking procedures, and generally undergo fewer levels of review outside of FDA.

Time frames. SIA established time frames for each step in the review process for sunscreen applications. For example, the agency is required to determine whether a new application for a sunscreen active ingredient is eligible for review and notify the sponsor within 60 days of receipt by the agency. These time frames only include FDA's review, and do not include the time for the sponsor or other interested parties to prepare and submit safety and effectiveness data, or respond to additional FDA requests.

Filing determination. SIA added a step, known as a filing determination, in which FDA reviews the safety and effectiveness data to determine whether it is sufficiently complete for the agency to begin its more substantive review to determine whether an active ingredient is GRASE. If FDA determines that the data are sufficiently complete to determine whether the active ingredient is GRASE, the agency will file the application and further analyze the data.
If FDA determines that the data are not sufficiently complete, the agency can refuse-to-file the application, which involves notifying the sponsor and providing reasons for the refusal. Sponsors can protest FDA's decision to refuse-to-file the application, known as "file over protest," in which case FDA will proceed with its more substantial review to determine if the active ingredient is GRASE.

Office of the Commissioner review. SIA established a mechanism for sponsors to request the Office of the Commissioner to issue GRASE determinations if FDA does not meet required time frames. The mechanism has not been employed to date, because, as of August 2017, FDA had met its required time frames for reviewing and initially responding to sunscreen applications.

Figure 1 illustrates the post-SIA process for FDA's review of pending and new applications for sunscreen active ingredients, including time frames.

All Eight Sunscreen Active Ingredient Applications Pending After FDA Determined More Safety and Effectiveness Data Needed; Sponsors Questioned Need for Additional Data

FDA completed its review of the safety and effectiveness data for each of the eight sunscreen active ingredient applications that it received prior to the enactment of SIA. The agency concluded that the ingredients were not GRASE because the data were insufficient and additional safety and effectiveness data are needed to determine otherwise. Sponsors questioned FDA's request for additional data and no data have been provided.

FDA's Review of Sunscreen Active Ingredient Applications Determined Additional Safety and Effectiveness Data Needed, and Most Took More than 8 Years

As of February 2015, FDA completed its review of the safety and effectiveness data—that is, the initial GRASE determination—for each of the eight sunscreen applications submitted between the creation of the TEA process in 2002 and SIA's enactment in 2014. FDA's review concluded that the eight sunscreen active ingredients were not GRASE, because the data were insufficient to make a determination, and that additional data are needed to determine otherwise. (See fig. 2.) For all eight pending sunscreen applications, FDA requested additional safety and effectiveness data to support a GRASE determination. The data FDA requested include the following:

Human clinical safety studies, including skin irritation, sensitization, and photosafety studies, as well as human pharmacokinetic tests (which measure the amount of absorption of a drug into the body). Among other studies, FDA specifically recommends that sponsors conduct a Maximal Usage Trial (MUsT), a type of human pharmacokinetic study, to support an adequate assessment of safety.

Human safety data from adverse event reports and other safety-related information from marketed products that contain the active ingredient. This includes a summary of all available reported adverse events potentially associated with the ingredient, all available documented case reports of serious side effects, any available safety information from studies of the safety and effectiveness of sunscreen products containing the ingredient in humans, and relevant medical literature describing adverse events.

Nonclinical animal studies that characterize the potential long-term dermal and systemic effects of exposure to the active ingredient.
These tests include dermal and systemic carcinogenicity studies, as well as toxicokinetic tests (to help determine the relationship between exposure in toxicology studies in animals and the corresponding exposure in humans). In most cases, FDA also recommended developmental and reproductive toxicity studies to evaluate the potential effects of the active ingredient on developing offspring. FDA's guidance states that if the ingredient is not absorbed into the body past an identified threshold, some of these studies will not be needed.

Effectiveness data from at least two SPF studies showing that the active ingredient prevents sunburn. FDA stated these studies should demonstrate protection at an SPF of 2 or higher.

FDA's 2016 guidance on safety and effectiveness data for sunscreen states that its approach for evaluating the safety of sunscreen active ingredients is based on the agency's current scientific understanding of topical products for chronic use. According to FDA, the standard for determining GRASE has remained the same over time. However, FDA reports that the increase in the amount and frequency of sunscreen usage, coupled with advances in scientific understanding and safety evaluation methods, has changed the agency's perspective on what it needs to determine if sunscreen active ingredients are GRASE. As a result, the agency stated that these additional tests, such as the MUsT, are necessary to determine whether a sunscreen active ingredient is safe for chronic use. FDA reported that the studies it is requesting are not novel and are consistent with the requirements for chronically used topical drug products approved through the NDA process.

For the eight sunscreen applications FDA received since 2002, FDA took between approximately 6 and 13 years to issue initial GRASE determinations starting from the date that the application was submitted. For six of the eight sunscreen applications, it took FDA more than 8 years to issue an initial GRASE determination. (See table 3.) Sponsors or other parties may submit safety and effectiveness data after FDA determines the application is eligible for review. From the most recent date that safety and effectiveness data were submitted for each application, the range of time for FDA to issue an initial GRASE determination was between about 4 and 11 years.

According to FDA officials, the delays in reviewing sunscreen applications can be attributed to inadequate resources to carry out the agency's OTC drug responsibilities and a lengthy multi-step rulemaking process, which the applications were subject to prior to SIA. The officials added that the delays in FDA's review of sunscreen applications are indicative of the larger issues affecting the OTC monograph process more generally. For example, though the OTC monograph process began over 40 years ago, FDA officials said that the agency has still not been able to complete many monographs, or make timely changes based on emerging safety issues and evolving science, because of the burdensome regulatory process and inadequate resources. FDA officials estimate that, as of October 2017, approximately one-third of the monographs are not yet final, and several hundred active ingredients, including those used in sunscreen products, do not have a final GRASE determination. Some stakeholders and sponsor representatives said that one effect associated with SIA was that FDA took action on the sunscreen applications that had been pending for many years.
Without the act, some of them questioned whether FDA would have reviewed the sunscreen applications or provided feedback to the sponsors. Though the agency has made an initial GRASE determination, the timing of FDA's final GRASE determination for each of the eight sunscreen active ingredients will be determined, in part, by when each ingredient's sponsor provides FDA with the additional safety and effectiveness data the agency requested.

Sponsors Questioned FDA's Request for Additional Safety and Effectiveness Data; No Additional Data Have Been Provided

Sponsor representatives and some stakeholders questioned the additional safety and effectiveness data requested by FDA, citing the following reasons:

Requested test not previously conducted on sunscreen. Some of the sponsor representatives and stakeholders we interviewed stated that they were not aware of one of the tests FDA requested, the MUsT, ever being conducted on sunscreen active ingredients. Some of these sponsor representatives and stakeholders said there is a lack of knowledge by sponsors and testing laboratories on how to conduct this test, as well as a lack of testing protocols. Further, representatives from some of the sponsors said that the thresholds set by FDA for these test results, which affect whether FDA will recommend additional testing, were unreasonably low or unrealistic. FDA officials stated that a MUsT is a fairly recent term for a pharmacokinetic test under maximum use, which is a test that has been used for dermal products since the 1990s. They added that the threshold FDA established for this test is considered by the agency to minimize risk, and that at or above this threshold, the risk for cancer may increase. According to agency officials, FDA's draft guidance on conducting a MUsT is expected to be issued in 2018.

Equal to or more rigorous than NDA testing requirements. Some of the sponsor representatives and stakeholders said that the additional safety and effectiveness data FDA requested are equal to or more rigorous than what are submitted for an NDA. In particular, a stakeholder noted that FDA requested additional safety and effectiveness testing for an application to market the ingredient under the OTC monograph process from a company that already had an approved NDA for a product containing the same active ingredient (ecamsule). FDA officials indicated that active ingredients under consideration for inclusion in an OTC monograph may require some studies to demonstrate that the ingredient is GRASE for OTC use that would not be required for approval of an individual drug product through an NDA. Specifically, FDA officials said such studies may be needed because once an ingredient is found to be GRASE it can be formulated in many ways (in accordance with the monograph) and marketed in multiple sunscreen products without further agency review. Additionally, the combination of sunscreen active ingredients with other inactive ingredients in a sunscreen spray, for example, may affect the absorption of the sunscreen active ingredient, according to FDA officials. In contrast, NDAs are product-specific and once approved, further changes to the products require FDA approval.

Raising the bar. Some of the sponsor representatives and stakeholders said that FDA's requests for additional safety and effectiveness data equate to FDA raising the bar or otherwise changing what is required to demonstrate GRASE for additional active ingredients in sunscreen.
Some stakeholders noted that sunscreen active ingredients that are currently marketed are not subject to this level of scrutiny. According to FDA officials, given the increased usage of sunscreen, coupled with increased knowledge of how drugs are absorbed into the skin, the agency has changed its perspective on what it needs to determine if sunscreen active ingredients are GRASE. FDA officials said that when the OTC monographs first started in the 1970s, it was thought that topical products would remain on the skin rather than be absorbed, but science has shown that some topical drugs, including some active ingredients used in sunscreens, are absorbed through the skin. Because of this knowledge, FDA officials said that the agency now considers potential dermal absorption for every topically applied drug.

Lack of access to some requested data. In some cases, the sponsor or another interested party submitted a study's summary results or summary information on adverse events associated with an active ingredient, but FDA requested more detailed data behind the study or detailed data on adverse events. However, some sponsor representatives and stakeholders said that the sponsor may not have access to this level of detail if it had not conducted the study itself or received the associated adverse event reports. For example, if the sponsor is the company that manufactures the active ingredient, it would not necessarily have access to adverse event reports for specific sunscreen products, because these reports would instead be submitted to the company that manufactures the actual sunscreen product used by consumers. One stakeholder also questioned why FDA has not attempted to obtain relevant adverse event data directly from regulatory agencies in other countries. FDA officials said that the agency does not generally have access to adverse event reports from foreign regulatory agencies, and that the agency relies on sponsors to provide adequate information to support a GRASE determination.

Some stakeholders supported FDA's request that sponsors provide additional safety and effectiveness data to determine if an active ingredient is GRASE for use in sunscreens. In particular, some of the stakeholders we interviewed stated that FDA is justified in requesting additional safety and effectiveness data from the sponsors given that science has evolved and the recommended use of sunscreen has changed over time.

As of October 2017, FDA officials said that the agency has not received any of the additional safety and effectiveness data requested for the eight sunscreen active ingredients seeking a GRASE determination. According to sponsor representatives we spoke with, the sponsors are either still considering whether to conduct the additional tests FDA requested or they do not plan to do so. The reasons cited by the sponsor representatives and stakeholders included the following:

Return on investment. Sponsor representatives said the testing FDA requested is extensive, would cost millions of dollars, or take several years to conduct. Some of the stakeholders said the profit margins for these types of products can be low, and other stakeholders and sponsors said that once an active ingredient is determined to be GRASE and added to the OTC monograph, then anyone can market products using that active ingredient, as there is no period of market exclusivity granted to sponsors.
Additionally, some stakeholders and sponsors added that the sponsors are reluctant to spend money on additional testing, because many of these sunscreen active ingredients have been on the market in other countries for many years. Instead, according to one sponsor representative, sponsors may choose to devote their resources to developing a newer generation of sunscreen active ingredients.

Alternatives not accepted. Some sponsor representatives and stakeholders said that when alternative testing methods were proposed to FDA in place of the MUsT and other tests recommended by the agency, FDA rejected the alternatives. Further, when a sponsor asked the agency whether an ingredient's marketing experience in other countries could be used to waive some of the carcinogenicity studies requested by FDA, the agency said that marketing experience can guide the design of studies but is not sufficient to appropriately assess carcinogenicity. The main purpose of carcinogenicity studies, according to FDA, is to detect the potential for cancer risks associated with lifelong exposure to the active ingredient, which are difficult to detect through the adverse event data associated with marketing experience.

Animal testing. Some sponsor representatives and one stakeholder mentioned concerns about conducting tests on animals because of the effect such testing may have on a company's ability to market products worldwide. For example, European regulations prohibit cosmetics, including sunscreens, from being tested on animals, though these regulations would not prohibit such testing when it is required by other countries. Additionally, one sponsor and one stakeholder expressed concern that sunscreen manufacturers may face backlash from animal rights groups and shareholders if animal testing is conducted.

Uncertainty about whether FDA will request more tests in the future. One sponsor representative said that there is uncertainty about whether FDA may request additional studies in the future based on the outcomes of the FDA-recommended tests. According to one stakeholder, there is concern that sponsors may spend additional time and money on conducting the tests requested by FDA and the sunscreen active ingredient may still not be determined to be GRASE.

Sponsor representatives for the pending sunscreen applications and most stakeholders said that the sponsors and FDA are essentially at a standstill about adding more sunscreen active ingredients to the U.S. market through the OTC monograph process. Sponsor representatives acknowledged that they could have submitted an NDA to market a new sunscreen product instead of seeking a GRASE determination for a sunscreen active ingredient. However, some sponsor representatives and a stakeholder said that NDAs are impractical for sunscreen products because the formulations are continually changing; for example, sunscreen products may have a new fragrance based on the season. Additionally, many of the sponsors that submitted sunscreen applications manufacture the active ingredient but not the finished sunscreen products; yet it is the finished products that receive approval through the NDA process.

Though FDA stated that it needs additional resources to complete its work related to the OTC monograph process—and most stakeholders agree—additional resources alone will not lead to additional sunscreen active ingredients on the U.S. market.
Movement on sunscreen active ingredients will also depend on sponsors and other interested parties submitting data that FDA determines are sufficient for a GRASE determination. Some stakeholders said that they agree with FDA on the need for testing to ensure the safety and effectiveness of sunscreen ingredients, but some of them said the agency should also consider the potential benefit of preventing skin cancer if new ingredients—which could offer better protection against UVA rays—become available for the U.S. market.

Agency Comments

We provided a draft of this report to the Department of Health and Human Services for review and comment. The department provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Steps the FDA Has Taken To Review Applications for Non-Sunscreen Active Ingredients

To examine the steps the Food and Drug Administration (FDA) has taken to review time and extent applications (TEA) for non-sunscreen active ingredients, we reviewed the Sunscreen Innovation Act (SIA), applicable FDA regulations and guidance, and other relevant documentation associated with the non-sunscreen TEAs. We also interviewed FDA officials and representatives of the sponsors associated with the six non-sunscreen TEAs submitted prior to the enactment of SIA in 2014.

SIA Requirements and FDA Implementation

SIA included requirements related to FDA's review of non-sunscreen TEAs. Specifically, SIA required FDA to (1) provide sponsors of certain non-sunscreen TEAs submitted prior to the enactment of SIA, upon request, with the opportunity to select from among different options for FDA's review (called a review framework), including corresponding time frames; (2) issue regulations establishing time frames for reviewing non-sunscreen TEAs submitted after SIA was enacted, as well as metrics for tracking the extent to which the time frames are met; and (3) submit a letter to Congress that includes a report on the status of FDA's review of non-sunscreen TEAs that were pending before SIA's enactment.

FDA implemented these requirements associated with non-sunscreen TEAs by November 2016. For example, FDA provided each sponsor that requested review framework options with the ability to select the process and corresponding time frames to be applied to its pending TEA. The review framework options included FDA using an administrative order or rulemaking process, with or without a filing determination. The time frames FDA established to initially respond to the pending non-sunscreen TEAs ranged from 90 days (when an option with a filing determination is selected) to 3.5 years (when an option without a filing determination is selected) from the date the sponsor selected a review framework.
For example, when a sponsor chooses to receive a filing determination with the administrative order process, FDA is to determine within 90 days whether the safety and effectiveness data provided by the sponsor or other interested party are sufficiently complete for the agency to begin its substantive review and issue a filing determination. If FDA determines that the application can be filed, the agency then has 2 years after the filing date to issue a proposed order determining whether the ingredient is generally recognized as safe and effective (GRASE). When a sponsor chooses not to receive a filing determination with the rulemaking process, FDA has 3.5 years to issue a proposed rule with the GRASE determination.

Additionally, FDA issued a final rule in November 2016 outlining the process and time frames by which the agency will review and take action on new non-sunscreen TEAs submitted after the enactment of SIA, including time frames for each step in the review process. (See fig. 3.) In establishing these time frames, FDA noted that it considered the agency's public health priorities and available resources, as required by SIA, and accounted for the anticipated variations in the content, complexity, and format of submissions, as permitted by SIA. The overall time frames for FDA's review are estimated to be about 6 years from the date FDA receives a TEA to the date a final GRASE determination is issued. Specifically, the approximately 6 years consists of 180 days for an eligibility determination, 90 days for a filing determination, 1,095 days for an initial GRASE determination, and 912 days for a final GRASE determination. These time frames include only FDA's review and do not include time for the sponsor or other interested parties to submit safety and effectiveness data, respond to additional FDA requests, or request meetings with the agency before such filing.

FDA also established metrics for tracking the extent to which the agency meets the time frames set forth in the regulations and sent a letter to Congress reporting on the status of the non-sunscreen TEAs submitted prior to SIA. These metrics are included in FDA's regulation for non-sunscreen TEAs and include the number of non-sunscreen TEAs that have been submitted after SIA was enacted and the number and percentage of these TEAs to which FDA has responded within its required time frames. Agency officials said that FDA had not received any additional non-sunscreen TEAs as of August 2017 beyond the six that were submitted prior to the enactment of SIA and therefore has not publicly reported metrics for non-sunscreen TEAs. Lastly, FDA submitted a letter to Congress in May 2016 describing the status of the six non-sunscreen TEAs submitted prior to SIA, including the review framework selected by each sponsor, when applicable.

Non-Sunscreen Active Ingredient TEAs Submitted before SIA Was Enacted

As of August 2017, FDA had not issued a GRASE determination for any of the six TEAs for non-sunscreen active ingredients that were submitted before SIA was enacted, for the following reasons:

FDA refused to file the applications. Two non-sunscreen TEAs were determined by FDA to contain insufficient information to be filed for review in 2016. FDA requested that the sponsors for these applications provide a detailed chemical description of the active ingredients, assessments of carcinogenicity, and safety and efficacy data, among other things.
Representatives of sponsors for both ingredients said they do not plan to conduct the additional tests that FDA requested because of concerns about return on investment. According to FDA officials, the sponsors of these applications did not elect to "file over protest."

Sponsors withdrew their applications. Three non-sunscreen TEAs were withdrawn in 2016. Representatives of the sponsors of these three applications said the companies did so because of increased regulatory scrutiny of the active ingredient and the additional safety and effectiveness data requested by FDA.

TEA is still pending FDA's initial GRASE determination. One non-sunscreen TEA that was submitted in 2004 to add an anti-dandruff ingredient to the over-the-counter monograph was pending FDA review as of August 2017. The sponsor for this application did not request to select a review framework from the agency, so the application is subject to the regulations that FDA issued in November 2016. In accordance with the time frames established in the regulations, FDA officials expect to issue a proposed rule with a GRASE determination for this TEA in 2019—within 1,095 days (3 years) of when the regulation was finalized. This date is nearly 15 years after the application was originally submitted.

For the two non-sunscreen TEAs for which FDA refused to file the applications, FDA's determinations came about 8 and 13 years after the TEAs were originally submitted. Sponsors that withdrew the three non-sunscreen TEAs did so 11 or more years after submitting their applications. (See table 4.)

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Kim Yamane (Assistant Director), Rebecca Hendrickson (Analyst-in-Charge), Kristin Ekelund, and Toni Harrison made key contributions to this report. Also contributing were George Bogart, Karen Howard, Drew Long, and Vikki Porter.
Why GAO Did This Study

Using sunscreen as directed with other sun protective measures may help reduce the risk of skin cancer—the most common form of cancer in the United States. In the United States, sunscreen is considered an over-the-counter drug, which is a drug available to consumers without a prescription. Some sunscreen active ingredients not currently marketed in the United States have been available in products in other countries for more than a decade. Companies that manufacture some of these ingredients have sought to market them in the United States by applying to add the ingredients to the sunscreen monograph, which lists ingredients that can be used in sunscreens without FDA's premarket approval. FDA reviews the applications and corresponding safety and effectiveness data for the ingredients.

The Sunscreen Innovation Act includes a provision for GAO to examine FDA's implementation of the act. This report examines (1) the extent to which FDA implemented requirements for reviewing applications for sunscreen active ingredients within mandated time frames, and (2) the status of the sunscreen applications. GAO reviewed FDA regulations and guidance documents, Federal Register notices, and FDA and sponsor documents for all eight sunscreen applications. GAO also interviewed FDA officials; sponsors of sunscreen applications; and stakeholders with interests in sunscreen, including health care providers, researchers, and industry groups. Stakeholders were selected based on knowledge of the monograph process and sunscreen active ingredients. The perspectives of these stakeholders are not generalizable.

What GAO Found

The Food and Drug Administration (FDA), within the Department of Health and Human Services, implemented requirements for reviewing applications for sunscreen active ingredients within time frames set by the Sunscreen Innovation Act, which was enacted in November 2014. For example, the agency issued a guidance document on safety and effectiveness testing in November 2016. As of August 2017, all applications for sunscreen active ingredients remain pending after the agency determined more safety and effectiveness data are needed. By February 2015, FDA completed its initial review of the safety and effectiveness data for each of the eight pending applications, as required by the act. FDA concluded that additional data are needed to determine that the ingredients are generally recognized as safe and effective (GRASE), which is needed so that products using the ingredients can subsequently be marketed in the United States without FDA's premarket approval. To make a GRASE determination, FDA requested that the application sponsors provide additional data, including human clinical studies, animal studies, and efficacy studies.

Sponsors of some of the sunscreen applications and some stakeholders GAO interviewed questioned FDA's requests, stating, for example, that the agency's recommended absorption test has never been conducted on sunscreen ingredients and there is a lack of knowledge on how to conduct it. At the same time, other stakeholders support the additional testing FDA requested. FDA reports that the increase in the amount and frequency of sunscreen usage, coupled with advances in scientific understanding and safety evaluation methods, has informed the agency's perspective that it needs additional data to determine that sunscreen active ingredients are GRASE.
However, none of the sponsors reported current plans to provide the requested information—that is, they are either still considering whether to conduct the additional tests or they do not plan to do so. They cited the following reasons:

Return on investment. The testing FDA requested is extensive, would cost millions of dollars, or take several years to conduct, according to sponsor representatives. Some stakeholders and sponsor representatives said that sponsors are currently working to develop newer sunscreen ingredients and are therefore reluctant to invest in the testing FDA requested for the older ingredients covered by the pending applications.

Alternatives not accepted. Some sponsor representatives and stakeholders said that when they proposed alternative testing methods for absorption, for example, the agency rejected the alternatives.

Animal testing. One stakeholder and some sponsor representatives reported concerns about the effect that the animal testing requested by FDA may have on companies' marketing of sunscreen products worldwide. Additionally, one stakeholder and representatives from one sponsor expressed concern that sunscreen manufacturers may face backlash from animal rights groups and shareholders if animal testing is conducted.

The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Background

As reported by the United Nations, the International Criminal Police Organization, and other organizations, wildlife trafficking networks span the globe. These organizations have attempted to measure the value of illegally traded wildlife, but available estimates are subject to uncertainty. In 2016, for example, the United Nations Environment Programme (UNEP) reported that various sources estimated the global scale of illegal wildlife trade to be from $7 billion to $23 billion annually. UNEP also estimated that the scale of wildlife crime has increased in recent years, based in part on a rise in environmental crime.

U.S. trade in wildlife and related products includes a variety of species, such as live reptiles, birds, and mammals, as well as elephant ivory, according to law enforcement reports and government and nongovernmental officials. FWS and NOAA data on wildlife products seized at U.S. ports illustrate the diversity of illegally traded plants, fish, and wildlife imported into or exported from the United States. For example, from 2007 to 2016, the top 10 types of plant, fish, and wildlife shipments seized nationally by FWS involved coral, crocodiles, conchs, deer, pythons, sea turtles, mollusks, ginseng, clams, and seahorses. During that time, FWS reported that more than one-third of the wildlife shipments it seized were confiscated while being imported from or exported to Mexico (14 percent), China (13 percent), or Canada (9 percent).

FWS and NOAA law enforcement offices are responsible for enforcing certain laws and treaties prohibiting wildlife trafficking.

FWS Office of Law Enforcement. This office enforces certain U.S. laws and regulations as well as treaties prohibiting the trafficking of terrestrial wildlife, freshwater species, and birds. Among other things, the office aims to prevent the unlawful import, export, and interstate commerce of foreign fish and wildlife, as well as to protect U.S. plants, fish, and wildlife from unlawful exploitation. As of fiscal year 2016, the office had a budget of $74.7 million and employed 205 special agents to investigate wildlife crime, including international and domestic wildlife trafficking rings. Most of these special agents report to one of eight regional offices, which receive national oversight, support, training, and policy guidance from the FWS Office of Law Enforcement headquarters. The office's headquarters houses a special investigative unit focused on conducting complex, large-scale criminal investigations of wildlife traffickers. In addition, the FWS Office of Law Enforcement has deployed special agents to serve as international attachés at seven U.S. embassies. These attachés provide countertrafficking expertise to embassy staff, work with host government officials to build law enforcement capacity, and contribute directly to casework or criminal investigations of wildlife traffickers.

According to FWS data, the FWS Office of Law Enforcement opened more than 7,000 investigations on wildlife trafficking and other illegal activities in fiscal year 2016, including nearly 5,000 cases involving Endangered Species Act violations and nearly 1,500 cases involving Lacey Act violations. FWS Office of Law Enforcement investigations have disrupted wildlife trafficking operations. For example, Operation Crash—an ongoing rhino horn and elephant ivory trafficking investigation launched in 2011—has led to over 30 convictions and more than $2 million in fines.

NOAA Office of Law Enforcement. This office enforces certain U.S.
laws and regulations as well as treaties prohibiting the trafficking of marine wildlife, including fish, as well as anadromous fish. Among other things, the office aims to prevent the illegal, unregulated, and unreported harvesting and trade of fish as well as the trafficking of protected marine wildlife. As of fiscal year 2016, the office had a budget of $68.6 million and employed 77 special agents to investigate wildlife crimes within its jurisdiction. These agents report to one of five regional offices, and those offices receive national oversight, support, and policy guidance from the NOAA Office of Law Enforcement headquarters. According to NOAA data, the NOAA Office of Law Enforcement initiated more than 5,000 investigations in fiscal year 2016. About half of those investigations involved violations of the Magnuson-Stevens Fishery Conservation and Management Act, as amended, and some involved violations of the Endangered Species Act or the Lacey Act. NOAA Office of Law Enforcement investigations have disrupted wildlife trafficking operations. For example, in fiscal year 2016, a NOAA Office of Law Enforcement investigation led to the conviction of a company and five individuals for illegally trafficking whale bone carvings, walrus ivory carvings, black coral carvings, and other products derived from protected species into the United States.

The FWS and NOAA law enforcement offices collaborate with other government agencies and organizations to combat wildlife trafficking. Both agencies work with other federal, state, and tribal law enforcement officers as well as their international counterparts as needed during wildlife trafficking investigations. For example, FWS and NOAA work with U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, and the U.S. Department of Agriculture to maintain import and export controls and interdict smuggled wildlife and related products at U.S. ports of entry. In addition, FWS and NOAA collaborate with Department of Justice prosecutors on criminal cases that result from agency investigations. Both agencies also collaborate with nongovernmental organizations to combat wildlife trafficking. For example, FWS and NOAA officials said that nongovernmental organizations have, in some cases, offered financial rewards (in addition to rewards offered by FWS and NOAA) for information on a wildlife crime. In addition, some nongovernmental organizations proactively provide information to FWS and NOAA on wildlife trafficking activities in the United States or foreign countries that violate U.S. laws. For example, in 2017, a nongovernmental organization created a website to collect tips on wildlife crime and to connect the sources of those tips with relevant U.S. authorities for potential financial rewards.

FWS may pay financial rewards from moneys in two accounts.

Law Enforcement Reward Account. FWS may pay rewards under the Endangered Species Act, the Lacey Act, and the Rhinoceros and Tiger Conservation Act from moneys in the agency's Law Enforcement Reward Account. The moneys in this account come from fines, penalties, and proceeds from forfeited property for violations of these three laws. According to FWS officials, these moneys are available until expended.
These moneys can be used to (1) pay financial rewards to those who provide information that leads to an arrest, criminal conviction, civil penalty assessment, or forfeiture of property for any violation of the Endangered Species Act, the Lacey Act, or the Rhinoceros and Tiger Conservation Act or (2) provide temporary care for plants, fish, or wildlife that are the subject of a civil or criminal proceeding under those three laws. As of the beginning of fiscal year 2017, the balance of the Law Enforcement Reward Account was about $7 million.

Law Enforcement Special Funds Account. FWS may also pay rewards from moneys in its law enforcement office's Special Funds Account. The moneys in this account come from an annual line item appropriation and are available until expended. Since fiscal year 1988, this appropriation has provided FWS up to $400,000 each year to pay for information, rewards, or evidence concerning violations of laws FWS administers, as well as miscellaneous and emergency expenses of enforcement activity that the Secretary of the Interior authorized or approved.

NOAA generally pays rewards from moneys available in the Fisheries Enforcement Asset Forfeiture Fund. The moneys in this account come from fines, penalties, and proceeds from forfeited property for violations of marine resource laws that NOAA enforces, including the Magnuson-Stevens Fishery Conservation and Management Act, the Endangered Species Act, and the Lacey Act. According to NOAA officials, moneys are available until expended and can be used to pay certain enforcement-related expenses, including travel expenses, equipment purchases, and the payment of financial rewards. As of the beginning of fiscal year 2017, the Fisheries Enforcement Asset Forfeiture Fund had a balance of about $18 million.

Academic literature on the use of financial rewards to combat illegal activities and stakeholders we interviewed identified several advantages and disadvantages of using financial rewards to obtain information on wildlife trafficking. Potential advantages of using financial rewards include the following:

Providing incentives. The potential for a financial reward can motivate people with information to come forward when they otherwise might not do so.

Increasing public awareness. Financial rewards may bring greater public attention to the problem of wildlife trafficking, including federal efforts to combat wildlife trafficking.

Saving resources. Using financial rewards may save agency resources by enabling agents to get information sooner and at a lower cost than they could have through their own efforts.

Potential disadvantages of using financial rewards include the following:

Eliciting false or unproductive leads. Financial rewards may generate false or unproductive leads.

Affecting witness credibility. Financial rewards may lead to a source's credibility being challenged at trial by defense attorneys, since sources receive compensation for the information they provide.

Consuming resources. The potential for a financial reward may create a flood of tips that take agency time and resources to follow up on or corroborate.

Outside of wildlife trafficking, multiple federal agencies and federal courts are authorized to pay financial rewards for information on illegal activities under certain circumstances. For example, U.S. Customs and Border Protection—which controls, regulates, and facilitates the import and export of goods through U.S.
ports of entry—is authorized, under certain circumstances, to pay rewards for original information about violations of any laws that it enforces. The Department of State may also pay rewards under certain circumstances, including for information leading to the disruption of financial mechanisms of a transnational criminal group. Similarly, the U.S. Securities and Exchange Commission (SEC) and the Internal Revenue Service (IRS) may pay rewards for information about violations of federal securities laws and the underpayment of taxes, respectively, if certain conditions are met. Federal judges may award money to persons who give information leading to convictions for violating treaties, laws, and regulations that prohibit certain pollution from ships, including oil and garbage discharges.

Multiple Laws Authorize FWS and NOAA to Pay Rewards for Wildlife Trafficking Information, but the Agencies Reported Paying Few Rewards from Fiscal Years 2007 through 2017

FWS and NOAA officials identified multiple laws, such as the Endangered Species Act and the Lacey Act, that authorize the payment of financial rewards to people who provide information on wildlife trafficking. FWS and NOAA reported paying few financial rewards under these laws from fiscal years 2007 through 2017. However, agency officials could not provide sufficient assurance that the reward information they provided to us represented all of their reward payments for this period.

The Endangered Species Act, Lacey Act, and Other Laws Authorize the Payment of Financial Rewards

FWS and NOAA officials identified over 10 laws prohibiting wildlife trafficking—including the Endangered Species Act, Lacey Act, and Bald and Golden Eagle Protection Act—that specifically authorize the payment of financial rewards in certain circumstances to people who provide information on violations of the law (see app. II for a complete list of the laws). These laws give the agencies discretion to choose whether to pay rewards but have varying requirements for who is eligible to receive a reward and the payment amounts. For example, the Bald and Golden Eagle Protection Act caps rewards at $2,500 for information that leads to a conviction. In contrast, the Endangered Species Act does not cap reward amounts and authorizes rewards for information that leads to a conviction as well as to an arrest, civil penalty, or forfeiture of property. Table 1 identifies the laws that FWS and NOAA officials indicated they have used to pay financial rewards for information on wildlife trafficking from fiscal years 2007 through 2017, along with information on these laws' requirements for payment of rewards.

FWS and NOAA Reported Paying Few Rewards for Information on Wildlife Trafficking but Could Not Assure the Completeness of the Information

FWS and NOAA reported paying few financial rewards for information on wildlife trafficking from fiscal years 2007 through 2017, but agency officials could not provide sufficient assurance that this information was complete. Officials from both agencies said that their agencies have not prioritized the use of rewards, and they believed that the reward information they identified—such as the number, dollar amount, and year that rewards were paid—appropriately captured the few reward payments they made during this time frame.
Based on the agencies’ reviews of their records, FWS reported paying 25 rewards for a total of $184,500 from fiscal years 2007 through 2017, and NOAA reported paying 2 rewards for a total of $21,000 during that same period (see table 2). See appendix III for additional details on the cases where financial rewards were paid. FWS reported paying rewards in trafficking cases involving a variety of wildlife species, such as eagles, bears, reptiles, and mollusks, across the 11-year period. FWS officials said they generally paid rewards to thank sources who proactively provided information. For example, based on our review of a reward case, FWS paid a reward in 2010 because the source provided information that was crucial in uncovering an attempt to illegally traffic leopards into the United States from South Africa. FWS would not have known about this illegal activity if the source had not come forward with the information. In several cases we reviewed, FWS officials said that the sources did not know about the possibility of receiving a reward when they contacted the agency with information. The two rewards NOAA reported paying from fiscal years 2007 through 2017 involved the illegal trafficking of sea scallops and a green sea turtle. NOAA officials said that in both cases they paid a reward to thank the source who proactively provided information to law enforcement agents. For example, the agent who investigated the sea scallop case reported requesting the reward because the information the source proactively provided was timely, credible, and led to the criminal conviction of several individuals. FWS and NOAA officials could not provide sufficient assurance that the reward information they reported to us represented all of the rewards their agencies had paid from fiscal years 2007 through 2017, but they said the information was complete to the best of their knowledge. Specifically, FWS and NOAA officials said they track all their expenditures, including reward payments, in their financial databases. However, they are not able to readily identify reward payments because their financial systems do not include a unique identifier for such payments and their reward information is located in multiple databases and formats. As a result, FWS and NOAA officials said they identified the rewards they reported to us by manually reviewing their financial and law enforcement records. In particular, FWS officials said they reviewed their paper records to identify instances when the agency paid rewards and then retrieved additional information from their financial and law enforcement databases, such as final payment amounts. NOAA officials said they identified instances when the agency paid rewards by using a combination of paper and electronic records located at NOAA’s headquarters office. NOAA officials also contacted their regions to obtain additional information located at the regional offices to confirm information about the rewards NOAA had paid. Seventeen stakeholders we interviewed who had experience investigating wildlife trafficking or expertise in using financial rewards as a law enforcement tool said that it would be useful for FWS and NOAA to maintain comprehensive information on the rewards they paid. For example, two stakeholders said that maintaining comprehensive information and making that information available to law enforcement agents could motivate agents to make greater use of rewards as a law enforcement tool. 
Two other stakeholders said that maintaining information on and monitoring reward use would allow the agencies to make ongoing adjustments, such as adjusting payment amounts, to make the most effective use of rewards in combating wildlife trafficking.

Federal internal control standards say that management should clearly document internal control and all transactions and other significant events in a manner that allows the documentation to be readily available for examination. Control activities can be implemented in either an automated or a manual manner, but automated control activities tend to be more reliable because they are less susceptible to human error and are typically more efficient.

FWS and NOAA officials agreed that maintaining reward information so that complete information is easily retrievable may be beneficial. FWS officials said having clearly documented and readily available reward information could improve how they manage rewards and enable them to monitor and examine their use of rewards more holistically. The officials said they may analyze options for creating a single repository for reward information but did not commit to doing so. They said that creating a single repository for reward information may involve some drawbacks, such as duplicating some data entry in separate databases. Similarly, NOAA officials said having clearly documented and readily available reward information would provide agency management with easier and more consistent access to that information. As a result, they said that they are exploring modifications to their financial and law enforcement databases to better identify and track rewards. For example, NOAA officials said they may be able to create a unique identifier to flag reward payments in their financial system to enable them to identify payment amounts more easily. NOAA officials did not provide a time frame for completing modifications to their financial system.

By tracking reward information so that it is clearly documented and readily available for examination, FWS and NOAA can better ensure that they have complete information on the rewards they have paid to help manage their use of rewards as a law enforcement tool.

FWS and NOAA Have Policies for Administering Reward Payments, but FWS's Policy Does Not Specify Factors to Consider When Developing Reward Amounts

FWS and NOAA have policies to guide their law enforcement agents on the process for preparing and submitting a request to pay a financial reward. Specifically, both agencies' policies call for agents to include a description of the case, the nature of the information that the source provided, a justification for providing a reward, and an explanation of how a proposed reward amount was developed. These policies also outline the general review and approval process, how payments are to be made upon approval of a request, and eligibility criteria to receive a reward. For example, FWS and NOAA policies prohibit paying rewards to foreign government officials as well as to any person whose receipt of a reward would create a conflict of interest or the appearance of impropriety. NOAA's policy explicitly states that the NOAA Office of Law Enforcement is to use statutorily authorized rewards as a tool to obtain information from the public on resource violations and that rewards can help promote compliance with marine resource laws.
NOAA’s policy suggests that agents consider advertising reward offers to assist investigations, encourages press releases, and describes the process agents should follow to do so. Moreover, NOAA’s policy specifies factors that agents might include in their reward requests to support the proposed reward, such as (1) the benefit to the marine resources that was furthered by the information provided; (2) the risk, if any, the individual took in collecting and providing the information; (3) the probability that the investigation would have been successfully concluded without the information provided; and (4) the relationship between any fines or other collections and the information provided. FWS’s policy specifies that rewards may be provided in situations in which an individual furnishes essential information leading to an arrest, conviction, civil penalty, or forfeiture of property. However, it does not discuss the usefulness of financial rewards as a law enforcement tool or the types of circumstances when rewards should be used or advertised to the public. Further, FWS’s policy does not communicate necessary quality information internally that agents may need when deciding to request the payment of rewards. In particular, it does not specify factors for agents to consider when developing proposed reward amounts. Instead, the policy leaves it to the discretion of field and regional agents to develop proposed reward amounts within any limitations specified in law. Some FWS agents we interviewed said that they developed proposed reward amounts on a case-by-case basis and did not know whether their proposed amounts were enough, too little, or too much. In addition, some agents said that because FWS’s policy does not specify factors for agents to consider, the reward approval process is subjective and unclear and this has made it challenging for the agents to develop proposed reward amounts. For example, one agent we interviewed said he submitted a request to his supervisor to pay a $10,000 reward to a source who provided information on a major wildlife trafficker. But, for reasons unknown to the agent, his supervisor reduced the amount to $1,000. FWS headquarters officials said field agents submit reward requests to headquarters for approval, and these officials were not aware of instances of proposed reward amounts being changed or denied during the review process. Seven of the 20 stakeholders we interviewed suggested that FWS augment its reward policy to specify factors for agents to consider when developing proposed reward amounts. For example, helpful factors to consider when developing a proposed reward amount may include (1) the number of hours the source dedicated to the case, (2) the risk the source took in providing the information, (3) the significance of the information provided by the source, and (4) the amount of fines or other penalties collected as a result of the information. Two stakeholders expressed concern that some of FWS’s reward payments were insufficient, especially when comparing the amount of time and effort or the risk a source faced in providing the information. A couple of stakeholders also said that without a policy that specifies factors for agents to consider, reward amounts may be subjective and could vary depending on which agent develops the reward proposal. 
Another stakeholder said that it was important to specify factors for agents to consider when developing proposed reward amounts so that the agency has a reasonable and defensible basis for the reward amounts it pays across cases.

According to federal standards for internal control, management should internally communicate the necessary quality information to achieve an agency's objectives. For example, management communicates quality information down and across reporting lines to enable personnel to make key decisions. FWS officials said they believe that their reward policy is sound, indicating that law enforcement agents have the information they need to develop proposals for reward amounts in cases where rewards are warranted. However, they also agreed that it may be helpful to review their policy but did not commit to doing so. By augmenting its policy to specify factors for agents to consider when developing proposed reward amounts, FWS can better ensure that its agents have the necessary quality information to prepare defensible reward proposals.

FWS and NOAA Communicate Little Information to the Public on Financial Rewards

Based on our review of the agencies' websites and other communications, we found that FWS and NOAA communicate little information to the public on financial rewards for reporting information on wildlife trafficking, such as the potential availability of rewards and eligibility criteria. Specifically, some FWS and NOAA law enforcement websites provided information to the public on ways to report violations of the laws that the agencies are responsible for enforcing, such as via tip lines. Some of the websites also provided examples of the types of information the public can report, such as photos or other documentation of illegal activities. However, most of the agencies' websites did not indicate that providing information on illegal activities could result in a reward. In contrast, the FWS Alaska regional office's website provided information on the potential availability of rewards and ways the public may submit information for a potential reward. For example, this website provided phone numbers and an e-mail address for the public to use when submitting information. Figure 1 shows the information available on FWS's and NOAA's national and regional websites relevant to reporting violations of the laws the agencies enforce in general and on receiving rewards in particular.

In addition, FWS and NOAA headquarters officials said their field agents have used other means to communicate the potential availability of rewards in specific cases when the agents had no other information that could help solve those cases. For example, an FWS field official said that the agency advertised a reward offer for information on a case of bald eagle killings by distributing reward posters and posting news releases in the vicinity where the killings occurred. Similarly, NOAA officials said they have advertised reward offers through various means, including circulating reward posters in specific geographic areas after an illegal activity has occurred. Figure 2 shows a reward poster that NOAA distributed in Guam in 2017 advertising a $1,000 reward for information leading to the arrest and conviction of sea turtle poachers.
Instead of having a plan for communicating general information to the public on rewards, FWS and NOAA grant discretion to their regional offices and law enforcement agents to determine the type and level of communication to provide, according to FWS and NOAA policies. FWS officials explained that because they typically use financial rewards to thank individuals who come forward on their own accord—rather than using rewards to incentivize individuals with information to come forward—they have not seen the need to communicate more information to the public on the potential availability of rewards. NOAA officials said they have targeted their communications on rewards by publicizing reward offers for specific cases where they do not have leads. They added that they want to receive quality information and already receive a substantial amount of information from sources who reach out to them proactively, so NOAA has not seen the need to communicate more information to the public on the potential availability of rewards.

Sixteen of the 20 stakeholders we interviewed said that it would be useful for FWS and NOAA to advertise the potential availability of financial rewards. Several stakeholders said that if the public does not know about the possibility of rewards, then some people with information may not be incentivized to come forward. Two stakeholders added that agencies should carefully consider how and which reward information to communicate to the public so that people who are most likely to have information on illegal wildlife trafficking learn about the potential for rewards. For example, one stakeholder suggested advertising rewards at ports where international shipments are offloaded or placing advertisements at wildlife trafficking nodes, such as entrances to African wildlife refuges. This stakeholder suggested advertising rewards along with the wildlife trafficking awareness-raising posters that nongovernmental organizations place in some airports.

In addition, 14 stakeholders suggested that it would be useful for FWS and NOAA to provide information to the public on the process for submitting information to potentially receive rewards. Several other stakeholders said that it is important for the public to understand whether they may be eligible for a reward, how to submit information, and whether or to what extent their confidentiality will be protected. Another stakeholder provided examples of how other agencies provide information about their reward programs on their websites. SEC and IRS, for instance, use their websites to communicate information to the public on the process for reporting illegal activity for financial rewards. This information includes the types of information to report, confidentiality rules, eligibility criteria, and the process for submitting information to obtain a reward. In addition, the Department of State posts instructions on its websites on how to submit information on an illegal activity and potentially receive a reward.

Federal internal control standards say that management should externally communicate the necessary quality information to achieve an agency's objectives. For example, using appropriate methods to communicate, management communicates quality information so that external parties, such as the public, can help the agency achieve its objectives. This could include communicating information to the public on the types of information to report and the eligibility requirements for potentially receiving rewards for reporting information on wildlife trafficking.
FWS officials said that making more reward information available could lead to a significant increase in the amount of information the agency receives, which, in turn, could strain FWS's resources in following up on that information. However, FWS officials also agreed that it was reasonable to consider making more reward information available to relevant members of the public, particularly in targeted circumstances, but did not commit to doing so. Similarly, NOAA officials said they had some concerns about the additional resources it might take to investigate potentially unreliable or false tips that may result if they make reward information broadly available to the public, but they agreed that it would be reasonable for the agency to consider doing so. NOAA officials also said they may consider making more reward information publicly available at the conclusion of our audit but provided no plans for doing so.

By determining the types of additional information to communicate to the public on rewards—such as providing information on the agency's website on the potential availability of rewards—and then developing and implementing plans to do so, FWS and NOAA can improve their chances of obtaining information on wildlife trafficking activities that they otherwise might not receive.

FWS and NOAA Have Not Reviewed the Effectiveness of Their Use of Financial Rewards

FWS and NOAA have not reviewed the effectiveness of their use of financial rewards or considered whether any changes might improve the usefulness of rewards as a tool for combating wildlife trafficking. FWS officials said their agency has not reviewed or considered changes to its use of rewards because the agency has not prioritized the use of rewards. NOAA officials said their agency has not focused on using rewards or identified the need to review its use of this tool, particularly in light of other, higher mission priorities.

Nine of the 20 stakeholders we interviewed said that FWS and NOAA should review the effectiveness of their use of rewards and consider potential improvements. Several stakeholders said that it would be useful for FWS and NOAA to compare their respective approaches to those of federal agencies that use rewards in contexts outside of wildlife trafficking to identify best practices or lessons learned that might be applicable in the context of combating wildlife trafficking. For example, one stakeholder said that SEC has an effective whistleblower program and may have lessons learned that are relevant for FWS and NOAA to consider. Another stakeholder we interviewed separately indicated that in 2010, before SEC had a whistleblower program that publicized rewards and provided detailed instructions on how members of the public could report information on illegal activities, SEC received few tips. Once SEC implemented a whistleblower program that publicized rewards and provided detailed instructions on its public website, the agency's use of the program grew substantially, according to the stakeholder. Other stakeholders said it would be useful for the agencies to consider potential improvements to their use of rewards, such as making a standing reward offer for information on wildlife trafficking targeted at high-priority endangered species or particular criminal networks. Two of these stakeholders said such an offer might improve FWS's and NOAA's use of rewards by generating more tips than reward offers focused on individual cases.
At the same time, they said such an offer would likely filter out some of the false or unproductive tips that the agencies might receive if they made an untargeted standing reward offer.

Federal internal control standards state that management should design control activities to achieve objectives and respond to risks by, for example, conducting reviews at the functional or activity level by comparing actual performance to planned or expected results and analyzing significant differences. Further, under the standards, management should periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving an agency's objectives or addressing related risks.

FWS and NOAA officials agreed that reviewing the effectiveness of their use of rewards would be worthwhile. Specifically, FWS officials said that it would be useful to compare their approach to those of other federal agencies that use rewards in investigating crimes that involve interstate and foreign smuggling of goods. Similarly, NOAA officials said that reviewing the agency's use of financial rewards would be worthwhile but cautioned that such a review would need to be balanced against the agency's constrained resources and many mission requirements. FWS and NOAA officials said they may consider conducting such a review at the conclusion of our audit but provided no plans for doing so. By reviewing the effectiveness of their use of rewards, FWS and NOAA can identify opportunities to improve the usefulness of rewards as a tool for combating wildlife trafficking.

Conclusions

Wildlife trafficking is a large and growing transnational criminal activity, with global environmental, security, and economic consequences. The federal government has emphasized strengthening law enforcement efforts to combat wildlife trafficking, and using financial rewards to obtain information on illegal activities is one tool that some federal agencies have used. However, to date, FWS and NOAA have not prioritized the use of rewards and were unable to provide sufficient assurance that the 27 rewards they paid during fiscal years 2007 through 2017 represented all of the rewards they provided during that period. By tracking reward information so that it is clearly documented and readily available for examination, FWS and NOAA can better ensure that they have complete information on the rewards they have paid to help manage their use of rewards as a law enforcement tool.

Additionally, FWS and NOAA have policies outlining the processes their law enforcement agents are to use in making reward payments, and NOAA's policy specifies factors for its agents to consider in developing proposed reward amounts, such as the risk the individual took in collecting the information. FWS's policy does not specify such factors that could inform agents in achieving the agency's objectives, which is not consistent with federal internal control standards. By augmenting its policy to specify factors for its agents to consider when developing proposed reward amounts, FWS can better ensure that its agents have the necessary quality information to prepare defensible reward proposals.

Both agencies have also advertised the potential for rewards in specific cases when agents had no other information, but FWS and NOAA have otherwise communicated little information to the public on the potential availability of rewards.
If the public does not know about the possibility of rewards, then some people with information may not be incentivized to come forward. By determining the types of additional information to communicate to the public on rewards—such as providing information on the agency's website about the potential availability of rewards—and then developing and implementing plans to do so, FWS and NOAA can improve their chances of obtaining information on wildlife trafficking activities that they otherwise might not receive.

Finally, FWS and NOAA have not reviewed the effectiveness of their use of financial rewards or considered whether any changes might improve the usefulness of rewards as a law enforcement tool. By undertaking such reviews, the agencies can identify opportunities to improve the usefulness of rewards as a tool for combating wildlife trafficking.

Recommendations for Executive Action

We are making a total of seven recommendations: four to FWS and three to NOAA. Specifically:

The Assistant Director of the FWS Office of Law Enforcement should track financial reward information so that it is clearly documented and readily available for examination. (Recommendation 1)

The Director of the NOAA Office of Law Enforcement should track financial reward information so that it is clearly documented and readily available for examination. (Recommendation 2)

The Assistant Director of the FWS Office of Law Enforcement should augment FWS's financial reward policy to specify factors law enforcement agents are to consider when developing proposed reward amounts. (Recommendation 3)

The Assistant Director of the FWS Office of Law Enforcement should determine the types of additional information to communicate to the public on financial rewards and then develop and implement a plan for communicating that information. (Recommendation 4)

The Director of the NOAA Office of Law Enforcement should determine the types of additional information to communicate to the public on financial rewards and then develop and implement a plan for communicating that information. (Recommendation 5)

The Assistant Director of the FWS Office of Law Enforcement should review the effectiveness of the agency's use of financial rewards and implement any changes that the agency determines would improve the usefulness of financial rewards as a law enforcement tool. (Recommendation 6)

The Director of the NOAA Office of Law Enforcement should review the effectiveness of the agency's use of financial rewards and implement any changes that the agency determines would improve the usefulness of financial rewards as a law enforcement tool. (Recommendation 7)

Agency Comments

We provided a draft of this report for review and comment to the Departments of Commerce and the Interior. The departments transmitted written comments, which are reproduced in appendixes IV and V of this report. The Department of Commerce concurred with the three recommendations directed to NOAA and stated that NOAA is developing procedures to ensure that its rewards are closely tracked, clearly documented, and better communicated. In its written comments, NOAA stated that the report fairly and thoroughly reviews NOAA's use of financial rewards.
NOAA outlined the steps it plans to take in response to our recommendations, including developing a procedure to track financial reward information, reviewing information currently disseminated to the public and evaluating whether additional information may be useful, and reviewing the agency’s reward policy to determine whether changes are needed to enhance reward effectiveness. In its written comments, the Department of the Interior concurred with the four recommendations directed to FWS. Interior stated that it appreciated our review of the challenges faced by FWS’s Office of Law Enforcement in combating wildlife trafficking and identifying areas where FWS and NOAA can improve the use of financial rewards as a tool for combating wildlife trafficking. Interior also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Commerce and the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology The objectives of our review were to (1) identify laws that authorize the U.S. Fish and Wildlife Service (FWS) and the National Oceanic and Atmospheric Administration (NOAA) to pay financial rewards for information on wildlife trafficking and the extent to which these agencies paid such rewards from fiscal years 2007 through 2017, (2) evaluate FWS’s and NOAA’s policies on financial rewards, (3) evaluate the information available to the public on financial rewards, and (4) determine the extent to which FWS and NOAA reviewed the effectiveness of their use of financial rewards in combating wildlife trafficking. To address these objectives, we reviewed academic literature on the use of financial rewards to combat illegal activities and United Nations Environment Programme reports on the scope and scale of wildlife trafficking. We also interviewed officials from federal agencies that play a role in combating wildlife trafficking or manage programs that pay financial rewards for information on illegal activities. Specifically, we interviewed officials from the Departments of Agriculture, Commerce, Homeland Security, the Interior, Justice, and State, as well as officials from the Internal Revenue Service, the U.S. Securities and Exchange Commission, and the U.S. Agency for International Development. In addition, we reviewed documentation that the Department of the Treasury provided on its role in paying financial rewards. We did not compare FWS’s and NOAA’s use of financial rewards in combating wildlife trafficking to federal agencies’ use of financial rewards in other contexts because the different contexts are not directly comparable. However, we reviewed information on other federal agencies’ use of financial rewards as examples of how financial rewards are used in contexts outside of wildlife trafficking. 
In addition, we interviewed representatives of six nongovernmental organizations that we selected based on those organizations’ knowledge or experience in combating wildlife trafficking. Specifically, we interviewed representatives from the Elephant Action League, the Environmental Investigation Agency, the National Association of Conservation Law Enforcement Chiefs, the National Whistleblower Center, TRAFFIC, and the World Wildlife Fund. To identify laws that authorize FWS and NOAA to pay financial rewards for information on wildlife trafficking, we asked FWS and NOAA attorneys to compile a list of laws that each of their agencies implements or enforces that prohibit wildlife trafficking and authorize the agency to pay rewards for providing information about trafficking. We then compared that list to the results of our search of the United States Code for such laws. We also reviewed FWS and NOAA documentation for accounts where the fines, penalties, and proceeds from forfeited property that are used to pay rewards are deposited as well as the accounts where appropriations available to pay rewards were deposited. To identify the extent to which FWS and NOAA have paid financial rewards for information on wildlife trafficking, we analyzed FWS and NOAA data on financial rewards the agencies reported paying from fiscal years 2007 through 2017. The data included information on, among other things, the fiscal years in which rewards were paid, laws under which rewards were paid, types of wildlife involved in those cases, the amounts of civil penalties or criminal fines imposed in those cases, the numbers of arrests and convictions as a result of those cases, and whether reward recipients were individuals or groups and U.S. or foreign citizens. To assess the reliability of the data FWS and NOAA provided on financial rewards, we interviewed agency officials knowledgeable about the data and compared the data to case records the agencies provided. Specifically, FWS and NOAA officials said they track all expenditures, including reward payments, in their financial databases, but they are not able to readily identify reward payments because their financial systems do not include a unique identifier for such payments and their reward information is located in multiple databases and formats. As a result, FWS and NOAA officials said they identified the rewards that they reported to us by manually reviewing their financial and law enforcement records, and officials said the information was complete to the best of their knowledge. Based on these steps, we found the data that the agencies provided to us to be sufficiently reliable for reporting information on the rewards the agencies reported paying. However, as we discuss in the report, FWS and NOAA officials could not provide sufficient assurance that the data included all the financial rewards that they had paid from fiscal years 2007 through 2017. To obtain additional detail about cases where financial rewards were paid, we reviewed a nongeneralizable sample of 10 wildlife trafficking cases. We selected these cases based on the agency that investigated the case (to include both FWS and NOAA cases), the amount of the reward paid in the case (to reflect both low and high amounts), the year in which the reward was paid (to include rewards paid more recently), and the type of wildlife trafficked in the case (to include both fish and wildlife cases—there were no plant trafficking cases to select). 
While the findings from our review cannot be generalized to cases we did not select and review, they illustrate how FWS and NOAA have used financial rewards in wildlife trafficking cases. To evaluate FWS and NOAA policies on financial rewards, we reviewed relevant FWS and NOAA policies and compared them to each other; interviewed FWS and NOAA officials about those policies; and compared the information in the policies with federal internal control standards on information and communication. To evaluate information available to the public on rewards, we reviewed relevant FWS and NOAA publications and examples of communications to the public on the availability of rewards in specific cases and interviewed FWS and NOAA officials. We also reviewed information available on FWS’s and NOAA’s national and regional websites as of December 2017 and January 2018, respectively, relevant to reporting violations of the laws that the agencies enforce in general and on receiving rewards in particular. We compared the agencies’ public communications on rewards with federal internal control standards on information and communication. To evaluate the extent to which FWS and NOAA reviewed the effectiveness of their use of financial rewards in combating wildlife trafficking, we interviewed FWS and NOAA officials and requested any reviews the agencies had conducted regarding their use of financial rewards to compare with federal internal control standards on control activities. FWS and NOAA did not have any such reviews to provide. In addition, for all four objectives, we interviewed a nongeneralizable sample of 20 stakeholders who had experience investigating wildlife trafficking or expertise in the use of financial rewards as a law enforcement tool. To select stakeholders to interview, we first identified a list of stakeholders by reviewing (1) FWS and NOAA data on law enforcement agents with at least 5 years of experience who had investigated wildlife trafficking cases and used financial rewards, (2) Department of Justice data on federal prosecutors who had prosecuted wildlife trafficking cases since fiscal year 2014, (3) literature search results identifying academics with expertise in the use of financial rewards as a law enforcement tool and federal programs that use financial rewards to combat illegal activities in contexts outside of wildlife trafficking, (4) the biographies of members of the federal Advisory Council on Wildlife Trafficking, and (5) recommendations from stakeholders we interviewed. From this list, we then used a multistep process to select the 20 stakeholders to interview. To ensure coverage and a range of perspectives, we selected stakeholders from the following groups: FWS and NOAA law enforcement agents, including field agents; federal prosecutors responsible for prosecuting wildlife trafficking cases; federal officials responsible for programs that use financial rewards to combat illegal activities in contexts outside of wildlife trafficking; academics with expertise in the use of financial rewards as a law enforcement tool; members of the federal Advisory Council on Wildlife Trafficking; and representatives of nongovernmental organizations that investigate wildlife trafficking. We conducted semistructured interviews with the 20 selected stakeholders using a standard set of questions.
We asked questions about stakeholder views on the usefulness of financial rewards in combating wildlife trafficking; the strengths and weaknesses of the statutory provisions that authorize federal agencies to pay financial rewards for information on wildlife trafficking; FWS’s and NOAA’s use of financial rewards to combat wildlife trafficking; and how, if at all, the two agencies could improve their use of financial rewards to combat wildlife trafficking. We analyzed the stakeholders’ responses to our questions, grouping the responses into overall themes. We summarized the results of our analysis and then shared the summary with relevant FWS and NOAA officials to obtain their views. Views from these stakeholders cannot be generalized to those whom we did not select and interview. We conducted this performance audit from February 2017 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Laws Implemented or Enforced by FWS and NOAA That Prohibit Wildlife Trafficking and Authorize Financial Rewards
The Department of the Interior’s U.S. Fish and Wildlife Service (FWS) and the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA) implement or enforce multiple laws that specifically authorize the payment, under specified circumstances, of financial rewards to persons for information about violations of laws that prohibit wildlife trafficking. The laws that FWS officials identified are listed and summarized in table 3, and the laws that NOAA officials identified are listed and summarized in table 4. In addition, as noted above, the reward provisions in the Magnuson-Stevens Fishery Conservation and Management Act as amended and the Fish and Wildlife Improvement Act as amended authorize the payment of rewards for information about violations of multiple laws. Specifically, the Magnuson-Stevens Fishery Conservation and Management Act as amended authorizes the payment of rewards for information about violations of the act as well as any other marine resource law that the Secretary of Commerce enforces. Further, the Fish and Wildlife Improvement Act as amended authorizes the payment of rewards for information about violations of any law administered by NOAA’s National Marine Fisheries Service relating to plants, fish, or wildlife. NOAA officials identified 14 such laws that prohibit wildlife trafficking (see table 5). If a violation of the laws listed in table 5 occurs, NOAA officials said they could use the Magnuson-Stevens Fishery Conservation and Management Act or Fish and Wildlife Improvement Act reward provision to pay a reward for information on the violation. None of the laws listed in table 5 specifically authorizes the payment of financial rewards.
Appendix III: FWS and NOAA Cases in Which the Agencies Reported Paying Rewards, Fiscal Years 2007 through 2017
Table 6 provides information on U.S. Fish and Wildlife Service and National Oceanic and Atmospheric Administration cases where these agencies reported paying rewards for information on wildlife trafficking from fiscal years 2007 through 2017.
Appendix IV: Comments from the Department of Commerce
Appendix V: Comments from the Department of the Interior
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Alyssa M. Hundrup (Assistant Director), David Marroni (Analyst-in-Charge), Cindy Gilbert, Keesha Luebke, Jeanette Soares, Sheryl Stein, Sara Sullivan, and Judith Williams made key contributions to this report.
Why GAO Did This Study Wildlife trafficking—the poaching and illegal trade of plants, fish, and wildlife—is a multibillion-dollar, global criminal activity that imperils thousands of species. FWS and NOAA enforce laws prohibiting wildlife trafficking that authorize the agencies to pay financial rewards for information about such illegal activities. GAO was asked to review FWS's and NOAA's use of financial rewards to combat wildlife trafficking. This report examines (1) laws that authorize FWS and NOAA to pay rewards for information on wildlife trafficking and the extent to which the agencies paid such rewards from fiscal years 2007 through 2017, (2) the agencies' reward policies, (3) information available to the public on rewards, and (4) the extent to which the agencies reviewed the effectiveness of their use of rewards. GAO reviewed laws, examined FWS and NOAA policies and public communications on rewards, analyzed agency reward data for fiscal years 2007 through 2017 and assessed their reliability, interviewed FWS and NOAA officials, and compared agency policies and public communications on rewards to federal internal control standards. What GAO Found Multiple laws—such as the Endangered Species Act and Lacey Act—authorize the Departments of the Interior's U.S. Fish and Wildlife Service (FWS) and Commerce's National Oceanic and Atmospheric Administration (NOAA) to pay rewards for information on wildlife trafficking. FWS and NOAA reported paying few rewards from fiscal years 2007 through 2017. Specifically, the agencies collectively reported paying 27 rewards, totaling $205,500. Agency officials said that the information was complete to the best of their knowledge but could not sufficiently assure that this information represented all of their reward payments. FWS and NOAA have reward policies that outline the general process for preparing reward proposals, but FWS's policy does not specify factors for its agents to consider when developing proposed reward amounts. Some FWS agents GAO interviewed said that in developing proposals, they did not know whether their proposed reward amounts were enough, too little, or too much. By augmenting its policy to specify factors for agents to consider, FWS can better ensure that its agents have the necessary quality information to prepare proposed reward amounts, consistent with federal internal control standards. FWS and NOAA communicate little information to the public on rewards. For example, most agency websites did not indicate that providing information on wildlife trafficking could qualify for a reward. This is inconsistent with federal standards that call for management to communicate quality information so that external parties can help achieve agency objectives. FWS and NOAA officials said they have not communicated general reward information because of workload concerns, but they said it may be reasonable to provide more information in some instances. By developing plans to communicate more reward information to the public, the agencies can improve their chances of obtaining information on wildlife trafficking that they otherwise might not receive. FWS and NOAA have not reviewed the effectiveness of their use of rewards. The agencies have not done so because using rewards has generally not been a priority. FWS and NOAA officials agreed that such a review would be worthwhile but provided no plans for doing so. 
By reviewing the effectiveness of their use of rewards, FWS and NOAA can identify opportunities to improve the usefulness of rewards as a tool for combating wildlife trafficking. What GAO Recommends GAO is making seven recommendations, including that FWS and NOAA track reward information, FWS augment its reward policy to specify factors for agents to consider when developing proposed reward amounts, FWS and NOAA develop plans to communicate more reward information to the public, and FWS and NOAA review the effectiveness of their reward use. Both agencies concurred with these recommendations.
Background The rail industry was one of the first to pioneer private pensions for its employees in the late 19th century, and by the 1930s, these pensions were more developed than in most other industries. However, according to RRB, these private rail pensions had serious defects that were magnified by the effects of the Great Depression. For instance, RRB noted that the plans were generally inadequately financed and that employers could terminate the plans at will. In prior work, we noted that the Railroad Retirement Act of 1937 was enacted at the urging of rail labor and established the national railroad retirement system administered by RRB. The program was to be solely supported by employees and employers of the rail industry through payroll taxes. According to RRB, this system was created separately from Social Security for several reasons. For instance, RRB notes that Social Security—created in 1935—would not begin payments for several years or credit workers for work prior to 1937, while the deteriorating state of private rail pensions called for immediate retirement payments based on prior service. We previously reported that the 1951 amendments to the Railroad Retirement Act of 1937 substantially increased railroad retirement benefits to bring them in line with benefit increases granted to individuals under Social Security, and that a financial interchange was created between the agencies in 1951 to help pay for these increases. RRB annually computes the amounts that SSA would have collected in taxes from rail workers and their employers, and what SSA would have paid in benefits if rail workers had been covered under Social Security, with the net difference transferred between the agencies. The amounts computed under the financial interchange do not necessarily represent the actual RRB benefits paid to rail workers and their beneficiaries. RRB determined that it was due a net transfer from SSA each year since 1958. Financial interchange transfers make up a significant portion of the financing for RRB’s retirement, disability, and survivors benefits. In fiscal year 2016, RRB paid about $12.4 billion in these benefits and collected $5.9 billion in payroll taxes from rail employees and employers. RRB reported that the remainder of its funding for these benefits came from the financial interchange ($4.1 billion), transfers from the National Railroad Retirement Investment Trust ($1.4 billion), income taxes collected on RRB benefits ($758 million), and other funding sources, such as appropriations. The interchange also serves as a vehicle to fund Medicare Part A (Hospital Insurance) benefits for rail workers. The benefits provided by RRB consist of a core-level of benefits that are similar to those available to most workers covered under Social Security, including Medicare. Rail workers also receive a second level of retirement benefits that approximate payments from private pension plans (see table 1). For non-rail workers, Social Security and Medicare benefits are paid from their respective trust funds: Retirement benefits are paid from SSA’s OASI Trust Fund; Disability benefits are paid from SSA’s DI Trust Fund; and Medicare Part A benefits are paid from the Hospital Insurance Trust Fund. 
RRB Calculates Financial Interchange Amounts by Approximating Key Flows In and Out of SSA and HHS Trust Funds
The financial interchange is intended to place Social Security’s OASI and DI Trust Funds and HHS’s Hospital Insurance Trust Fund on the same financial footing as if rail workers and beneficiaries were covered under Social Security instead of by RRB. Regarding Social Security, RRB is credited for what it paid beneficiaries, administrative costs involved with paying benefits, and interest for the time between the determination of the interchange amount and its actual transfer. SSA is credited for the payroll taxes it would have collected from rail workers and their employers and for the income taxes that would have been paid by RRB beneficiaries on Social Security equivalent benefits. The net of the five amounts is the amount that is transferred (see fig. 1). A net transfer from SSA to RRB means that rail workers would have been a net draw on SSA’s trust funds if covered under Social Security. RRB calculates the financial interchange amount each year on a retrospective basis; that is, the amount is determined for the previous fiscal year. By law, the agencies must complete their determination by June of each year. Consistent with the purpose of keeping the OASI and DI trust funds in the same place as if rail workers were covered under Social Security, RRB determines the retirement and disability benefits that rail workers and dependents would have received if they were covered under Social Security. Specifically, RRB uses railroad earnings data provided by employers to replicate SSA’s benefits calculations. Although the basic retirement and disability benefits that SSA and RRB pay to their beneficiaries are based on the same formulas, there are several eligibility differences between the two programs. For instance, a rail worker may receive unreduced retirement benefits at age 60 after 30 years of work, whereas the earliest most workers covered under Social Security can begin receiving retirement benefits is at age 62. According to RRB officials, even though a 60-year-old railroad worker may be receiving RRB retirement benefits, RRB would not receive credit through the interchange for that individual. Once that individual turns 62, RRB determines the amount of reduced Social Security retirement benefits for which he or she would have been eligible, given the person’s earnings history and Social Security’s benefits rules. According to RRB officials, the agency receives a credit through the interchange for this amount even though the individual is receiving full RRB retirement benefits. To account for these potential differences, RRB officials said that the agency must make calculations for individual RRB cases. Additionally, RRB officials said that in light of the number of RRB cases—nearly 400,000—it is not practical to make these calculations annually for each case. Instead, RRB uses SSA rules to calculate benefits for a subset of RRB cases in which the worker’s Social Security number ends in 30, which approximates a 1-percent sample. The sample size was about 4,000 for fiscal year 2016. Once RRB completes its benefit calculation for each of those cases, it aggregates the results and produces an estimated amount for its entire population of cases (see fig. 2). RRB reported in its annual financial interchange determination report that it was credited $7.2 billion in fiscal year 2016 for the estimated amount beneficiaries would have been paid under Social Security.
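The scaling step in this sampling approach can be sketched in a few lines of Python. This is our simplified illustration, not RRB's actual code: the case records and the compute_ssa_benefit function are hypothetical stand-ins, and RRB's actual projection formulas are those documented in its annual determination report.

# Illustrative sketch: estimate the population-wide benefit credit from
# the roughly 1-percent sample of cases whose Social Security numbers
# end in 30. compute_ssa_benefit stands in for the full benefit
# calculation under SSA rules.
def estimate_benefit_credit(cases, compute_ssa_benefit):
    sample = [case for case in cases if case["ssn"].endswith("30")]
    sample_total = sum(compute_ssa_benefit(case) for case in sample)
    # Project the sample total onto the full caseload by dividing by the
    # realized sampling fraction (about 4,000 of nearly 400,000 cases).
    return sample_total * len(cases) / len(sample)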
Administrative Expenses
These expenses represent those that SSA would have incurred to administer benefits had rail workers been covered under Social Security (as opposed to the actual amount RRB spent to administer its programs). These expenses, which SSA would have funded out of its trust funds, include the cost to enroll individuals in its programs and maintain its benefit rolls. RRB calculates the amount of administrative expenses based on unit-cost data provided by SSA. RRB reported that it was credited about $22 million in administrative costs for fiscal year 2016.
Interest Charges
SSA credits RRB for interest that accrues on the annual financial interchange transfer from the point in time for which it is calculated (the end of the fiscal year on September 30) until the amount is transferred to RRB in June of each year. The interest rates are equal to those SSA earns on its trust funds. RRB reported that it was credited about $163 million in interest for fiscal year 2016.
Payroll Taxes
This amount represents the payroll taxes rail employees and employers would have paid into Social Security’s trust funds had workers been covered under Social Security. SSA and RRB generally levy payroll taxes on earnings at the same rate, and RRB officials told us they use payroll data from employers to determine this amount. RRB reported that it credited SSA $2.4 billion for fiscal year 2016.
Income Taxes
Some RRB beneficiaries pay income taxes on the benefits they receive, and that tax revenue is credited to SSA’s trust funds through the financial interchange. To put the OASI and DI trust funds in the same place as if rail workers were covered under Social Security, RRB credits SSA for the amount of income tax railroad beneficiaries paid on Social Security equivalent benefits. RRB computes this amount using tax data from the Department of the Treasury, and credited about $296 million to SSA for fiscal year 2016. RRB also may adjust calculations on transfers from prior years; for instance, if new income was reported for individuals or if benefit overpayments are discovered for individuals in the sample.
Medicare Transfers
The process for determining the financial interchange transfer with HHS—which helps finance Medicare benefits for rail workers—has fewer components than for retirement and disability benefits. Generally, RRB determines the Medicare payroll taxes and income taxes paid by rail workers and transfers this amount, less administrative expenses, to HHS (see fig. 3). RRB estimates how much it collects in Medicare payroll taxes by using payroll data provided by employers for workers whose Social Security numbers end in 30. RRB credited about $637 million to HHS for fiscal year 2016. Overall, the procedures we observed, and which RRB explained and demonstrated, for calculating the financial interchange are consistent with the methodology agreed to by RRB, SSA, and HHS. An annual determination report produced by the three agencies documents this methodology. Additionally, several audits conducted for the RRB Office of Inspector General determined that the methodology is appropriate for achieving the purpose of the financial interchange. Specifically, the audits concluded that the sample used in calculating benefits was representative of RRB’s population of beneficiaries, that the formulas used to project the results of the sample onto the entire population of beneficiaries were consistent with RRB’s design, and that assumptions made by RRB when carrying out calculations were reasonable.
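Taken together, the five Social Security components reduce to a simple credit-and-debit computation. The following sketch is ours, not RRB's; it uses the rounded fiscal year 2016 figures reported above (in billions of dollars), and because the actual determination also reflects adjustments to prior years' calculations, the result is only a back-of-the-envelope approximation of the transfer.

# Illustrative net-transfer arithmetic using rounded fiscal year 2016
# figures from this report (dollars in billions).
credits_to_rrb = {
    "ssa_equivalent_benefits": 7.200,  # estimated benefits under SSA rules
    "administrative_expenses": 0.022,  # costs SSA would have incurred
    "interest": 0.163,                 # interest accrued until the June transfer
}
credits_to_ssa = {
    "payroll_taxes": 2.400,            # taxes rail workers and employers would have paid
    "income_taxes": 0.296,             # taxes paid on Social Security equivalent benefits
}
net_to_rrb = sum(credits_to_rrb.values()) - sum(credits_to_ssa.values())
print(f"Net transfer from SSA to RRB: about ${net_to_rrb:.1f} billion")
# Prints about $4.7 billion; a positive result means rail workers would
# have been a net draw on SSA's trust funds. The Medicare side is
# simpler: payroll taxes plus income taxes, less administrative
# expenses, flow from RRB to HHS.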
High Ratio of Beneficiaries to Rail Workers Has Resulted in Transfers From SSA to RRB Each Year Since 1958
SSA has made a net transfer to RRB through the financial interchange each year since 1958. The cumulative net transfer from the Social Security trust funds to RRB through 2015 was approximately $266 billion in 2016 dollars. Of this amount, transfers related to retirement and survivor benefits comprised about $256 billion, and disability benefits accounted for about $10 billion. This trend in transfers is primarily caused by RRB benefit payments exceeding payroll taxes collected as calculated by the interchange, which has been the case each year of the financial interchange, resulting in a net amount owed to RRB from SSA each year (see fig. 4). Based on the data RRB reported, the continuing flow of funds to RRB from SSA has largely been driven by a steadily shrinking number of active workers in the rail industry paying payroll taxes in support of a larger population of beneficiaries. According to RRB data, the number of workers in the rail industry peaked at the end of World War II, when there were almost 1.7 million workers. Since then, this number declined steadily to about 231,000 in 2016. Additionally, the number of beneficiaries has exceeded the number of active workers since 1961. According to RRB data, there was about 1 beneficiary for every 10 workers in 1938; the ratio had increased to 3 beneficiaries for every 10 rail workers in 1951, when the financial interchange was created. By 2016, there were 28 beneficiaries for every 10 workers. Furthermore, RRB officials noted that another factor causing increased fund transfers from SSA to RRB was a series of successive amendments to the Social Security Act that raised benefits immediately while deferring the tax increases to pay for them. As a result of these two factors, the payroll taxes paid by rail workers have not been sufficient to pay for all of the benefits paid by RRB. Hence, the financial interchange has consistently transferred money from SSA to RRB (see fig. 5). According to SSA actuarial estimates, the flow of funds to RRB from SSA is projected to continue. Social Security’s 2017 trustees report projects that the amount of transfers to RRB will continue to grow through at least 2026. Moreover, RRB’s most recent actuarial valuation report estimates that under three employment assumptions—optimistic, moderate, and pessimistic—the number of beneficiaries will continue to exceed the number of rail workers through at least 2088. RRB has collected payroll taxes for HHS since 1966. From 1966 through 2016, RRB reported that it transferred a total of $30 billion in 2016 dollars through the financial interchange to the Hospital Insurance Trust Fund (see fig. 6).
RRB Takes Measures to Oversee the Financial Interchange Calculation, but Shortcomings Increase the Risk of Errors
RRB Takes Oversight Steps, but Manual Data Entry and Systems Limitations May Prevent RRB from Detecting Mistakes
RRB takes a number of steps to ensure that the financial interchange amount is accurately calculated each year. For example:
Sample verification: To make sure that the financial interchange sample is up to date, RRB staff told us that they query their beneficiary database at the beginning and end of the annual financial interchange calculation to ensure that all beneficiaries who should be part of its sample—those with a Social Security number ending in 30—are included.
Those included in the sample can change from year to year, for instance, when new beneficiaries join the retirement rolls or when beneficiaries die.
Supervisory review: RRB officials told us that the work of a new employee who calculates the financial interchange is reviewed by another employee until the new employee is determined to be proficient.
Error checks: Electronic error checks built into the system RRB uses to calculate the financial interchange help prevent mistakes by flagging erroneous values. These checks alert employees in real time that an incorrect value may have been entered (for example, a benefit amount that exceeds what beneficiaries can receive). Officials also told us that they run similar checks in batches throughout the year to sweep for any potential errors that were not addressed by employees. They noted that they will work with staff to address all potential errors before the financial interchange calculation is finalized. However, RRB’s error checks do not cover all potential erroneous values.
High-level review: RRB officials told us that the Chief of Benefit and Employment Analysis and his staff review the results of the interchange calculations and determine if the end result is reasonable compared to projections made earlier in the year, based on actual payroll and beneficiary data.
Despite these steps, limitations in RRB’s error checks and its reliance on manual data entry are potential sources of mistakes in financial interchange calculations. The process RRB staff follow in computing benefit amounts for the financial interchange involves manual data entry of earnings data and SSA-equivalent benefits. RRB’s error checks will help identify values that are impossible—such as a benefit amount that exceeds the maximum a beneficiary can receive—but not values that are incorrect yet still within the range of possibility, as illustrated in the sketch below. RRB staff demonstrated this scenario for us and acknowledged this limitation in their internal controls. Any data entry errors have the potential to result in larger errors in the financial interchange determination. The benefits portion of the determination is based on a sample of all cases, so any errors that occur in the sample will be magnified when RRB inflates the estimate to arrive at an amount for the entire population of beneficiaries. Additionally, RRB’s process could allow incorrect transfers to persist for years. The sample is chosen in the same way each year—individuals with Social Security numbers ending in 30—so the same cases remain part of the sample until the individuals leave the rolls. RRB officials told us that they generally have to do a full set of calculations only for new cases or cases in which additional income is detected that affects benefit amounts. RRB officials estimated that about 20 percent of cases in the financial interchange sample each year require a full calculation. For the remainder of cases in the interchange sample, officials said that no annual recomputation is needed; instead, the previous year’s results are adjusted according to any cost of living increase. If a data entry error is made in one of these cases, RRB may not discover it until the individual leaves the rolls or dies, at which point RRB staff told us they recalculate the individual’s benefit amount.
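A minimal sketch of this limitation, with a hypothetical benefit ceiling of our choosing, shows why range checks alone cannot catch plausible keying errors:

# Illustrative range check, not RRB's actual edit logic. The ceiling is
# a hypothetical value chosen for this example.
MAX_MONTHLY_BENEFIT = 3000.00

def passes_range_check(amount):
    # Impossible values (nonpositive or above the ceiling) are flagged;
    # anything inside the range sails through.
    return 0 < amount <= MAX_MONTHLY_BENEFIT

passes_range_check(99999.00)  # False: an impossible value is flagged
passes_range_check(1255.00)   # True: a mistyped but plausible value
                              # (say, transposed digits of 1,525.00)
                              # goes undetected

Data sharing between RRB and SSA could reduce the potential for data entry errors, but the two agencies have not recently pursued this option.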
RRB officials told us that prior to 2008 they used computer code to automatically save data from SSA databases into spreadsheets, where the data could be used for calculating the financial interchange. However, SSA instructed RRB to stop using this method in 2008 because of security concerns about saving this information outside of SSA systems. RRB officials added that this constraint prevents them from developing a more efficient method of data collection that would improve the accuracy and timeliness of benefit calculations for the financial interchange. However, RRB officials said that they have not formally approached SSA in the last several years to discuss potential alternatives for gaining greater access to data. SSA officials said that RRB should follow SSA’s procedures for requesting a data exchange if RRB wishes to revisit this topic. Federal internal control standards state that agencies should use quality information to achieve their objectives. By taking additional steps to obtain data from SSA electronically, RRB can better position itself to ensure that data entered into its systems are correct and that its calculations are free of errors. Limited Documentation and Formal Policies Increase the Risk for Errors in Key Aspects of the Financial Interchange Process RRB has limited documentation and does not have formal policies to guide several key aspects of the financial interchange calculation. While we did not identify any actual errors in its calculations, these shortcomings in its controls increase the risk of calculations being carried out inconsistently or incorrectly. Limited Documentation of the Financial Interchange Process The broad steps that RRB takes to determine the amounts of the financial interchange are documented in an annual determination report produced by RRB. They include, for example, the factors used to calculate administrative costs, discussion of adjustments made to calculations from prior years, and descriptions of the formulas used to project the results of RRB’s benefit sample to the population of railroad beneficiaries. However, the agency does not have clear documentation of the detailed steps used by staff to calculate the interchange amounts. A 2010 audit of the financial interchange process conducted for the RRB Office of Inspector General found that documentation of the financial interchange process was insufficient for a knowledgeable third party to replicate without verbal explanation from RRB staff. In response, RRB officials told us that they produced some documentation such as charts showing the workflows for different portions of the process, such as for calculating benefits, payroll taxes, and financial projection—and instructions for staff in RRB’s Bureau of the Actuary for high-level review of the formulas and entries for the final calculation results. However, the documentation did not provide enough detail about the steps staff must take when conducting financial interchange calculations so the process can be followed without additional explanation. For instance, the documentation did not discuss the process by which staff obtain earnings data and enter it into SSA’s benefit calculator, manually enter the results into RRB’s system, or the different alerts that notify staff of potential mistakes and how staff deal with them. 
Federal internal control standards state that effective documentation provides a means to retain organizational knowledge and mitigate the risk of having knowledge limited to a few personnel, as well as a means to communicate that knowledge as needed to external parties, such as auditors. Written documentation with specific steps for carrying out the financial interchange calculation and using its data system would help RRB ensure that its staff and others could carry out and replicate its process consistently.
Limited Documentation of RRB’s Computer System
RRB does not have current or complete documentation related to the computer system it uses to compute the financial interchange. Specifically, RRB officials said that they do not have current documentation, such as a manual or data dictionary, that would provide information on the data elements in the system, their definitions, descriptions, and range of potential values. They said a data dictionary is not necessary because data are contained in a format in which rows and columns are labeled according to fields and years. However, such labeling does not include documentation, for example, about whether values entered in those fields are allowable. Federal internal control standards state that effective documentation is needed to retain knowledge and prevent knowledge from being limited to a few staff. Even if the data system is relatively uncomplicated, without such documentation, it is difficult for RRB staff and others to fully understand all elements in the system, and it could complicate efforts to make changes in the future or bring new staff up to speed on the system.
No Written Documentation on Procedures for Overriding Potential Errors
RRB does not have written procedures for how to address instances in which staff do not correct potential errors flagged by its computer system. As noted earlier, RRB’s system for calculating the financial interchange will alert staff to potential data entry errors. RRB officials said this system allows staff to override the alert in some cases, generally complex cases, such as when RRB benefits are offset by other public pensions. In these cases, the system does not distinguish between an actual error and instances in which additional work and review are needed because of complex benefit calculations. Staff can override the alert in these cases where there is no actual error, but officials noted that a report of potential errors generated by the system would still include these cases, which may be referred back to staff for clarification or correction. If implemented correctly, these procedures could help staff take appropriate action on these complex cases. However, current procedures are not formally documented, and officials said they have not considered producing written procedures because they believe the process for addressing alerts is clear. Federal internal control standards indicate that effective documentation assists in management’s design of internal controls and can mitigate the risk that knowledge is limited to a few staff. RRB’s lack of written procedures can make it difficult for staff or reviewers to know if procedures are carried out consistently—such as whether staff appropriately override an error alert—and can create challenges if there is staff turnover.
It is important to ensure that all potential errors are addressed correctly given that mistakes in the financial interchange sample can be multiplied when estimating benefit payments for the universe of RRB beneficiaries. No Formal Policy on Supervisory Review According to RRB officials, new employees will have their calculations reviewed until the employees are deemed to be proficient, and calculations by any staff member are subject to review and periodically reviewed for accuracy. Federal internal control standards call for documenting agency procedures. However, RRB does not have a minimum or maximum time established for which it will review the work of new staff, and does not have an overall policy for reviewing staff members’ work after they have been deemed proficient. Officials told us they had not considered setting a policy regarding supervisory review. They added that individualized, on-the-job training is more appropriate for new staff than a formalized process. In the case of current employees, any potential errors would be identified when the case is terminated, at which time all cases are reviewed and recomputed. Additionally, officials said that a formal policy would not increase the number of cases reviewed and potentially constrain their ability to correct new errors as they occur. Nonetheless, without formal policies on supervisory review, RRB cannot reasonably ensure that the work performed by staff is adequately or consistently reviewed for quality. SSA and HHS Do Not Review the Results of Case-Level Calculations SSA and HHS provide some oversight of the financial interchange process, but do not review case-level calculations. Both agencies approve the results of the financial interchange calculations, but officials from SSA and HHS told us that their oversight is limited to high-level reviews of RRB’s calculations to determine whether results significantly vary from previous years. For instance, staff from SSA’s Office of the Chief Actuary told us that they examine RRB’s payments and revenues against SSA’s benefits paid and payroll taxes collected to determine if there are large or inexplicable changes from year to year, in which case they will ask RRB for additional information to understand the changes. Additionally, RRB officials told us that formulas used in their spreadsheets to calculate the results of the interchange have been reviewed by SSA actuaries. While these actions could help identify larger errors, the agencies will not be able to detect whether errors are made on complex, case-level calculations or if SSA rules are being correctly followed. In response to prior errors in financial interchange calculations, RRB officials told us that SSA reviewed case-level calculations from the 1990s until 2002. SSA officials told us that they have not reviewed cases since then because of resource constraints. A 2009 SSA Office of the Inspector General report recommended that the agency consider increasing its oversight of the process, such as setting a schedule for review of individual cases given the importance of reviews in verifying transfers. However, SSA has not taken action on this recommendation. HHS officials told us that the financial interchange is one of a number of relatively small funding streams and the agency has never had cause to suspect mistakes and has never examined case-level calculations. Federal internal control standards state that agencies should establish and operate monitoring activities to evaluate the results of activities. 
Without monitoring how calculations are made, SSA cannot reasonably ensure that the transfers it makes or receives with RRB are accurate. In commenting on a draft of this report, HHS raised questions about whether it has the authority to review case-level calculations, but noted in follow-up communication that this issue is currently undergoing legal review at HHS. As a result, HHS officials told us that they would not be able to provide additional clarification at this time. We continue to believe that HHS would be better positioned to ensure that transfers it makes and receives are calculated correctly if it reviews case-level calculations.
Conclusions
The financial interchange provides RRB with a significant portion of its funding, and trends in the number of beneficiaries and workers suggest this will continue to be the case in the future. RRB developed a process to calculate the financial interchange amount, and the accuracy of the calculations depends in large part on correct data being manually entered into RRB’s computer system. However, RRB’s current controls do not address some potential sources of error. Having the ability to electronically obtain data from SSA could help reduce the risk posed by data entry errors. Further, RRB has limited written documentation for carrying out aspects of the financial interchange calculation, such as how its computer system is structured, how to address instances when staff override error alerts, and how staff work is reviewed. Without such documentation, RRB puts itself at risk of staff carrying out actions inconsistently, losing operational knowledge when staff leave or retire, and complicating oversight of its operations. Lastly, SSA and HHS increase the risk of errors by not performing case-level reviews of financial interchange calculations. This is especially true for the SSA portion of the interchange, which involves complex calculations performed according to SSA rules. In its role as the administrator of the OASI and DI programs, SSA is best positioned to determine if its rules are being properly applied to financial interchange calculations. The large sums SSA transfers through the interchange—over $4 billion annually—warrant additional oversight to ensure that transfer amounts are correct.
Recommendations for Executive Action
We are making a total of eight recommendations, including five to RRB (The Board), two to the Commissioner of SSA, and one to the Secretary of HHS. The Board should work with SSA to explore options for obtaining data electronically and limiting the reliance of the financial interchange process on manual data entry. (Recommendation 1) The Board should produce written documentation on the financial interchange process such that a knowledgeable third party could carry out and replicate its process consistently without further explanation. (Recommendation 2) The Board should produce written documentation of its computer system and its structure, such as a manual for the computer system and a data dictionary providing information on the data elements in the system, their definitions, descriptions, and range of potential values. (Recommendation 3) The Board should produce written documentation of its procedures for instances when staff override error alerts generated by its computer system. (Recommendation 4) The Board should produce formal policies on how the work of staff performing the financial interchange is reviewed.
(Recommendation 5) The Commissioner of SSA should work with RRB to explore options for electronically sharing data and limiting the reliance of the financial interchange process on manual data entry. (Recommendation 6) The Commissioner of SSA should take additional steps to provide oversight of financial interchange calculations at the individual-case level. This could include periodically reviewing a subset of these cases. (Recommendation 7) The Secretary of HHS should, consistent with its existing statutory authority, take additional steps to provide oversight of financial interchange calculations at the individual-case level. If the Secretary concludes that there are limitations in its authority in this area, the Secretary should seek to obtain the necessary additional authority. (Recommendation 8) Agency Comments and Our Evaluation We provided a draft of this report to RRB, SSA, and HHS for review and comment. In written comments, both RRB and SSA agreed with the recommendations. RRB noted that it will devote the resources needed to improve the written documentation of its procedures and computer system. RRB and SSA also provided technical comments which we incorporated as appropriate. Copies of their written comments are reproduced in appendixes I and II. In written comments, which are reproduced in appendix III, HHS disagreed with the recommendation that it take additional steps to provide oversight of financial interchange calculations at the individual-case level. HHS noted that while in theory it may be a good idea to incorporate such review into the process, it is limited by statute in its ability to oversee how RRB calculates transfers between HHS and RRB. HHS went on to describe a section of the Social Security Act that they noted “pertains more to Supplemental Medical Insurance trust fund draws for administrative costs.” Notably, with respect to HHS, our report does not involve that trust fund, but rather addresses the Hospital Insurance Trust Fund. Although HHS’s comments did not clarify why it believes that this section of law would limit its authority with respect to the Hospital Insurance Trust Fund, it nevertheless asserted that it does apply in this scenario. We reached out to HHS to seek clarification of its comments. For example, we inquired about the applicability of a separate provision of law that would appear to establish a role for HHS to work with RRB to determine financial interchange amounts. Ultimately, HHS did not provide the clarification we sought, instead indicating via email that this recommendation is currently undergoing legal review and that HHS is unable to provide a response to our questions at this time. HHS further stated that it will continue to work on this issue to provide GAO with updates in the future. In light of the uncertainty surrounding HHS’s authority in this area and the fact that HHS declined to respond to our requests for clarification of its legal authority, we have modified our recommendation to reflect the fact that HHS may need to seek additional statutory authority to implement our recommendation, should HHS determine it to be necessary. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the appropriate congressional committees, the Railroad Retirement Board, the Commissioner of the Social Security Administration, and the Secretary of the Department of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in Appendix IV.
Appendix I: Comments from the Railroad Retirement Board
Appendix II: Comments from the Social Security Administration
Appendix III: Comments from the Department of Health and Human Services
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Mark Glickman (Assistant Director), Daniel R. Concepcion (Analyst-in-Charge), and Randy DeLeon made key contributions to this report. Additional contributors include David Ballard, Carl Barden, William Boutboul, James Cosgrove, Alexander Galuten, Jennifer Gregory, Sheila McCoy, Jean McSween, Mimi Nguyen, Joseph Silvestri, Almeta Spencer, and Kate van Gelder.
Why GAO Did This Study RRB collects payroll taxes and administers retirement, disability, and Medicare benefits for rail workers and their families. A financial interchange exists between RRB, SSA, and HHS in order to put the trust funds for these benefits in the same financial position as if Social Security covered rail workers. RRB generally transfers to the Social Security and Hospital Insurance trust funds the taxes that would be collected from rail workers and employers, while SSA provides RRB the benefits that would otherwise be paid directly to rail workers. GAO was asked to review the financial interchange calculation process. This report examines (1) the steps taken to calculate financial interchange amounts, (2) factors that could account for trends in transfers over time, and (3) the extent to which RRB, SSA, and HHS provide oversight to ensure calculations are accurate. GAO reviewed agency policies, procedures, and regulations; observed RRB staff calculating four cases selected for beneficiary type; reviewed data on payment and beneficiary trends; and interviewed agency officials. What GAO Found Established in 1937, the Railroad Retirement Board (RRB) administers retirement and disability benefits for rail workers and their families. A financial interchange between RRB and the Social Security Administration (SSA) was created in 1951, which as GAO previously reported, helped finance RRB benefits as they increased over time to keep pace with growing Social Security benefits to individuals. Through its financial interchange calculation, RRB takes steps each year to estimate the amount of funds that would have flowed in and out of Social Security's trust funds if rail beneficiaries were covered by Social Security instead of RRB. Five key steps go into the annual calculation: RRB is credited for (1) the estimated amount of benefits it would have paid to beneficiaries under SSA rules, (2) administrative costs, and (3) interest accrued on the financial interchange amount. SSA is credited for the revenues it would have received from rail workers if they paid into Social Security; specifically, (4) payroll taxes and (5) income taxes paid on benefits received. The determined net amounts are transferred between the agencies, which since 1958 have been from SSA to RRB each year. RRB received $4.1 billion in fiscal year 2016, almost one-third of the $12.4 billion in retirement and disability benefits it paid that year. The financial interchange was expanded to Medicare in 1965 to facilitate funding of Medicare benefits to rail workers; RRB transfers Medicare payroll taxes collected, income taxes paid on benefits received, and interest, minus administrative costs to the Department of Health and Human Services (HHS). A high ratio of beneficiaries to active railroad workers primarily explains the net transfers from Social Security's trust funds to RRB each year since 1958. Rail employment has fallen steadily since World War II, and the number of beneficiaries has exceeded the number of workers since 1961. RRB had 2.7 beneficiaries for every worker in 2015. As a result, RRB has paid out more in benefits than it has collected in payroll taxes and projects this to continue for the foreseeable future. 
RRB takes a number of steps each year to ensure the accuracy of its calculations, such as checking that the sample of cases used to estimate benefit payments is complete, reviewing the work of new employees, and using electronic alerts to help prevent staff from entering incorrect information into its computer system. SSA and HHS also conduct high-level reviews of the calculation results to identify any significant changes from one year to the next. However, RRB's process includes manual data entry, and its electronic edit checks cannot flag entries that are incorrect but plausible, which could lead to calculation errors. RRB also has limited documentation of its calculation process and does not have formal policies on how staff should address some potential calculation errors and on how supervisors should review staff work. This is contrary to internal control standards for having quality data and documenting procedures. For their part, SSA and HHS do not currently review case-level calculations made by RRB and therefore cannot reasonably ensure that the work used to determine the transfers they made and received is correct. What GAO Recommends GAO makes eight recommendations, including that RRB create formal policies and improve documentation of its processes and work with SSA to obtain data electronically, and that SSA and HHS increase their oversight. RRB and SSA agreed, while HHS did not, asserting that statute limits its authority; however, HHS continues to review this issue. HHS should seek this authority if it determines such authority is necessary.
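The edit-check limitation noted above is easy to see in miniature. In this hypothetical sketch, a range-based check catches only implausible values, so a transposed but plausible entry passes; the bounds and amounts are illustrative, not RRB's actual tolerances.

```python
# Minimal sketch of the edit-check limitation described above: a range-based
# check flags only implausible entries, so a mistyped but plausible value
# passes. The bounds and values are hypothetical.

PLAUSIBLE_ANNUAL_BENEFIT = (1_000.0, 120_000.0)  # dollars; hypothetical bounds

def edit_check(annual_benefit: float) -> bool:
    """Return True if the entry passes (i.e., falls within the plausible range)."""
    low, high = PLAUSIBLE_ANNUAL_BENEFIT
    return low <= annual_benefit <= high

print(edit_check(1_500_000.0))  # False: an implausible entry is caught
print(edit_check(24_300.0))     # True: passes even if the correct value was 23,400
```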
Background This section discusses (1) the history and status of the MOX project; (2) the roles of DOE, NNSA, and the contractor in managing and overseeing the MOX project; (3) project management lessons learned; and (4) DOE's and NNSA's recurring project management problems. History and Status of the MOX Project DOE began the MOX project over 20 years ago, in 1997, as part of a strategy to manage the disposition of large quantities of surplus, weapons-grade plutonium no longer needed for defense purposes. This strategy, now undertaken through NNSA's Plutonium Disposition program, originally planned to dispose of the plutonium through a dual approach—(1) conversion into mixed-oxide fuel and (2) immobilization in glass or ceramic material—but NNSA later canceled the immobilization approach in favor of the approach for only mixed-oxide fuel. In 1999, DOE awarded a contract to design, construct, and operate a MOX facility to the contractor consortium of Duke, Cogema, Stone & Webster, LLC—now called MOX Services, LLC (MOX Services). In February 2002, NNSA reported to Congress that construction of the MOX project would begin in fiscal year 2004, that operations would begin in fiscal year 2007, and that the facility would cost nearly $1 billion to design and construct. However, as figure 1 shows, construction of the MOX project did not begin until 2007, after DOE formally approved the project's estimated cost of about $4.8 billion and estimated completion date of September 2016. In December 2008, DOE approved a revised cost estimate for completing construction of the MOX project of $4.9 billion and a 1-month delay in the start of operations to October 2016. From 2009 through 2011, the estimated cost to complete construction of the MOX project remained at $4.9 billion. However, the MOX project's cost and schedule estimate changed significantly in 2012. That year, at NNSA's direction to update the estimate, the MOX contractor submitted a proposal to increase the cost of the facility to about $7.7 billion—an increase of about $2.8 billion from the 2008 estimate—with the start of operations delayed by about 3 years, to November 2019. After receiving the MOX contractor's revised estimate that indicated significant cost increases and schedule delays to the project, NNSA stated in its fiscal year 2014 budget request that pursuing the MOX approach might be unaffordable and proposed to slow down construction while the agency assessed alternative approaches for plutonium disposition. After a series of reviews, DOE ultimately concluded that pursuing an alternative disposition approach—referred to as "dilute and dispose"—could significantly reduce the life-cycle cost of the Plutonium Disposition program, compared with continuing the program using the MOX approach. Following the identification of a potentially less costly approach to plutonium disposition, in February 2016, DOE's fiscal year 2017 budget request proposed terminating the MOX project in favor of pursuing the dilute and dispose option. Congress appropriated funding for the MOX project for fiscal years 2017 and 2018 and directed DOE to continue work on the project. In August 2016, DOE issued a revised cost estimate of approximately $17.2 billion to complete construction of the MOX project by 2048.
In the face of this significant cost increase, the National Defense Authorization Act for Fiscal Year 2018 authorized the Secretary of Energy to terminate the MOX project if, among other things, he could certify that the remaining life-cycle cost for an alternative option for carrying out plutonium disposition would be less than approximately half of the estimated remaining life-cycle cost of carrying out the MOX project. In May 2018, DOE completed this certification and notified Congress of its intention to terminate construction of the MOX project and to instead pursue the dilute and dispose option. The Secretary of Energy reported that the life-cycle cost estimate was $19.9 billion for the dilute and dispose option compared to $49.4 billion for the MOX project. In October 2018, NNSA terminated the project. Additional information on the history and status of the MOX project is in appendix II. Roles of DOE, NNSA, and the Contractor in Managing the MOX Project DOE and NNSA are responsible for providing overall direction to, and oversight of, the contractor for the MOX project. The contractor, MOX Services, is responsible for the design, construction, and operation of the MOX facility. DOE. The Office of Project Management participates in a number of the MOX project's oversight activities. In particular, the office has led independent reviews of the MOX project to validate its cost and schedule estimates and has conducted certification and surveillance reviews of the MOX contractor's earned value management (EVM) system. NNSA. Subsequent to its establishment in 2000, several NNSA offices have provided overall direction to, and oversight of, the contractor for the MOX project, including the Office of Fissile Materials Disposition and the Office of Defense Nuclear Nonproliferation. In November 2011, after starting to place increased emphasis on improving its management of projects, the newly created Office of Acquisition and Project Management began providing overall direction to, and oversight of, the contractor for the MOX project. In March 2013, the Office of Acquisition and Project Management established the NNSA MOX Project Management Office at the Savannah River Site to lead the onsite project and contract management direction, administration, and oversight of the MOX project. MOX Services. As the contractor for the MOX project, MOX Services is responsible for designing, constructing, and operating the MOX facility. MOX Services has also subcontracted work to complete certain construction activities, such as the fabrication of specific types of equipment, including the complex gloveboxes needed for handling plutonium and the heating, ventilation, and air conditioning systems. Figure 2 depicts the roles of, and the interrelationships among, DOE, NNSA, and the MOX contractor in overseeing the MOX project. Project Management Lessons Learned According to key practices that we and others have identified for both program and project management, it is important to identify and apply lessons learned from programs, projects, and missions to limit the chance of recurrence of previous failures or difficulties. As such, the use of lessons learned—such as project management lessons learned—is a principal component of an organizational culture committed to continuous improvement. Lessons learned, therefore, serve to communicate knowledge more effectively and to ensure that beneficial information is factored into planning, work processes, and activities.
They also provide a powerful method of sharing ideas for improving work processes, facility or equipment design and operation, quality, and cost-effectiveness. Moreover, as we and others have previously found, agencies can learn lessons from an event and make decisions about when and how to use that knowledge to change behavior. Key practices of a lessons-learned process include collecting, analyzing, saving or archiving, and sharing and disseminating information and knowledge gained on positive and negative experiences (see fig. 3). DOE and NNSA Have Faced Recurring Project Management Problems For more than 2 decades, we and others have reported on the recurring nature of the problems affecting DOE's and NNSA's ability to manage contracts and projects effectively. Many of these problems have related to DOE's and NNSA's struggles with managing projects, such as the MOX project, within their initial cost and schedule estimates, including the following: In 1999, the National Academy of Sciences' National Research Council reported that recurring problems with project management had raised questions about the credibility of DOE's conceptual designs and cost estimates. In a March 2007 report, we found that 9 of 12 major projects we reviewed—including the MOX project—had exceeded their original cost estimates, schedule estimates, or both, principally because of ineffective project oversight and contractor management. In a November 2014 report, the Congressional Advisory Panel on the Governance of the Nuclear Security Enterprise (Augustine-Mies Panel) stated that NNSA's inability to estimate costs and execute projects according to plan had been a major source of dissatisfaction among the national leadership and had significantly undermined NNSA's credibility. Further, in April 2015, we found that NNSA has had a long history of identifying corrective actions for problems and declaring them successfully resolved, only to then identify additional actions needed to address the problems. As we found, the recurrence of such problems suggests that NNSA did not have a full understanding of the root causes of its contract- and project-management challenges. Moreover, our 2017 high-risk report found that DOE had taken several important steps that demonstrate its commitment to improving contract and project management, but that DOE's efforts had not fully addressed several areas where the department continues to have shortcomings. Areas with shortcomings include acquisition planning for major contracts and the quality of enterprise-wide cost information available to DOE managers and key stakeholders. Additional information on our prior work highlighting selected DOE and NNSA project management problems is in appendix III. NNSA Recognized Certain Indicators of Cost and Schedule Problems after Strengthening Its Oversight in 2010 and 2011 Prior to 2011, NNSA project staff had failed to recognize and fully resolve certain cost and schedule problems that indicated that the MOX project would not be completed on time or within its approved cost estimates. However, after taking actions to strengthen its project management oversight in late 2010 and 2011, NNSA recognized indicators of a number of problems with the MOX project that contributed to NNSA's decision to terminate the project.
NNSA Failed to Recognize and Fully Resolve Certain Cost and Schedule Problems Affecting the MOX Project Prior to 2011 Prior to 2011, NNSA's staff responsible for overseeing the MOX project failed to recognize and fully resolve certain cost and schedule problems that indicated that the project would not be completed on time or within its approved cost estimates. The NNSA staff responsible for overseeing the MOX project at that time were generally inexperienced in overseeing complex nuclear construction projects. From 2007 through 2011, staff overseeing the MOX project were primarily familiar with large programmatic initiatives and operations but had little experience in managing large, complex, first-of-a-kind nuclear construction projects, according to a May 2014 root-cause analysis. Although information available to the NNSA staff showed that there were cost and schedule problems that indicated the increasing likelihood that the project would not be completed within its approved total cost estimate of $4.9 billion, the staff did not recognize and fully resolve four key problems. First, information about the contractor's use of inaccurate rates to estimate the time needed to complete certain construction activities—commonly referred to as unit rates or planned production rates—indicated that the project would not be completed within its approved cost estimate. These rates are used to reflect levels of productivity during construction and to help develop projects' cost and schedule estimates, including updates to annual forecasted estimates. Following the start of construction in August 2007, the MOX contractor began to experience lower-than-estimated productivity rates for key construction activities, according to the May 2014 root-cause analysis report. Despite this issue, the contractor did not incorporate more realistic assumptions regarding the unit and production rates, such as by updating the estimated costs and time needed to complete specific construction activities, when developing its annual forecasted estimates of the project's total cost for 2008 through 2011. MOX contractor representatives told us that the unit rates they used to develop cost and schedule estimates were realistic based on assumptions at that time and that DOE was involved in the development of the unit rates. In addition, the MOX contractor's representatives told us that expected improvements in unit rates did not materialize because of higher than expected levels of worker turnover. NNSA staff overseeing the project at that time neither recognized that the assumptions used to calculate and update unit rate estimates should be realistic and reflect actual levels of productivity during construction, as called for in project management principles, nor resolved the issue. As a result, the staff did not take action to stop the MOX contractor's continued use of unrealistic unit rates that did not reflect the actual construction progress being made. Furthermore, NNSA staff did not recognize the extent to which the contractor's decreased productivity was creating future cost increases and schedule delays, nor did they resolve the issue. Consequently, from 2008 to 2011, the MOX contractor continued to use its overly optimistic and unrealistic unit rate estimates when developing its annual forecasted cost estimates.
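To make the role of unit rates concrete, the sketch below shows how carrying a stale rate forward understates the remaining work. All quantities and rates are hypothetical, not figures from the MOX project.

```python
# Hypothetical sketch of how an unrealistic unit rate skews a forecast.
# A unit rate here is labor hours per unit of commodity installed (e.g.,
# hours per linear foot of pipe); all numbers are illustrative only.

def forecast_remaining_hours(remaining_units: float, unit_rate: float) -> float:
    """Estimate labor hours needed to install the remaining commodity units."""
    return remaining_units * unit_rate

planned_rate = 2.0       # hours per unit assumed in the baseline estimate
actual_rate = 3.2        # hours per unit observed during construction
remaining_units = 50_000

optimistic = forecast_remaining_hours(remaining_units, planned_rate)  # 100,000 hours
realistic = forecast_remaining_hours(remaining_units, actual_rate)    # 160,000 hours

# Carrying the planned rate forward understates remaining work by 60 percent.
print(f"Understated by {realistic - optimistic:,.0f} hours "
      f"({realistic / optimistic - 1:.0%})")
```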
Second, the MOX contractor's annual forecasted estimates for the project consistently increased from 2008 through 2011, and the level of confidence in those estimates decreased, indicating that the project would not be completed within its approved cost estimate. Beginning in 2008, the MOX contractor submitted an annual update to its forecasted estimate for the project. These estimates increased each year, rising by about $140 million to $280 million annually, with the estimated total project cost increasing from about $4.1 billion in 2008 to about $4.7 billion in 2011 (an increase of about 15 percent). The MOX contractor's representatives said they attempted to mitigate the increases, such as by identifying cost savings on the project. Additionally, as the May 2014 root-cause analysis report stated, the level of confidence for completing the MOX project within the approved $4.9 billion total project cost estimate declined each year, from an 85 percent likelihood of completing the project within the estimate in 2009 to 45 percent in 2011. Both the annual increases in forecasted estimates and the annual decline in level of confidence illustrated the increasing likelihood that the MOX contractor would not complete the project for $4.9 billion. As a result of inexperience, the NNSA staff overseeing the project at that time did not adequately examine the potential consequences of such cost performance trends for the remaining schedule and through project completion, nor did they resolve the issues. As the May 2014 root-cause analysis report stated, NNSA staff did not fully recognize how the risks and challenges the MOX project faced negatively affected not only the project's performance but also its cost and schedule. For example, that report found that the staff were unable to determine that there were fundamental problems with completing the MOX project's design and with maintaining construction efficiency and progress, both of which contributed to schedule delays and cost increases. The May 2014 root-cause analysis report stated that because of inexperience in project management, NNSA staff did not direct the MOX contractor to develop a more realistic and achievable forecasted estimate for the total cost to complete the MOX project until January 2012. Third, information about procuring materials out of sequence and the resulting rework indicated that the project would not be completed on schedule or within its approved cost estimate. According to NNSA officials, the MOX contractor's method for measuring earned value incentivized the contractor to purchase and procure materials early and, in a number of cases, out of sequence, as this helped demonstrate progress. For example, figure 4 shows outdoor "laydown yards" and an offsite warehouse storing large amounts of commodities, such as pipes and electrical panels, that NNSA officials said the MOX contractor procured earlier than needed. The May 2014 root-cause analysis report stated that between 2007 and 2011, the equipment and material procured out of sequence resulted in the need for rework in some cases because later design changes required changes to the equipment or the need to procure different items, leading to additional costs for the project. The MOX contractor's representatives told us they disagreed with NNSA's characterization that they procured material too early.
According to the contractor representatives, they purchased materials in support of both the project schedule and the planned construction end date of 2016, as well as to achieve efficiencies through bulk pricing or reduced delivery charges from procuring larger quantities of items or multiple items at the same time. Additionally, the MOX contractor representatives disagreed that they structured the methods for measuring earned value performance to claim earned value in ways that did not reflect actual progress. In particular, the MOX contractor representatives said that NNSA staff were involved in the development of the original methods used for measuring earned value. NNSA staff did not take steps to resolve the issues with the disproportionate value earned by the MOX contractor for purchasing, procuring, and placing certain commodities until 2015, when the MOX contractor revised its methods for measuring earned value. Consequently, the commodity installation data reported under the MOX contractor's methods for measuring claimed earned value inflated the amount of progress being made on the construction of the MOX project compared with the amount of work actually completed. Fourth, information about the use of management reserve funds early in the project indicated that the project would not be completed within its approved baseline. To address cost increases experienced early in the project, the MOX contractor began to use the project's management reserve funds. A May 2010 surveillance review of the MOX contractor's EVM system prepared for DOE by an independent contractor identified this issue and concluded that the rate at which the MOX contractor was using its management reserve indicated that it was unlikely that there would be any reserve left to address any risks that were expected to be encountered later in the project. DOE's June 2011 follow-up review of the MOX contractor's EVM system found that the MOX contractor was no longer covering cost variances by using management reserve; however, the MOX contractor's previous use of management reserve to cover cost overruns had resulted in inaccurate, inflated cost performance and understated forecasted cost estimates. The MOX contractor's representatives told us they disagreed with the premise that the management reserve was used to obscure cost performance. Moreover, they noted that NNSA's cost-accounting and management staff worked with the contractor on all EVM issues, including the use of management reserve. NNSA staff did not recognize or resolve issues with the contractor's use of the management reserve to mitigate cost overruns, or the effect of that practice on the project's cost performance and forecasted cost estimates, in part because, as the May 2014 root-cause analysis report stated, the staff possessed little experience in project management. According to project management principles, management reserve should be prevented from being consumed too early so as to ensure that enough reserve remains available to address any problems that may arise late in the project. The inexperienced NNSA staff also did not recognize that certain problems were creating cost overruns because, as stated in the May 2010 surveillance review, the MOX contractor's use of the management reserve to cover such overruns hid the problems and did not alleviate their root causes.
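The mechanics at issue can be shown with standard earned value arithmetic. The sketch below is a simplified illustration, with hypothetical figures, of one way that applying management reserve to an overrunning account can zero out the reported cost variance without curing the overrun; it is not a representation of the MOX contractor's actual accounting.

```python
# Simplified earned value arithmetic (cost variance CV = BCWP - ACWP) showing
# how folding management reserve (MR) into an overrunning account's budget
# can zero out a reported variance. All dollar figures are hypothetical.

def cost_variance(bcwp: float, acwp: float) -> float:
    """CV = budgeted cost of work performed - actual cost of work performed."""
    return bcwp - acwp

bcwp = 400.0  # value of work performed, $ millions
acwp = 450.0  # actual cost of that work, $ millions

print(cost_variance(bcwp, acwp))  # -50.0: a visible $50 million overrun

# If $50 million of MR is shifted into the account's budget and credited as
# performed work, the reported variance goes to zero while the underlying
# overrun, and its root cause, remain.
mr_applied = 50.0
print(cost_variance(bcwp + mr_applied, acwp))  # 0.0: overrun no longer visible
```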
As a result of not recognizing or resolving the MOX contractor's inappropriate use of the management reserve earlier, NNSA reported inaccurate measurements of cost performance to DOE and other stakeholders. DOE's Project Management Changes Strengthened Oversight of the MOX Project In late 2010 and 2011, DOE began to implement actions to strengthen project management across the department, including NNSA. These actions, which agency officials said were primarily undertaken in response to project management problems we and others had identified, contributed to the steps NNSA began to take to strengthen its project management and oversight of the MOX project. Changes that strengthened NNSA's oversight of the MOX project included (1) initiating project peer reviews and (2) making several organizational changes to improve project oversight. These changes to DOE's and NNSA's oversight of the MOX project contributed to the decision to terminate the project. First, in its November 2010 update to requirements for capital asset projects, DOE established a requirement to conduct peer reviews at least once a year for large or high-visibility projects with a total project cost of $100 million or greater. The update required peer reviews more frequently for complex projects or those experiencing performance challenges. According to DOE and NNSA officials, they added the requirement in response to a recommendation in our May 2008 report. According to NNSA officials, as a result of this requirement, NNSA began conducting peer reviews of the MOX project in 2011. These reviews led NNSA to identify significant cost and schedule problems at the MOX project and included a number of recommendations to improve project performance. For example, a March 2012 NNSA peer review found that the MOX project's total cost may have been understated by anywhere from $600 million to $900 million, in part because the contractor's estimated unit rates and planned production rates were not reflective of the actual performance at that time. Moreover, the peer review found that the estimated completion date of October 2016 was also at risk. As a result, the peer review team recommended, among other things, that the MOX contractor develop an update to its formal cost and schedule estimate. As a result of the findings and recommendations from its peer reviews, NNSA requested, and the MOX contractor submitted in September 2012, a proposal that included a revised cost estimate for the MOX project of about $7.7 billion and an estimated completion date of November 2019. In response to the significant cost increases, schedule delays, and project risks captured in the MOX contractor's updated cost and schedule estimate, NNSA proposed a slowdown of MOX project construction activities in its fiscal year 2014 budget request to begin assessing alternative plutonium disposition strategies. Second, NNSA carried out several organizational changes starting in 2011 that led to improved oversight of the MOX project in some areas and the continued identification of cost and schedule problems. Specifically, NNSA transitioned management and oversight of the MOX project from the Office of Defense Nuclear Nonproliferation to the Office of Acquisition and Project Management, an office newly created in January 2011 to improve project oversight through the application of project management principles.
In 2013, the Office of Acquisition and Project Management created the MOX Project Management Office at the Savannah River Site to provide project and contract management oversight for the MOX project. After establishing the MOX Project Management Office, the Office of Acquisition and Project Management sought to better address long-standing staffing challenges. For example, a May 2006 external independent review conducted for DOE found that, among other things, NNSA understaffed the oversight of the MOX project and recommended that DOE acquire sufficient personnel with the proper skills to manage and perform oversight of the project. However, NNSA did not address this issue until after the creation of the Office of Acquisition and Project Management. The Office of Acquisition and Project Management increased the number of staff with specific project management skillsets at the MOX Project Management Office from 20 for fiscal years 2010 to 2012 to 36 (18 federal employees and 18 support service contractors) for fiscal years 2016 to 2018. As a result of the staffing changes, the NNSA MOX Project Management Office strengthened its oversight of the MOX project, which contributed to the identification of additional problems, as described below. Conducted more in-depth assessments of the MOX contractor's EVM system. DOE initially certified the MOX contractor's EVM system in May 2008, but a May 2010 surveillance review of the system prepared for DOE by an independent contractor identified a number of issues. The MOX contractor addressed the issues, according to DOE's June 2011 review, resulting in the recertification of the EVM system at that time. According to NNSA officials, NNSA's MOX Project Management Office conducted more in-depth assessments of the MOX contractor's EVM system starting in 2013. These assessments led NNSA staff to identify a number of concerns with the contractor's EVM system, such as earned value data errors; overstatements of the data on the percentage of work completed in certain areas; and, in one instance, about $300 million in known cost growth that was not incorporated into the MOX project's forecasted estimate of total project cost. According to NNSA officials, in March 2016, the NNSA federal project director requested an in-depth review of the contractor's EVM system because issues with the system continued to be identified and the MOX contractor was not adequately addressing them. According to its October 2016 review, DOE's Office of Project Management identified significant deficiencies representing systematic and material internal control weaknesses and concluded that the MOX contractor's EVM system could not be relied upon to provide credible and reliable cost and schedule performance data for either the project's current status or its forecasted cost and schedule estimates. As a result, DOE's Office of Project Management rescinded the MOX contractor's EVM system certification because the system was no longer in compliance with the relevant standards. Implemented a more rigorous invoice review process. According to NNSA officials, prior to 2014, NNSA did not have a rigorous process in place to review the contractor's invoices. The officials said that NNSA staff did not review all invoices and, for the reviews that were completed, they did not always thoroughly examine the details behind the invoices, such as reviewing invoices to verify that costs were allowable under DOE regulations.
The NNSA officials told us that as part of their efforts to improve oversight of the MOX contractor's invoice submissions, NNSA's MOX Project Management Office staff developed a more rigorous invoice review process that resulted in a September 2014 guide. In addition, the NNSA MOX Project Management Office assigned an additional staff member to (1) help conduct invoice reviews due to the volume of work needed to review the MOX contractor's invoices and (2) ensure that payments were made within the 14 days generally required by regulation. According to NNSA officials, as a result of the changes implemented by the office, NNSA identified a number of potentially unallowable costs ranging from less than $1,000 to more than $2 million. Reviewed the MOX contractor's annual incurred costs. NNSA officials said that incurred cost audits were supposed to be conducted at least annually for the MOX project and that the Defense Contract Audit Agency was supposed to conduct the audits. However, these officials explained that due to a significant backlog, the Defense Contract Audit Agency did not complete all of the required audits. In light of the Defense Contract Audit Agency's significant backlog—as well as a requirement prohibiting the agency from conducting non-defense agency audits—the NNSA MOX Project Management Office arranged to have a third party conduct an audit of the MOX contractor's fiscal year 2010 incurred costs. This third-party audit identified more than $30 million in potentially unallowable costs. The significant cost and schedule problems that NNSA staff identified after strengthening its oversight of the MOX project contributed to NNSA's decision to terminate it. Project management principles state that effective project management helps organizations to, among other things, increase the chances of success; resolve problems and issues; and identify, recover, or terminate failing projects. After NNSA's project peer reviews and the MOX contractor's proposed update to the project's cost and schedule estimate showed the significant likelihood of additional cost growth and schedule delays, NNSA proposed slowing down construction of the MOX facility in 2013 and ultimately terminated the project in October 2018. DOE Has Requirements for Documenting and Sharing Lessons Learned, but They Do Not Ensure Consistent or Timely Documentation or the Evaluation of Corrective Actions As outlined in DOE Order 413.3B, DOE requires that project management staff document and share project management lessons learned on capital asset projects like MOX but does not require that all project management lessons learned from capital asset projects be documented consistently or shared in a timely manner. Moreover, DOE Order 413.3B does not require the evaluation of the results of corrective actions taken in response to lessons learned that are identified during the course of capital asset projects such as the MOX project to ensure that the problems experienced are resolved department-wide. DOE's Requirements for Documenting and Sharing Lessons Learned for Capital Asset Projects DOE's requirements for capital asset projects, as outlined in Order 413.3B, specify that project management lessons learned should be captured—that is, documented—throughout the continuum of a project. According to the order, there are five critical decisions (CD) that structure the life of a project.
The CDs, which are summarized in figure 5, include approving: mission need (CD-0); alternative selection and cost range (CD-1); project performance baseline (CD-2); the start of construction or execution (CD-3); and the start of operations or project completion (CD-4). DOE Order 413.3B requires project staff to submit project management lessons learned to DOE's Office of Project Management within 90 days of two critical decision points: (1) upfront planning and design lessons learned are to be submitted within 90 days of CD-3 approval and (2) project execution and facility startup lessons learned are to be submitted within 90 days of CD-4 approval. DOE Order 413.3B also requires that lessons learned for capital asset projects be collected, analyzed, and disseminated by project management support offices. These offices consist of DOE or NNSA staff who provide support to federal project directors and are established exclusively to oversee and manage the activities associated with projects. Additionally, DOE Order 413.3B states that the Project Management Risk Committee should support project management activities within DOE by enabling the sharing of lessons learned on a routine basis. DOE and NNSA officials told us that program and project offices document and save project management lessons learned for capital asset projects in different ways. In particular, DOE and NNSA officials told us that peer reviews, which are saved in DOE's Project Assessment and Reporting System (PARS II) database, are a primary source of project management lessons learned. The officials also said that project management lessons learned are saved through monthly project reports, monthly staff meetings, Project Management Risk Committee meeting notes, and project management workshops and training courses. In addition, DOE and NNSA officials told us that some lessons learned are shared through informal person-to-person discussions that allow lessons learned to be shared among staff. Further, the officials said that they address project management problems identified in lessons learned by making changes to DOE Order 413.3B. In addition, while not required, DOE may capture some lessons learned for projects during the project review process. For example, DOE's standard-operating procedures for conducting external independent reviews state that the scope of such reviews can include assessing whether project teams are documenting and sharing lessons learned from their projects internally and externally. However, as noted in the standard-operating procedures, this is an example of an area that can be included as part of an external independent review, although there is no requirement to do so. DOE's Lessons-Learned Requirements for Capital Asset Projects Do Not Ensure Consistent or Timely Documentation and Sharing or the Evaluation of Corrective Actions DOE Order 413.3B requires project management lessons learned for capital asset projects to be documented throughout the life of a project but does not specifically require lessons learned to be documented and saved in a consistent manner or shared routinely or in a timely manner. Moreover, the order does not require all corrective actions related to these lessons learned to be evaluated for effectiveness.
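The 90-day submission windows tied to CD-3 and CD-4 described above reduce to simple date arithmetic, sketched below with hypothetical approval dates.

```python
# Sketch of the 90-day submission windows tied to CD-3 and CD-4 approvals
# under DOE Order 413.3B, as described above. The approval dates are
# hypothetical, not the MOX project's actual milestones.

from datetime import date, timedelta

SUBMISSION_WINDOW = timedelta(days=90)

def lessons_learned_due(cd_approval_date: date) -> date:
    """Lessons learned are due within 90 days of CD-3 or CD-4 approval."""
    return cd_approval_date + SUBMISSION_WINDOW

cd3_approved = date(2007, 8, 1)  # hypothetical start-of-construction approval
cd4_approved = date(2016, 9, 1)  # hypothetical start-of-operations approval

print(lessons_learned_due(cd3_approved))  # 2007-10-30
print(lessons_learned_due(cd4_approved))  # 2016-11-30

# Note what the order leaves out: CD-0 through CD-2, the upfront planning and
# design phases, carry no submission requirement at all.
```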
DOE Does Not Require That Lessons Learned for Capital Asset Projects Be Documented and Saved Consistently Although DOE and NNSA use multiple means to document and save lessons learned, we found that DOE and NNSA program and project offices do not document and save such lessons consistently so that they are readily accessible by other staff. For example, NNSA uses an internal database to save project management lessons learned for its projects. However, NNSA officials told us that DOE staff outside of NNSA must request access to the database before they can read and examine the lessons learned that are documented and saved in the database. Officials from DOE's Office of Science told us that their office submits some lessons learned to the PARS II database and maintains some project management lessons-learned reports on a publicly available webpage. A senior official from DOE's Office of Environmental Management told us that some lessons learned from its projects are sent to its staff through monthly lessons-learned bulletins, but the bulletins are not entered into PARS II. In addition, DOE and NNSA officials said that project staff can enter specific lessons learned gleaned from their project in a lessons-learned repository within PARS II. For example, as of November 2017, PARS II contained 20 entries for project management lessons learned from the MOX project. According to key practices for lessons learned identified by us and the Center for Army Lessons Learned, a central component of a successful lessons-learned process is to ensure that lessons learned are stored in a logical, organized manner. Specifically, as we have previously found, lessons learned should be stored in a manner—such as an electronic database—that allows users to perform information searches using key words and functional categories. Moreover, information in the database should be updated regularly and provide a logical system for organizing information that is easily retrievable and made available to any requester. We have also found that relying on person-to-person discussions to share lessons learned can be problematic because personal networks can dissolve—for example, through attrition or retirement—and informal information sharing does not ensure everyone is benefiting from the lessons that are gleaned. Further, by not documenting and saving all lessons learned (e.g., those shared through person-to-person exchanges), there is also generally no way to ensure the validation of the information shared. This is not consistent with the key practice from the Center for Army Lessons Learned, which states that by documenting and saving project management lessons learned in a logical, organized manner, such as an electronic database, lessons learned can be archived, managed, and made available for review by other projects and applied to them at a future date. Because DOE Order 413.3B does not indicate where all project management lessons learned should be documented and saved in a consistent manner, the department cannot ensure that future capital asset projects will be able to take advantage of experiences from past projects. We found that DOE and NNSA did not document all lessons learned in a consistent manner, and DOE officials acknowledged that DOE Order 413.3B does not require documenting or saving lessons learned that are presented through various formal or informal means in a common location.
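As a minimal sketch of the kind of keyword- and category-searchable repository these key practices describe, consider the following; the class and field names are hypothetical and are not drawn from PARS II or NNSA's internal database.

```python
# Minimal sketch of a searchable central repository: lessons stored once and
# retrievable by keyword and functional category. Structure is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Lesson:
    project: str
    category: str               # functional category, e.g., "cost estimating"
    summary: str
    keywords: set[str] = field(default_factory=set)

class LessonsRepository:
    def __init__(self) -> None:
        self._lessons: list[Lesson] = []

    def add(self, lesson: Lesson) -> None:
        self._lessons.append(lesson)

    def search(self, keyword: str | None = None,
               category: str | None = None) -> list[Lesson]:
        """Return lessons matching a keyword, a functional category, or both."""
        results = self._lessons
        if keyword is not None:
            results = [l for l in results if keyword.lower() in l.keywords]
        if category is not None:
            results = [l for l in results if l.category == category]
        return results

repo = LessonsRepository()
repo.add(Lesson(project="MOX", category="cost estimating",
                summary="Update unit rates to reflect actual construction productivity.",
                keywords={"unit rates", "productivity", "forecasting"}))

for lesson in repo.search(keyword="unit rates"):
    print(f"[{lesson.project}] {lesson.summary}")
```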
By developing requirements that clearly define how and where all project management lessons learned should be documented and saved to make them readily accessible across the department, such as in a database, DOE—including NNSA—could improve the agency's existing lessons-learned process. DOE Does Not Require That Lessons Learned for Capital Asset Projects Be Submitted and Shared Routinely or in a Timely Manner DOE Order 413.3B's requirements for project management lessons learned do not require that all lessons learned be shared routinely or in a timely manner. In particular, the order does not require that lessons learned be submitted and shared routinely until CD-3—the start of construction. Consequently, DOE and NNSA staff are not required to submit lessons learned during the CD-0, CD-1, and CD-2 phases of a project. These earlier phases, which involve upfront planning and design for the selected project, often occur many years before the approval and start of construction. Notably, both the MOX and Uranium Processing Facility (UPF) projects took about 10 years to reach the start of construction (CD-3) and experienced cost increases and schedule delays. We and others have previously found that lessons learned should be submitted in a timely manner so as to ensure that key information is available to identify and address problems or incorporate successful activities as early and quickly in the process as possible. For example, we found that lessons-learned reports (i.e., reports documenting lessons-learned reviews) should be prepared promptly so that knowledgeable personnel are available to contribute to the reports, important details are recalled accurately, and there are no delays in the dissemination of lessons learned. Moreover, according to the Center for Army Lessons Learned, the guiding principle in executing a sharing strategy for lessons learned is to get the right information to the right person at the right time. Such a strategy can entail developing a process for creating timelines for sharing lessons learned that are tied to the urgency of the information and a means to disseminate that information. Because DOE Order 413.3B does not require lessons learned to be submitted prior to CD-3, the department is limiting its ability to promptly evaluate and address early issues with projects and apply such lessons learned to other projects department-wide. This limitation could hinder the successful completion of capital asset projects, particularly those that experience prolonged upfront planning and design phases similar to those the MOX and UPF projects experienced. By developing requirements for sharing project management lessons learned from early in the CD phases of projects (i.e., prior to CD-3) routinely and in a timely manner to improve the ability to identify and evaluate problematic practices and positive experiences, DOE—including NNSA—could help improve the success of future capital asset projects and avoid the problems encountered overseeing the MOX project. DOE Does Not Require the Evaluation of the Effectiveness of Corrective Actions Taken DOE Order 413.3B does not require the evaluation of the results of corrective actions taken to address project management lessons learned that are identified during the course of capital asset projects such as MOX. According to DOE guidance and statements, officials track whether lessons identified through reviews or other efforts are implemented.
For example, according to DOE's standard-operating procedures for conducting external independent reviews and officials from DOE's Office of Project Management, DOE staff conducting external independent reviews of projects should assess whether project teams are reviewing and incorporating applicable lessons learned. In addition, DOE project management officials told us that peer review recommendations and the corrective actions to be taken to address them are tracked until the closure of each recommendation. However, DOE has not evaluated whether corrective actions taken have led to the resolution of the problematic practices identified in the lessons learned because DOE Order 413.3B does not require this type of evaluation. According to key practices for lessons learned identified by the Center for Army Lessons Learned and us, a central component of a successful lessons-learned process is to establish a means to ensure that issues are being resolved as intended. The Center for Army Lessons Learned states that while not all issues require a formal process to resolve, there should be a process in place to identify and prioritize the most important things that need to be fixed. For example, this process could entail addressing only those problems that necessitate department-wide improvements, as some issues may be narrowly focused and specific to one project or site. The Center for Army Lessons Learned further states that an organization's ability to change behavior by implementing a lesson is ineffective unless the organization observes changes in behavior and verifies that the lesson is learned. Additionally, we have found that if agency management decides to take action to apply an identified lesson, then it should take subsequent action to observe that the change in behavior actually occurred and collect additional information to verify that the change had the desired effect. Although DOE Order 413.3B does not require DOE to evaluate the effectiveness of corrective actions other than those associated with peer reviews, other DOE orders and guidance require the evaluation of the effectiveness of other types of corrective actions. For example, DOE Order 226.1B requires that DOE's organizations and contractors implement oversight processes that ensure they evaluate and correct relevant quality assurance problems on a timely basis to prevent their recurrence. In addition, DOE's order and guide for implementing an effective quality assurance program highlight the importance of undertaking corrective actions to prevent the recurrence of problems, including determining the effectiveness of the corrective actions for significant problems. By developing requirements for evaluating the effectiveness of corrective actions taken in response to project management problems in capital asset projects, particularly those that necessitate department-wide improvements, DOE—including NNSA—could verify that changes made as a result of lessons learned had the intended outcome, as it already requires of its contractors. Conclusions DOE and NNSA made changes that strengthened oversight of large capital asset projects. These changes helped NNSA better identify cost and schedule problems affecting the MOX project and contributed to NNSA's decision to ultimately terminate the project. DOE's Order 413.3B includes certain requirements for documenting and sharing project management lessons learned.
However, the requirements in DOE Order 413.3B do not fully incorporate several key practices for lessons learned. For example, the order does not require that DOE or NNSA document project management lessons learned for capital asset projects consistently or that such lessons learned be shared in a timely manner. By developing requirements that clearly define how and where all project management lessons learned should be documented and saved to make them readily accessible across the department, such as in a database, DOE—including NNSA—could improve the existing lessons-learned process and enable future projects across the department to take advantage of experiences from past projects. In addition, because DOE Order 413.3B does not require lessons learned for capital asset projects to be submitted prior to the start of construction (CD-3), the department is limiting its ability to promptly evaluate and address early issues with projects and to apply such lessons learned to other projects department-wide. By developing requirements for sharing project management lessons learned from the beginning of a project routinely and in a timely manner to improve DOE's ability to identify and evaluate problematic practices and positive experiences, DOE—including NNSA—could help improve the success of future capital asset projects and avoid the problems the agency encountered on the MOX project. Moreover, while DOE tracks the implementation of certain project management lessons learned for capital asset projects, DOE Order 413.3B does not require that DOE—including NNSA—evaluate corrective actions identified outside the peer review process and taken in response to lessons identified to verify that the changes made had the desired effect. By developing requirements for evaluating the effectiveness of corrective actions taken in response to project management problems in capital asset projects, particularly those that necessitate department-wide improvements, DOE could verify that changes made as a result of lessons learned had the intended outcome, as it already requires of its contractors. Recommendations for Executive Action We are making the following three recommendations to DOE: The Secretary of Energy, in coordination with DOE's Office of Project Management and NNSA's Office of Acquisition and Project Management, should develop requirements that clearly define how and where project management lessons learned for capital asset projects should be documented and saved to make them readily accessible across the department. (Recommendation 1) The Secretary of Energy, in coordination with DOE's Office of Project Management and NNSA's Office of Acquisition and Project Management, should develop requirements for sharing project management lessons learned for capital asset projects from the beginning of a project (i.e., prior to the start of construction at CD-3) routinely and in a timely manner to improve DOE's ability to identify and evaluate problematic practices and positive experiences. (Recommendation 2) The Secretary of Energy, in coordination with DOE's Office of Project Management and NNSA's Office of Acquisition and Project Management, should develop requirements for evaluating the effectiveness of corrective actions taken in response to project management problems for capital asset projects, with a focus on those lessons that necessitate department-wide improvements.
(Recommendation 3) Agency Comments, Third-Party Views, and Our Evaluation We provided a draft of this report to DOE, NNSA, and MOX Services for review and comment. In written comments, which are reproduced in full in appendix IV, DOE concurred with the report’s recommendations and described actions that it intends to take in response to our recommendations. In response to our first recommendation, DOE intends to issue a policy memorandum by December 2019 and revise DOE Order 413.3B to identify the project management lessons learned repository and outline the kinds of information the repository will collect. In response to our second recommendation, DOE intends to issue a policy memorandum by December 2019 and revise DOE Order 413.3B to collect lessons learned as part of its peer review process. Because DOE Order 413.3B requires that peer reviews for projects of $100 million or greater be conducted once between CD-0 and CD-1, annually between CD-1 and CD-2, at least annually between CD-2 and CD-4, and more frequently for the most complex projects or those experiencing performance challenges, this action is responsive to our recommendation and should help DOE begin to identify lessons learned in a more routine and timely manner. In response to our third recommendation, DOE plans to revise the Project Management Risk Committee charter by assigning it the responsibility to qualitatively evaluate the effectiveness of corrective actions taken in response to project management lessons learned from projects with a total cost greater than $750 million having department-wide implications. We are encouraged that DOE agrees with our recommendation and view this change as a positive first step. However, this action may not fully address the recommendation. For example, the planned action states that the Project Management Risk Committee would evaluate the effectiveness of corrective actions for projects with total costs of $750 million or more, but there may be some lessons learned with applicability department-wide from projects that do not meet this cost threshold. Additionally, DOE’s planned action as described in its response does not discuss who would be responsible for evaluating the effectiveness of corrective actions or a timeline for performing the assessments. The Project Management Risk Committee has typically served as a review group and has not itself performed such evaluations. DOE and MOX Services also provided technical comments, which we incorporated in our report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
Appendix I: Objectives, Scope, and Methodology Our report examined (1) when the National Nuclear Security Administration's (NNSA) project management oversight processes recognized cost and schedule problems at the Mixed Oxide Fuel Fabrication Facility (MOX) project and the actions the agency took to address them and (2) the extent to which the Department of Energy (DOE) requires that project management lessons learned from MOX and other projects be documented and shared. To address both objectives, we reviewed relevant documents from DOE, NNSA, and MOX Services, LLC (MOX Services), the contractor constructing the MOX project. We reviewed past reports by GAO and the National Academy of Sciences' National Research Council to examine previously identified weaknesses in DOE project management, contractor performance, and federal oversight of individual projects, as well as DOE's efforts to make improvements. We also reviewed DOE reports focused on analyzing the root causes of contract- and project-management issues affecting DOE and NNSA and identifying potential corrective actions and other general improvements. We visited the Savannah River Site to tour the MOX project while it was under construction and interviewed officials from NNSA's MOX Project Management Office, including the federal project director, and representatives from MOX Services. We also monitored the status of the MOX project. To examine when NNSA's project management oversight processes recognized cost and schedule problems at the MOX project and the actions the agency took to address them, we identified and reviewed DOE and NNSA documents outlining the agencies' management and oversight roles and responsibilities and the processes the agencies used to monitor the cost and schedule of the MOX project. We also examined NNSA guidance and memorandums detailing the 2011 transition of oversight responsibilities for the construction of the MOX project from NNSA's Office of Defense Nuclear Nonproliferation to its Office of Acquisition and Project Management and the effect this change had on NNSA's efforts to oversee the project. In addition, we reviewed DOE, NNSA, and MOX Services documents, as well as independent reviews and assessments, concerning the performance and status of the MOX project. In particular, we reviewed a May 2014 report prepared for DOE that identified and analyzed the root causes behind the cost increases that affected the MOX project through 2012, after the formal approval of its cost and schedule estimates in 2007. We also reviewed surveillance reviews and a May 2013 assessment of the MOX contractor's earned value management (EVM) system, which the contractor and NNSA used to monitor project performance and status, including cost and schedule, after construction began. Moreover, we examined project cost and budget information that DOE, NNSA, MOX Services, and others developed—such as the contractor's September 2012 baseline change proposal and DOE's August 2016 revised cost and schedule estimate—to determine when they began to identify the MOX project's cost increases and schedule delays and why such problems might have occurred. We also reviewed reports by GAO and DOE's Office of Inspector General that identified and discussed cost and schedule problems affecting the MOX project. Additionally, we interviewed officials from DOE and NNSA to discuss how and when they identified the MOX project's cost and schedule problems.
To examine the extent to which DOE requires that project management lessons learned from MOX and other projects be documented and shared, we reviewed DOE's Order 413.3B, which outlines the primary set of project management requirements governing DOE and NNSA capital asset projects that have a total project cost of greater than $50 million. We also reviewed DOE guidance documents, such as those related to DOE Order 413.3B, to further understand DOE's suggested approaches for meeting its existing lessons learned requirements. Similarly, we reviewed documents from NNSA and DOE's Offices of Environmental Management and Science, such as those found in business-operating procedures and standard-operating policies and procedures, to examine how those documents supplement the lessons learned requirements included in DOE Order 413.3B. In addition, we collected examples of capital asset project-management lessons learned from DOE and NNSA, including those from the MOX project, from a variety of sources, such as lessons-learned reports, project peer reviews, entries stored in DOE's Project Assessment and Reporting System (PARS II) and NNSA's internal databases, monthly lessons-learned bulletins, and presentations, among others. To better understand lessons learned and their role within project management, we reviewed reports by GAO, the U.S. Army's Center for Army Lessons Learned, and the Project Management Institute that identify and discuss key practices for lessons learned. We selected these sources because they are widely recognized for key practices on lessons learned. We then compared the project management lessons learned requirements outlined in DOE Order 413.3B against these key practices. We also discussed project management lessons learned requirements and processes with officials from DOE's Offices of Environmental Management, Project Management, and Science and NNSA's Office of Acquisition and Project Management. We conducted this performance audit from May 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Timeline of Selected Information and Events Pertaining to the MOX Project DOE announced a plan to dispose of surplus, weapons-grade plutonium through a dual approach that would include constructing a facility for the purposes of converting the plutonium into mixed-oxide fuel for use in modified commercial reactors. The initial estimate for the MOX project—that is, not an approved baseline—totaled $1.4 billion, with completion of construction expected to be in September 2004. DOE awarded the contract for designing, constructing, and operating a MOX facility to the consortium of Duke, Cogema, Stone & Webster, LLC—now MOX Services, LLC, or MOX Services. According to a December 2005 DOE Inspector General report, in 1999, an independent team reviewed the MOX contract and warned of the potential for escalating costs because the contractor had no incentives to minimize costs and no penalties for overruns or poor performance.
DOE announced that it would construct the MOX project (as well as two other facilities) at the Savannah River Site located in Aiken, South Carolina.

A February 2001 independent cost estimate, which reviewed the MOX contractor's preliminary cost estimate, concluded that the MOX project would cost about $2.4 billion to construct and operate, including about $1.1 billion to construct the facility.

The National Nuclear Security Administration's (NNSA) February 2002 report to Congress on the disposition of surplus defense plutonium at the Savannah River Site concluded that the facility component of the mixed-oxide fuel option identified would cost about $2.2 billion to implement over about 20 years. According to the report, about $1 billion of these costs would be for designing and constructing the facility, with construction being completed during fiscal year 2007.

According to DOE's fiscal year 2004 budget request, a preliminary estimate of the MOX project's total cost was about $1.8 billion.

A July 2004 independent review found that the MOX project had experienced a cost increase of about 300 percent for the design and development phase compared with what was preliminarily planned for in 1999, in part due to a number of factors, including design changes and underestimates. Moreover, the report cited the MOX project as an example of a DOE project greater than $500 million that should have had an approved performance baseline many years prior, given that it had reached critical decision (CD)-1 approval, or the approval of alternative selection and cost range, in 1997.

According to a December 2005 report by DOE's Office of Inspector General, as of July 2005, NNSA's not-yet-validated estimate for the design and construction of the MOX project was about $3.5 billion ($2.8 billion for construction).

In February 2006, DOE's fiscal year 2007 budget request reported a preliminary estimate for the MOX project totaling about $3.6 billion, but the department reiterated that the estimate would be finalized following the completion of the project's performance baseline. The request also noted that design costs for the MOX project increased from $243 million to $765 million, primarily for two reasons: the decision to fund some design work for gloveboxes and enhanced aqueous polishing during the design phase rather than the construction phase, and increased design work to adapt the facility to handle and treat several tons of pure plutonium resulting from the cancellation of plutonium immobilization. Immobilization would have entailed incorporating plutonium into a corrosion-resistant ceramic matrix and then encasing the immobilized plutonium in glass along with highly radioactive nuclear wastes that already existed at DOE sites, rendering the plutonium inaccessible and unattractive for reuse in nuclear weapons; NNSA canceled this approach in 2002.

A July 2006 external independent review of the MOX project's preliminary cost and schedule estimate projected the MOX project's total cost to be about $4.7 billion, with the project expected to be completed in April 2016. The review's estimated total project cost reflected an increase of $352 million over the proposed total project cost of $4.3 billion due to increases in the cost of some construction activities and contingency.
In February 2007, DOE's fiscal year 2008 budget request reported that the revised total cost for the MOX project totaled about $4.7 billion and that the estimate was in the final stages of validation as part of the department's critical decision process. The request stated that the revised cost was a change from the prior not-yet-validated $3.6 billion estimate in DOE's fiscal year 2007 budget request, with over 50 percent of the $1.1 billion cost increase attributed to an increase in contingency funds for the project during construction and cold startup. Also in February 2007, responsibility for the MOX contract was officially transferred to the Savannah River Site Office.

In April 2007, DOE formally approved a cost estimate, or baseline, for the MOX project of $4.8 billion and start of operations in September 2016.

In August 2007, construction of the MOX project began.

In May 2008, DOE certified the MOX contractor's earned value management (EVM) system.

A July 2008 independent project review identified a number of concerns, including that only one person was dedicated to the development and upkeep of the MOX project's procurement status information and that the project's procurement strategy would require additional procurement and engineering staff to meet future demands.

In December 2008, as a result of funding reductions for fiscal year 2008, DOE approved a revised cost estimate for the MOX project of $4.9 billion and a 1-month delay in the start of operations to October 2016.

According to a July 2009 report, the MOX contractor's 2009 annual forecasted estimate for completing the MOX project totaled approximately $4.4 billion, an increase of about $283.8 million from the 2008 annual forecasted estimate.

In May 2010, an independent review of the MOX contractor's EVM system found that the contractor's performance data could not be used to accurately assess the cost performance of the project, in part because the contractor was inappropriately using management reserve funds to cover cost overruns. (See the illustrative sketch below.) The MOX contractor began to implement a number of corrective actions in response to the report's findings.

According to an August 2010 report, the MOX contractor's 2010 annual forecasted estimate for completing the MOX project totaled approximately $4.6 billion, an increase of about $207.1 million from the 2009 annual forecasted estimate.

In February 2011, DOE's Office of Acquisition and Project Management—now the Office of Project Management—changed the overall status of the MOX project from green to yellow, indicating that the project was at risk of breaching its approved cost estimate (i.e., performance baseline).

A May 2011 project peer review found that the MOX project faced expected cost growth and would be challenged in identifying approximately $364 million in cost savings necessary to deliver the project at its total project cost (of $4.9 billion).

A June 2011 follow-on to the May 2010 independent review of the MOX contractor's EVM system found that the project was likely to exceed the total project cost by anywhere from $104 million to $699 million, with an estimated most likely cost overrun of $493 million. Nonetheless, DOE recertified the MOX contractor's EVM system after the MOX contractor completed a number of corrective actions.

According to a July 2011 report, the MOX contractor's 2011 annual forecasted estimate for completing the MOX project totaled approximately $4.7 billion, an increase of about $142.4 million from the 2010 annual forecasted estimate.
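The EVM findings in the timeline entries above rest on a few standard metrics. The following minimal sketch, in Python and using purely illustrative numbers rather than MOX project data, shows how those metrics are computed and one simplified way that improperly drawing on management reserve, as the May 2010 review found, can make cost performance look healthier than it is:

def evm_metrics(bcwp, bcws, acwp):
    """Standard earned value management metrics.

    bcwp: budgeted cost of work performed (earned value)
    bcws: budgeted cost of work scheduled (planned value)
    acwp: actual cost of work performed, as reported
    """
    return {
        "cost_variance": bcwp - acwp,      # negative means over cost
        "schedule_variance": bcwp - bcws,  # negative means behind schedule
        "cpi": bcwp / acwp,                # below 1.0 means over cost
        "spi": bcwp / bcws,                # below 1.0 means behind schedule
    }

# Illustrative figures in $ millions: $400M of work performed, $450M
# planned, and $500M actually spent.
honest = evm_metrics(bcwp=400.0, bcws=450.0, acwp=500.0)

# Simplified model of the distortion: if $80M of overrun is absorbed by
# management reserve instead of flowing into reported cost variance, the
# reported figures show a much healthier project.
masked = evm_metrics(bcwp=400.0, bcws=450.0, acwp=420.0)

print(honest["cpi"])  # 0.8 -- a 25 percent cost overrun is visible
print(masked["cpi"])  # ~0.95 -- most of the overrun is hidden

In this example, the honest cost performance index (CPI) of 0.80 signals that the project spends $1.25 for every $1.00 of work accomplished, while the masked CPI of about 0.95 would suggest, misleadingly, that costs are nearly on plan.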
In January 2012, NNSA directed the MOX contractor to add additional scope for plutonium metal oxidation capability and to include updates to the project's current cost and schedule projections, with a baseline change proposal due by the end of May 2012.

A March 2012 project review found that the MOX project's cost and schedule baselines had a very low probability of being met and estimated that the total project cost was likely underestimated by anywhere from $600 million to $900 million when compared with the project's approved total cost of $4.9 billion. The review team recommended that the project develop an updated and more realistic baseline.

Also in March 2012, DOE changed the overall status of the MOX project from yellow to red, indicating that the project was expected to breach its approved cost estimate (i.e., its performance baseline).

A July 2012 project peer review found that the MOX project's likely total project cost would fall within the range of $6.9 billion to $7.3 billion, as opposed to the project's approved total cost of $4.9 billion.

In September 2012, the MOX contractor submitted its revised baseline change proposal to update the MOX project's cost and schedule projections, including an additional scope of work that would provide the MOX project with a plutonium metal oxidation capability, referred to as direct metal oxidation. According to the contractor's proposal, it would cost about $7.4 billion to complete the MOX project without direct metal oxidation, with completion and the start-up of operations by November 2019. The addition of the direct metal oxidation scope of work would cost an additional $262.3 million and would be completed in June 2023, after the completion of the MOX project.

In April 2013, DOE's fiscal year 2014 budget request proposed a slowdown of construction of the MOX project while NNSA took steps to assess alternative plutonium disposition strategies. According to the request, NNSA cited the increase to the contractor's total estimated cost for the project and the budget environment as factors in its decision to pursue a slowdown of the MOX project while conducting an assessment of potential alternative plutonium disposition strategies.

According to NNSA, a May 2013 estimate prepared by the U.S. Army Corps of Engineers concluded that, not including contractor fee, it would cost $9.4 billion to construct the MOX project by 2024 at an annual funding level of $630 million.

According to NNSA, a June 2013 estimate prepared by the MOX contractor projected that it would cost between $8.5 billion and $9.7 billion to construct the MOX project, with completion from 2023 to 2032 depending on whether the annual funding level totaled $350 million or $500 million.

In September 2013, NNSA estimated it would cost about $10.5 billion to construct the MOX project by 2027 at an annual funding level of $500 million.

According to NNSA, a November 2013 estimate prepared by the U.S. Army Corps of Engineers projected that it would cost from $10 billion to $11.7 billion to construct the MOX project, with completion from 2026 to 2036 depending on whether the annual funding level totaled $350 million or $500 million.

In March 2014, DOE's fiscal year 2015 budget request stated that ongoing analysis led to the determination that the MOX project would be significantly more expensive than anticipated and concluded that, due to cost increases, the MOX approach was not viable within available resources.
The request, therefore, called for placing the facility in cold stand-by so NNSA could further study more efficient options for plutonium disposition.

A May 2014 root cause analysis report found that the cost drivers contributing to the MOX project's cost increases since 2007 included not having sufficiently experienced project teams in place, basing the approved cost and schedule estimates on incomplete front-end planning, not sufficiently developing designs to support the project's fast-track procurement and construction, experiencing greater-than-expected inefficiencies in the execution of construction activities, not implementing effective corrective actions, and not adequately applying federal oversight to identify and address project performance issues.

Also in May 2014, the DOE Office of Inspector General reported continuing concerns about the achievability of the estimated cost and completion date for the MOX project. The report also noted that the MOX project no longer had an approved cost and schedule estimate and, in light of the project continuing to receive significant funding, recommended that the MOX contractor develop a new cost and schedule estimate.

In September 2014, in light of certain insufficient project data, NNSA directed the MOX contractor to conduct a review to determine and validate the work completion status—that is, the state of completeness—for all commodities being installed in the MOX project.

In December 2014, both the Carl Levin and Howard P. "Buck" McKeon National Defense Authorization Act for Fiscal Year 2015 and the Consolidated and Further Continuing Appropriations Act, 2015, directed DOE to continue construction and project or program support activities related to the MOX project. However, the National Defense Authorization Act also directed DOE to report on, among other things, alternatives to the MOX project, including cost estimates for each alternative, and how such alternatives would conform to the Plutonium Management and Disposition Agreement.

In February 2015, DOE's fiscal year 2016 budget request called for the continued construction of the MOX project, in part because all four congressional committees of jurisdiction directed that construction on the MOX project continue in fiscal year 2015 while NNSA conducted additional cost studies and technology alternative studies.

In March 2015, NNSA's MOX Project Management Office assessed the MOX contractor's use of the level-of-effort versus the discrete method of earned value and determined that a disproportionate use of level of effort—around 56 percent—was masking the performance of the contractor's discrete work and therefore affecting the accurate measurement of the project's progress. (See the illustrative sketch below.)

In April 2015, the Aerospace Corporation completed a report on the MOX project and estimated that the MOX project's total cost would be about $21.5 billion, with projected completion in 2045 at an annual funding level of $500 million.

In June 2015, the MOX contractor finished its completeness verification review and found that it had over-reported the results of certain commodities being installed in the MOX project. As a result of this review, the MOX contractor revised the amount of earned value claimed for these commodities to address the over-reporting and provide a more realistic accounting of the selected commodities.
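As context for the March 2015 finding above: level-of-effort (LOE) tasks earn value simply by the passage of time, so their performance indices sit near 1.0 regardless of how the underlying work is going, and a large LOE share pulls project-level indices toward 1.0. The sketch below uses hypothetical numbers, not MOX project data, and assumes for simplicity that LOE work spends exactly what it earns:

def blended_cpi(loe_share, discrete_cpi):
    """Project-level cost performance index (CPI) when LOE work is
    blended with discrete work.

    loe_share: fraction of total earned value claimed through LOE
    discrete_cpi: CPI of the discrete (measurable) work alone
    """
    # Normalize total earned value to 1.0. LOE is assumed to spend what
    # it earns (CPI of 1.0); discrete actual cost follows from its CPI.
    loe_actual_cost = loe_share
    discrete_actual_cost = (1.0 - loe_share) / discrete_cpi
    return 1.0 / (loe_actual_cost + discrete_actual_cost)

# Discrete work running 25 percent over cost (CPI = 0.80):
print(round(blended_cpi(0.00, 0.80), 2))  # 0.8 -- overrun fully visible
print(round(blended_cpi(0.56, 0.80), 2))  # 0.9 -- overrun partly masked

Under these assumptions, with 56 percent of earned value claimed through LOE, a 25 percent overrun on the discrete work appears at the project level as an overrun of only about 11 percent.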
In February 2016, DOE's fiscal year 2017 budget request proposed terminating the MOX project in favor of the dilute and dispose option as the path forward for the disposition of the nation's surplus, weapons-grade plutonium. According to the request, the MOX project was found to be significantly more expensive than anticipated and would require approximately $800 million to $1 billion annually for decades.

A May 2016 report prepared for the MOX contractor by High Bridge Associates, Inc., estimated that completing the construction of the MOX project could cost about $5.2 billion and be completed in 10 years at an annual funding level of about $520 million.

In July 2016, the MOX contractor submitted its annual forecasted estimate for completing construction of the MOX project and estimated the total project cost to be about $10 billion, with completion in 2029 at an annual funding level of $350 million.

In August 2016, DOE issued an updated performance baseline estimating that it would cost approximately $17.2 billion to complete construction of the MOX project by 2048, assuming an annual funding level of $350 million. DOE further estimated that it would cost about $14.3 billion to complete construction of the MOX project by 2035, assuming an annual funding level of $500 million. (The relationship between annual funding levels, schedule, and total cost is illustrated in the sketch at the end of this timeline.)

In October 2016, DOE rescinded the MOX contractor's EVM system certification of compliance in response to an August 2016 surveillance review that identified material non-compliances, such as the overstatement of earned value and percentage complete.

A February 2017 report by the U.S. Army Corps of Engineers found that there was likely to be a substantial amount of rework at the MOX project but noted that the magnitude of the likely rework had yet to be determined. The report stated that some of the rework was attributed to design constructability issues, as well as procuring, fabricating, and completing work out of sequence.

In May 2017, DOE's fiscal year 2018 budget request reiterated, for the second consecutive year, a plan to terminate the MOX project in favor of pursuing the dilute and dispose option for plutonium disposition.

Also in May 2017, a DOE Office of Inspector General report stated that NNSA was not aware of the total cost of rework at the MOX project because the time and cost of rework were not definitively tracked prior to fiscal year 2014.

In December 2017, section 3121 of the National Defense Authorization Act for Fiscal Year 2018 authorized the Secretary of Energy to terminate the MOX project if, among other things, the Secretary certified that the remaining life-cycle cost for an alternative option for carrying out plutonium disposition would be less than approximately half of the estimated remaining life-cycle cost of carrying out the plutonium disposition approach utilizing the MOX project.

In February 2018, DOE's fiscal year 2019 budget request reiterated, for the third consecutive year, a plan to terminate the MOX project in favor of pursuing the dilute and dispose option for plutonium disposition.

In May 2018, the Secretary of Energy waived existing requirements to continue MOX construction, but the state of South Carolina obtained an injunction in federal district court temporarily blocking the waiver in June, which NNSA subsequently appealed. In October 2018, a federal appellate court granted a stay of the federal district court's injunction that prohibited termination of the MOX contract and cessation of construction operations. NNSA subsequently issued a notice of termination to the MOX contractor.
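A recurring pattern in the estimates above, including DOE's August 2016 performance baseline, is that a lower annual funding level yields both a later completion date and a higher total cost. A generic, hypothetical sketch of one mechanism behind this pattern (this is not DOE's estimating method): a fixed annual carrying cost for site support, security, and project staff accrues every year the project remains open, so only funding above that fixed cost retires the remaining work.

def years_and_total_cost(remaining_work, annual_funding, fixed_annual_cost):
    """Years to completion and total cost when only funding above a fixed
    annual carrying cost buys down remaining work (all values $ millions)."""
    productive_funding = annual_funding - fixed_annual_cost
    years = remaining_work / productive_funding
    return years, years * annual_funding

# Hypothetical figures, not DOE estimates: $7,000M of work remaining and
# $150M per year in fixed carrying costs.
print(years_and_total_cost(7000, 500, 150))  # (20.0, 10000.0): 20 years, $10.0B
print(years_and_total_cost(7000, 350, 150))  # (35.0, 12250.0): 35 years, $12.25B

Under these assumptions, cutting annual funding from $500 million to $350 million stretches the schedule by 15 years and adds about $2.25 billion to the total cost, the same direction of effect as in DOE's two August 2016 funding scenarios.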
Appendix III: Selected GAO Recommendations from Prior Reports

We have made numerous agency recommendations in prior reports to improve contract and project management in the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA). Some reports contain recommendations for department and agency policies, and others address project management problems for specific projects or also address other agencies besides NNSA. A description of some of our key recommendations, with the status of implementation as of December 2018, is provided below in table 2. For the most up-to-date status of these agency recommendations, see our website: http://www.gao.gov.

Appendix IV: Comments from the Department of Energy

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Hilary Benedict (Assistant Director), Rodney Bacigalupo, Antoinette Capaccio, Tara Congdon, Pamela Davidson, Richard P. Johnson, Eleni Orphanides, Kevin Remondini, Karen Richey, Sara Sullivan, and Tatiana Winger made key contributions to this report.
Why GAO Did This Study

The MOX project, located at DOE's Savannah River Site in South Carolina and overseen by NNSA, experienced significant cost increases and schedule delays following the start of construction in 2007. After spending nearly $6 billion, NNSA terminated the project in October 2018. While DOE and NNSA have made some recent progress, they have historically struggled to complete, within their original cost and schedule estimates, other major construction projects intended to help maintain the nuclear security complex.

GAO was asked to review issues related to oversight of the MOX project. This report examines (1) when NNSA's project management oversight processes recognized cost and schedule problems at the MOX project and the actions the agency took to address them and (2) the extent to which DOE requires that project management lessons learned from MOX and other projects be documented and shared. GAO reviewed agency documents, visited the MOX project, and interviewed DOE and NNSA officials and representatives of the MOX contractor.

What GAO Found

The Department of Energy's (DOE) National Nuclear Security Administration (NNSA) has strengthened its oversight of the Mixed Oxide Fuel Fabrication Facility (MOX) project since 2011 and, as a result, began recognizing cost and schedule problems. The project, begun in 1997, was intended to dispose of large quantities of weapons-grade plutonium no longer required for national security. Prior to 2011, NNSA's project staff failed to recognize signs that the project would not be completed on time or within its approved cost. An independently conducted analysis, prepared in 2014 in response to a GAO recommendation, determined that NNSA staff did not recognize early problems because they were inexperienced in project management. To strengthen oversight, NNSA in late 2010 and 2011 began actions such as conducting additional reviews and transferring oversight of the project to a newly established office specializing in project management. NNSA continued to identify the contractor's performance problems, such as the lack of credible, reliable cost and schedule data. These continued problems contributed to NNSA's decision to terminate the project.

DOE requires that project staff document and share project management lessons learned on capital asset projects like the MOX project, but its requirements do not ensure that all lessons are documented consistently or shared in a timely manner. GAO found that DOE's and NNSA's offices document project management lessons learned differently and that not all of the documented lessons learned are readily accessible to other staff. Additionally, GAO found that DOE does not require that project staff share lessons learned for capital asset projects until the start of construction, which can occur many years after the start of the project. Under key practices, such lessons should be stored in a logical, organized manner, be easily retrievable, and be submitted in a timely manner. By developing requirements that clearly define how and where project management lessons learned should be documented and requiring that the lessons be shared in a timely manner, DOE could improve its lessons-learned process and help improve the success of future capital asset projects. Also, for capital asset projects, DOE does not require the evaluation of the results of all corrective actions to respond to lessons learned to ensure that problems are resolved, consistent with key practices.
By developing requirements to evaluate the effectiveness of corrective actions, DOE could better verify whether the actions had the intended outcome.

What GAO Recommends

GAO is making three recommendations, including that DOE and NNSA develop requirements for defining how and where project management lessons learned for capital asset projects should be documented and shared routinely and in a timely manner, and for evaluating the effectiveness of corrective actions taken in response to lessons learned. DOE agreed with GAO's recommendations.
Background

This section provides an overview of the (1) San Francisco Bay Delta watershed, (2) multiple water demands in the watershed, (3) selected laws and agreements related to restoration efforts in the watershed, and (4) funding for restoration efforts in the watershed.

San Francisco Bay Delta Watershed

The San Francisco Bay Delta watershed is a single, complex ecosystem covering more than 75,000 square miles, almost entirely in California. It includes a diversity of fresh, brackish, and salt water ecosystems. Figure 1 shows the watershed and its three major geographic areas and their subregions. The watershed's three major geographic areas contain unique, yet inherently interconnected, environmental and cultural features and face similar water quality and other threats:

San Francisco Bay and its local watershed (Bay). The San Francisco Bay is the large body of mostly salt water through which the local watershed, as well as the entire Bay Delta watershed, drains into the Pacific Ocean. According to U.S. Census data, more than 7 million people live in the nine-county Bay area containing the local watershed—an area with one of the nation's densest populations. Large cities, such as San Jose, San Francisco, and Oakland; their suburbs, including Silicon Valley; and numerous other cities occupy much of the land surrounding the Bay. Since the California Gold Rush in the mid-1800s, most of the Bay's historical wetlands have been filled for development or converted to farmland or industrial salt ponds, and the loss of these natural features has removed important barriers for flood and erosion control. Because of its urban setting and location at the downstream end of the watershed, the Bay's water quality faces threats from numerous sources of pollution, including sewage, trash, urban and industrial runoff (e.g., metals, solvents, and inorganic chemicals), and runoff from agriculture and past mining activities upstream (e.g., nutrients, pesticides, and metals).

Sacramento-San Joaquin Delta (Delta). The Sacramento-San Joaquin Delta comprises roughly 1,000 square miles where the fresh waters of the Sacramento and San Joaquin Rivers converge south of the city of Sacramento before flowing into the San Francisco Bay through a network of more than 50 islands. It is a largely rural area that is also home to more than 500,000 people living mostly on its suburban periphery, and its communities and farmland are protected from flooding by approximately 1,100 miles of levees. During the California Gold Rush, settlers diked the Delta's channels and waterways and began building levees to create dry land, resulting in the loss of nearly all of the original wetlands in the area. As a result, the Delta has been converted from an historic plain of seasonally flooded brackish and freshwater wetlands to a mosaic of channelized waterways surrounding its islands. According to reports, many of these islands have subsequently subsided up to 25 feet below sea level, due largely to the use of groundwater and farming, which can cause the islands' rich peat soil to oxidize and erode. The Delta is a major outdoor recreation destination for activities such as fishing and boating. Its key water quality threats include agricultural, urban, and past mining runoff. In addition, the complex system of water supply infrastructure projects built throughout the watershed diverts fresh water from the Delta to other parts of the state, changing the saltwater content of much of the area's wetlands and marshes.

Upper watershed.
The upper watershed is the vast area where the watershed's rivers, streams, and tributaries originate at the crest of the Sierra Nevada and other mountain ranges and then travel hundreds of miles through California's Central Valley, the nation's most productive agricultural area, according to USDA. The upper watershed includes three subregions: the Sacramento River watershed in northern California, through which water generally flows south; the San Joaquin River watershed in central California, through which water generally flows west and then north; and the Tulare Lake Basin in southern California, through which water no longer drains naturally. About 5 million people live throughout the area in a mix of rural and urban communities, including large inland cities, such as Fresno and Sacramento. In the upper watershed, the Sierra Nevada snowpack serves as temporary storage for roughly one-third to one-half of California's water, depending on the year. Most of the major rivers hold reservoirs to capture and store the snowmelt for longer-term use. As a result of mining, agriculture, and water infrastructure development, the area's historic water flows have been highly modified, the Central Valley's historic grasslands and flood plains have been converted to managed wetlands and are often threatened by land subsidence, and runoff from agriculture and past mining activities is a dominant threat to water quality in low-lying areas. In the mountains and foothills, forest fires can threaten water quality, mostly by causing erosion that increases sediment in streams.

The Bay and Delta together form the San Francisco Bay/Sacramento-San Joaquin Delta Estuary, often referred to as the Bay Delta, one of the largest estuaries in North America. The Bay Delta is the ecosystem created by the mixing of salt water from the Pacific Ocean and fresh water from the Sacramento and San Joaquin Rivers and their tributaries. It provides habitat for about 750 species of plants and animals, including more than 130 species of fish. It also contains more than 700,000 acres of farmland, and millions of users access it each year for recreational activities, such as hunting, boating, and fishing. In contrast to the managed wetlands of the upper watershed, the Bay Delta wetlands are tidal areas—brackish wetlands influenced by the push and pull of ocean tides. Even with the tidal influence, the saltwater content of the Bay Delta is also heavily influenced by the amount of fresh water available, much of which is diverted by water supply infrastructure projects and can vary due to multiple water demands.

Multiple Water Demands in the Watershed

Because of the watershed's economic, environmental, and cultural importance, it has been the subject of political and legal battles over multiple water demands for decades. Beginning in the 1930s, federal and then state water projects—two complex networks of dams, pumps, reservoirs, canals, and other facilities—have diverted water from the Sacramento and San Joaquin Rivers to agricultural, industrial, and urban consumers in the Bay area and southern parts of California. The federal Central Valley Project primarily diverts water for agricultural use, and the California State Water Project, which was developed in the 1960s, primarily diverts water for drinking and industrial use.
Hundreds of water contractors, such as the Westlands Water District and the Metropolitan Water District of Southern California, purchase water from these projects, which can divert about 20 to 70 percent of the natural water flowing into the Bay Delta, depending on legal limits and seasonal levels of precipitation. Other water demands include habitat needs for threatened and endangered species such as the Delta smelt (a fish) and various salmon species. In particular, federal agencies have developed instream flow requirements for these species of fish that require water to be released from dams upstream to help maintain adequate water quality and temperature for the fish. As a result, most of the water in the watershed is managed by federal, state, and local water projects for use by private and investor-owned water agencies and districts and their customers, as well as for fish and habitat purposes.

Any proposed changes to this complicated water allocation system—which accounts for California's largest supply of fresh water—often raise concerns among water users about losing water, receiving reduced priority for water supplies, or obtaining water of poor quality. For example, according to one study, the state of California has allocated more water rights than could be available naturally. Other concerns involve the system's infrastructure—the system depends largely on a complex network of aging levees, many of which were first built in the mid-1800s—and the possible effects on water supply and quality. Specifically, earthquakes, floods, subsidence, or sea level rise could cause these levees to fail and put the state's fresh water supply at risk from saltwater contamination. As a result of these and other concerns, many stakeholders in the watershed have been, and continue to be, involved in legal actions over multiple water demands.

Selected Laws and Agreements Related to Restoration Efforts in the Watershed

Construction and operation of the Central Valley Project and the State Water Project has fundamentally altered the physical environment of the Bay, Delta, and parts of the upper watershed, where nearly every tributary has been dammed to create reservoirs to supply these water projects. By the late 1980s, species decline and water quality problems had become so critical in the Bay Delta that stakeholders raised concerns that the continued operation of these projects might conflict with federal and state water quality and endangered species laws (discussed below).

In 1992, the Central Valley Project Improvement Act amended the Central Valley Project authorizations, which previously focused primarily on certain uses such as irrigation and power generation. The act specifies, among other things, a number of actions for the purposes of protecting, restoring, and enhancing fish, wildlife, and associated habitats in the Central Valley and Trinity River basins in California. The act's stated purposes include, among other things, achieving a reasonable balance among competing demands for use of Central Valley Project water, including the requirements of fish and wildlife, agricultural, municipal and industrial, and power contractors. Under the act, Reclamation implements several programs, including those to restore habitat on Central Valley rivers and streams, improve diversion facilities to protect certain juvenile fish, and deliver water supplies for critical wetland habitat supporting resident and migratory waterfowl and threatened and endangered species.
To address the increasingly complex issues surrounding the Bay Delta, the federal and California state governments reached an agreement to create the CALFED Bay-Delta Program (CALFED) in 1995 to restore ecological health, improve water quality, fortify water management infrastructure, and increase water supply reliability. From 1995 through 2009, about 20 federal and state agencies collaborated through this program, issuing a record of decision in 2000 outlining CALFED goals and programs and implementing federal and state legislation enacted in the early 2000s. Under the National Environmental Policy Act of 1969, agencies issue a record of decision at the end of the environmental impact statement process, which they are required to conduct for major federal actions that have a significant effect on the environment.

The 2000 record of decision established a program with 12 components, including water quality and ecosystem restoration, to be managed by state and federal agencies. According to the record of decision, CALFED's water quality goal was to provide good water quality for the millions of Californians who rely on the Delta for all or a part of their drinking water. CALFED's goal for ecosystem restoration under the record of decision was to improve aquatic and terrestrial habitats and natural processes to support stable, self-sustaining populations of diverse and valuable plant and animal species through an adaptive management process. This process includes reevaluating or updating goals, activities, or performance measures based on the results of ongoing monitoring and progress assessments. Under the record of decision, the water quality and ecosystem restoration programs include activities throughout the Bay, Delta, and upper watershed.

In 2002, California enacted the California Bay-Delta Act, which established the California Bay-Delta Authority to oversee CALFED. In 2004, the Calfed Bay-Delta Authorization Act (CALFED Act), a federal law, implemented the record of decision, directed federal agencies to coordinate CALFED activities with California state agencies, and authorized federal agencies to participate in the California Bay-Delta Authority as nonvoting members for the full duration of the period it continued to be authorized by state law. CALFED received federal appropriations to develop and implement ecosystem protection and restoration projects. Section 105 of the act requires Interior to report annually on the accomplishments of various program components, including those related to additional water storage and ecosystem restoration. Section 106 of the act requires OMB, in coordination with the governor of California, to report annually on all expenditures since 1998 to achieve the program's objectives.

However, in 2009, California repealed the California Bay-Delta Act and abolished the California Bay-Delta Authority, replacing them with the Delta Reform Act and the Delta Stewardship Council, respectively. The 2009 law focused state efforts more specifically on the Delta, in part by tasking the council with developing an enforceable Delta Plan for promoting a healthy Delta ecosystem and a more reliable water supply. According to a report by the California Legislative Analyst's Office, the CALFED federal-state partnership ended due to several challenges, including uncertain financing, weak governance, and a lack of accountability. Although California state law was amended in 2009, the federal CALFED Act has not been significantly amended since its enactment in 2004.
As we reported in June 2015, although the CALFED record of decision remains in effect, the state's future direction for Bay Delta activities is likely to be coordinated through the Delta Plan. The Delta Plan was, under certain conditions, to incorporate a 50-year conservation plan initiated by the state, in cooperation with Reclamation, in 2006. The 50-year plan proposed restoring approximately 150,000 acres of wetlands, grasslands, and other areas in and around the Delta over 50 years and addressing water supply reliability concerns by building two large tunnels to transport fresh water under the Delta. In 2015, facing uncertainties in obtaining permits to implement the plan, the state replaced the 50-year plan with two separate initiatives managed by the California Natural Resources Agency: (1) California EcoRestore, which aims to begin restoring at least 30,000 Delta acres over 5 years, and (2) California WaterFix, which includes building the two tunnels from the 50-year plan. The ecosystem chapter of the Delta Plan is being amended, and the amended chapter is anticipated to be complete by early 2019, according to Delta Stewardship Council officials. While it does not directly incorporate EcoRestore, the Delta Plan ecosystem amendment currently under development acknowledges that EcoRestore's successful implementation is needed to achieve the restoration objectives in the Delta Reform Act, according to Delta Stewardship Council officials.

In addition to the CALFED Act and the Central Valley Project Improvement Act, other federal laws, including water quality and endangered species laws, are relevant to restoration efforts in the watershed. Some relevant laws include the following:

The Clean Water Act. The objective of this act is to restore and maintain the chemical, physical, and biological integrity of the nation's waters. A 1987 amendment to the act created the National Estuary Program to promote comprehensive planning for, and conservation and management of, estuaries of national significance. The National Estuary Program calls for the development of comprehensive conservation and management plans (CCMP) for these designated estuaries, including the Bay Delta estuary, which was designated under the program in 1987. Under the act, EPA also works with California to regulate water quality. In addition, section 404 of the Clean Water Act generally prohibits the discharge of dredged or fill material into waters of the United States without a permit from the Corps. The Corps administers the permitting responsibilities of the section 404 program, while EPA develops, in conjunction with the Corps, the substantive environmental criteria that permit applicants must meet.

The Endangered Species Act. This act was enacted to, among other things, provide a means to conserve the ecosystems upon which endangered species and threatened species depend and to provide a program for the conservation of such endangered species and threatened species. Under the act, species may be listed as endangered or threatened. Several species in the watershed are listed as threatened or endangered, including the Delta smelt, steelhead trout, spring- and winter-run Chinook salmon, Ridgway's rail (a bird), salt marsh harvest mouse, red-legged frog, and California tiger salamander. NOAA's National Marine Fisheries Service and the U.S.
Fish and Wildlife Service, depending on the species, implement the act, including by issuing biological opinions regarding the potential effects of proposed federal actions on endangered and threatened species.

The San Joaquin River Restoration Settlement Act. This act implements a settlement that, among other things, outlines measures to achieve the goals of restoring the San Joaquin River and successfully reintroducing California Central Valley spring-run Chinook salmon. Under the act, Reclamation is to coordinate several actions, including the expansion of a segment of the San Joaquin River to provide habitat for juvenile salmon.

Funding for Restoration Efforts in the Watershed

Across the watershed, funding for restoration efforts typically comes from a variety of federal, state, local, nongovernmental, and private entities. According to Interior officials, federal funding includes approximately $37 million per year for CALFED overall and additional funding for implementation of the Central Valley Project Improvement Act, available for certain projects in the Delta and upper watershed. Also, according to Interior officials, the U.S. Geological Survey funds research and monitoring to support water quality management, water operations, and restoration. Additional federal sources of funding include grant programs from EPA, NOAA, and the U.S. Fish and Wildlife Service and projects funded through Reclamation, in addition to funding for water projects that can include a restoration component. For example, Reclamation has provided about $37 million annually since fiscal year 2015 for the San Joaquin River Restoration Program. A number of other federal entities, including USDA's Natural Resources Conservation Service, also fund restoration projects in the watershed. For example, USDA's Natural Resources Conservation Service has programs, such as the Environmental Quality Incentives Program and the Agricultural Conservation Easement Program, to support farm conservation efforts throughout the Central Valley.

Funding from state sources primarily comes from state water and conservation agencies and is funded through statewide bonds and the state's general fund. For example, in 2014, California voters authorized $7.5 billion in bonds to fund ecosystems and watershed protection and restoration; water supply infrastructure projects, including surface and groundwater storage; and drinking water protection across the state, including the San Francisco Bay Delta watershed. In addition to the bond funding, in 2016, voters from nine Bay area counties authorized an annual $12 parcel tax that is expected to raise approximately $500 million over 20 years for Bay wetlands restoration, as well as other multi-benefit projects.

In the Delta, in addition to federal and state funding for restoration efforts, according to state officials, funding often comes from water contractors that pay for major restoration efforts through their obligations under the State Water Project to address biological opinions issued by federal regulatory agencies for endangered or threatened species. For example, water contractors are responsible for funding restoration efforts under the state's California EcoRestore initiative, including at least $205 million to restore 8,000 acres of fish habitat and $171 million for 17,000 acres of floodplain improvements.
EcoRestore began in 2015, and total costs for projects are expected to reach at least $300 million in the initiative's first 4 years, according to the California Natural Resources Agency.

According to officials from several federal and nonfederal entities, including EPA and the San Francisco Estuary Partnership, no official estimates exist for the expected total future costs to restore the entire watershed, though some estimates have been developed for limited types of activities. For example, regarding cost estimates, the San Francisco Estuary Partnership typically refers to Save the Bay's 2007 Greening the Bay report, which estimates that it will cost almost $1.5 billion over 50 years to restore the 36,176 acres of Bay shoreline already set aside for restoration. Overall, according to related reports, investments on the order of tens of billions of dollars would likely be necessary to restore the entire watershed.

Federal and Nonfederal Entities Coordinate Comprehensive Restoration Efforts in Specific Geographic Areas, but Federal Entities Do Not Coordinate Across the Watershed

Federal and nonfederal entities, including state agencies and nongovernmental organizations, carry out and coordinate a wide range of restoration efforts in the watershed. These entities coordinate comprehensive restoration efforts in the Bay and Delta primarily through two coordinating bodies—the San Francisco Estuary Partnership and the Delta Plan Interagency Implementation Committee, respectively. In the upper watershed, federal and nonfederal entities do not have a coordinating body for comprehensive restoration efforts, but they do coordinate restoration efforts through plans specific to entities, projects, or restoration topics. In 2009, federal entities first developed an Interim Federal Action Plan for coordinating federal restoration efforts across the entire watershed, but not all of the entities are using the plan.

Federal and Nonfederal Entities Carry Out a Wide Range of Restoration Efforts in the Watershed

Federal and nonfederal entities carry out a wide range of restoration efforts—i.e., water quality improvement and ecosystem restoration—that can involve multiple entities, vary in geographic scope, span multiple years, and are intended to achieve multiple benefits. According to our review of reports and interviews with officials from federal and nonfederal entities, water quality improvement efforts include projects intended to improve the physical, chemical, or biological characteristics of water, and ecosystem restoration efforts include projects to restore degraded habitats. According to these interviews, restoration efforts can target a range of priorities, including conservation, resiliency, mitigation, monitoring, and enhancement. In addition, these efforts can directly or indirectly support water quality improvement and ecosystem restoration goals and objectives, and they can encompass a variety of activities, such as planning, project selection, project implementation, permitting, funding, technical assistance, and assessment. Figure 2 shows the locations and different habitat types for a number of the completed and ongoing restoration projects implemented by federal and nonfederal entities—partly under the CCMP, California EcoRestore, and other efforts—in the Bay Delta Estuary.

Restoration efforts in the watershed can involve multiple levels of government, as well as nongovernmental organizations.
For example, the South Bay Salt Pond Restoration Project near San Jose, California—the largest tidal wetland restoration project on the west coast of the United States, according to the project's website—is a joint effort among the U.S. Fish and Wildlife Service, the California Department of Fish and Wildlife, and the California State Coastal Conservancy, along with local governments, donors, consultants, and other participants. Similarly, the Hamilton Wetland Restoration Project near Novato, California, which involves the restoration of tidal and seasonal wetlands, is a joint effort among the Corps, the California State Coastal Conservancy—the nonfederal sponsor and landowner—and other federal and nonfederal entities.

Restoration efforts in the watershed also vary in geographic scope and can span jurisdictions. The South Bay Salt Pond Restoration Project includes federal and state land and, according to the project's website, is expected to restore more than 15,000 acres of industrial salt ponds to tidal marsh and other wetland habitats in three counties located along the shores of the southern part of San Francisco Bay. (See fig. 3.) The Hamilton Wetland Restoration Project comprises state-owned land and, according to the California State Coastal Conservancy, is intended to restore approximately 2,600 acres to tidal wetland on a former army airfield and adjacent properties along the San Francisco Bay in an area 25 miles north of San Francisco. (See fig. 4.) In contrast, other efforts include project areas on farms. For example, under its Environmental Quality Incentives Program, USDA's Natural Resources Conservation Service has focused on providing conservation planning, among other services, for farm operators and nonindustrial forestland owners, including tribes.

Officials from several federal and nonfederal entities, including EPA, the San Francisco Estuary Partnership, the Central Valley Joint Venture, and the California State Coastal Conservancy, stated that the primary focus of restoration efforts varied from one geographic area to another. For example, according to some of these officials, efforts to restore tidal wetlands are prevalent in the Bay, and efforts to address land subsidence are prevalent in the Delta. (See fig. 5.)

Restoration efforts in the watershed can span multiple years. For example, the South Bay Salt Pond Restoration Project is an ongoing, multi-phase, 50-year effort that began with the acquisition of former industrial salt ponds in 2003. Likewise, the Hamilton Wetland Restoration Project is an ongoing, multi-phase effort that began in 1999. In the upper watershed, planning began in 2012 for California EcoRestore's ongoing Yolo Bypass Salmonid Habitat Restoration and Fish Passage Project, which aims to increase floodplain habitat for endangered and threatened fish species in the Sacramento River watershed.

Restoration efforts in the watershed can also have multiple primary benefits. For example, the Hamilton Wetland Restoration Project was designed to reverse years of land subsidence, restore wetlands, reestablish historic habitat for wildlife and endangered species, and beneficially reuse dredged sediment. Multiple benefits could also accrue over time.
For instance, according to the California State Coastal Conservancy, while the Hamilton Wetland Restoration Project currently provides habitat for migratory water birds and fish, it is expected to become thickly vegetated with a complex network of tidal channels that provide habitat for several threatened and endangered species. Restoration efforts can also provide multiple secondary benefits. For example, restoring wetlands may provide resilience against sea level rise, habitat for wildlife, and an area for recreation.

Federal and Nonfederal Entities Coordinate Comprehensive Restoration Efforts in the Bay and Delta through Coordinating Bodies and Specific Restoration Efforts in the Upper Watershed

Federal and nonfederal entities coordinate comprehensive restoration efforts in the Bay and Delta through the San Francisco Estuary Partnership and the Delta Plan Interagency Implementation Committee, respectively. In the upper watershed, federal and nonfederal entities coordinate specific restoration efforts through plans specific to entities, projects, or restoration topics. Specifically:

Bay. In the Bay, federal and nonfederal entities coordinate comprehensive restoration efforts through the San Francisco Estuary Partnership. The partnership was established in 1987 and receives funding from EPA's National Estuary Program to implement the CCMP for the San Francisco Estuary (i.e., the Bay Delta). The partnership's members include federal, state, and local government entities; nongovernmental organizations, such as conservation groups; and a utility commission. The partnership's members provided input on developing and revising the CCMP and have integrated goals into the CCMP from their own topic- or entity-specific strategic plans. Partnership members also coordinate restoration efforts guided by the CCMP. For example, the U.S. Fish and Wildlife Service, the U.S. Geological Survey, the California State Coastal Conservancy, and the California Department of Fish and Wildlife work to coordinate on managed wetlands and ponds—one of the restoration efforts outlined in the CCMP. Furthermore, partnership members may carry out various activities for restoration projects in the Bay, such as project planning, regulating and permitting (e.g., for dredging and extracting sediment), on-the-ground project implementation, and scientific monitoring. Partnership members meet quarterly and participate in a conference every 2 years to provide updates on the status of projects, share scientific research, and present monitoring results.

Delta. In the Delta, federal and nonfederal entities coordinate comprehensive restoration efforts through the Delta Plan Interagency Implementation Committee. This committee was created in 2013 by the Delta Stewardship Council, the state agency responsible for overseeing the Delta Plan—the state's plan for promoting a more reliable water supply and a healthy ecosystem. The committee is made up of representatives from 7 federal and 11 state entities and helps implement the Delta Plan. Members of the committee may also carry out various activities for restoration projects in the Delta, such as scientific monitoring, on-the-ground project implementation, project planning, and regulating and permitting (e.g., for placing materials such as concrete structures or rocks into the water to support levees). The committee meets twice a year and participates in conferences to gather scientific consensus or to share recent research.
Some committee members are also members of the San Francisco Estuary Partnership and coordinate separately through initiatives that may have predated the committee and that are specific to entities, projects, or restoration topics.

Upper watershed. In the upper watershed, while federal and nonfederal entities do not have a coordinating body for comprehensive restoration efforts, they coordinate restoration efforts through plans specific to entities, projects, or restoration topics. For example, 20 federal, state, and nongovernmental entities coordinate through the Central Valley Joint Venture—a partnership with the mission to conserve migratory bird habitat—and its implementation plan. Likewise, dozens of federal, state, and local government entities coordinate to implement the Central Valley Flood Protection Plan, a plan adopted by California's Central Valley Flood Protection Board for managing flood risk. In addition, NOAA, the U.S. Fish and Wildlife Service, and the California Department of Fish and Wildlife coordinate on implementing a conservation strategy in parts of the Central Valley.

Federal Entities Developed a Plan for Coordinating Federal Restoration Efforts across the Watershed, but Not All of the Entities Are Using the Plan

A federal memorandum of understanding and an Interim Federal Action Plan outline how federal entities are to coordinate the federal government's restoration activities and support state efforts across the entire watershed. The California Bay-Delta Memorandum of Understanding among Federal Agencies, signed in September 2009, established a Federal Bay-Delta Leadership Committee to coordinate federal efforts related to restoration and water management across the entire watershed while the state structure was transitioning from the California Bay-Delta Authority to the Delta Stewardship Council, and the state therefore was no longer participating in the originally structured CALFED federal-state partnership. According to the memorandum, this federal committee was to be led by Interior and CEQ and to meet regularly. The signatories of the memorandum also agreed to develop a federal work plan to outline near-term federal actions and begin to identify and prioritize key longer-term federal actions for restoration efforts and water management across the watershed. The entities issued an Interim Federal Action Plan in December 2009.

The Interim Federal Action Plan organizes federal actions into four priorities, including working with state and local authorities on joint project planning to ensure healthy Bay Delta ecosystems and to improve water quality. Specifically, the federal entities agreed to build projects to improve water supply, including through conservation efforts in municipal areas and on agricultural lands; to fund habitat restoration projects for threatened and endangered fish across the watershed; and to assess the effects of pollutants such as mercury and pesticides on water quality. According to the Interim Federal Action Plan, these priorities cut across different federal entities' missions and activities in the watershed. Further, the Interim Federal Action Plan includes actions aimed at ensuring the effective and efficient use of federal resources, such as by leveraging nonfederal resources. In late 2010, the agencies that signed the memorandum provided a status update on the Interim Federal Action Plan that confirmed the federal government's support of state efforts in the watershed.
The status update directs the federal government to review the components of any proposed restoration plan and understand the costs and benefits such a plan would have for federal water resources and taxpayers. The President’s fiscal year 2019 budget, which sets the administration’s top-level priorities and was released in February 2018, reaffirmed the federal government’s commitment to the Interim Federal Action Plan and stated that the plan is under the leadership of CEQ, Interior, and the Delta Stewardship Council. OMB staff stated the Interim Federal Action Plan provides overall guidance to federal agencies and clarifies that the agencies should focus their various actions in the watershed on the plan’s four priorities, including when working with nonfederal entities through collaborative bodies. Nonetheless, not all federal entities are using the Interim Federal Action Plan. Officials from the USDA Natural Resources Conservation Service told us they use the plan to determine conservation funding levels and priorities in the watershed. However, a former official who was responsible for CEQ’s Bay Delta portfolio said that although the plan still matches the needs of the watershed, agencies had stopped following it in the past several years because the plan had become less of a priority for the administration. In addition, EPA and NOAA officials stated they were not aware of agencies following the plan in the past several years. According to the plan, its most important aspect is the federal government’s reaffirmation of its partnership with state and local entities and its commitment to coordinate actions with them. Yet, of the 31 nonfederal entities responding to our survey questionnaire, 11 indicated that they were not at all familiar with the Interim Federal Action Plan, and another 9 indicated that they were slightly familiar with it. Further, according to Interior officials, although restoration efforts described in the Interim Federal Action Plan have largely remained the same and its functions and activities are still relevant, the plan is outdated. In particular, according to these officials, the Interim Federal Action Plan refers to the state’s 50-year conservation plan, which California is no longer pursuing. Moreover, according to Interior and EPA officials, the Federal Bay-Delta Leadership Committee—the coordinating body for the Interim Federal Action Plan—has not convened since the Delta Plan was developed in May 2013, even though the memorandum called for the committee to meet on a regular basis. Instead, according to Interior officials, the state-led Delta Plan Interagency Implementation Committee has replaced the federal leadership committee as the coordinating body for federal efforts in the watershed. Interior and EPA officials we interviewed said the federal role outlined in the Interim Federal Action Plan is no longer relevant because of recent leadership and strategic changes in the watershed resulting from the state’s withdrawal from the originally structured CALFED program and increased focus on the Delta through the Delta Stewardship Council. According to OMB staff and Interior and Delta Stewardship Council officials, the Delta Plan Interagency Implementation Committee is the current approach for coordinating among federal and state entities, and according to Interior officials, federal participation in the committee is key.
The committee, however, focuses specifically on the Delta, and the Delta Plan generally does not include restoration efforts in the Bay or the upper watershed. Restoration requires a robust watershed-wide approach, according to the Interim Federal Action Plan, because the Bay, Delta, and upper watershed systems are interconnected. Specifically, according to one respondent to our survey, actions in the upper watershed affect water quality improvement and ecosystem restoration success in the Delta and ultimately the Bay. For example, according to California state officials, carefully timed water releases from dams in the upper watershed are the only way to control saltwater content in the Delta, which is critical for agriculture and urban water supply. Further, a National Research Council report states that Delta planning cannot be successful if it is not integrated into statewide planning because the Delta is fed by large upstream watersheds and water from the Delta is used outside the region, such as in the Bay. In addition, federal funding supports efforts throughout the watershed. While the Interim Federal Action Plan is consistent with several of our leading practices for collaboration, it is not being used by all federal agencies. As we reported in 2012, key considerations for implementing interagency collaborative mechanisms include whether participating agencies have clarified roles and responsibilities, developed ways to continually update and monitor written agreements on how agencies coordinate, and identified how leadership will be sustained over the long term. We have found that agencies that articulate their agreements in formal documents, such as plans, can strengthen their commitment to working collaboratively and that transitions and inconsistent leadership can weaken coordination. A written document can incorporate agreements reached among participants in any or all of the following areas: leadership, accountability, roles and responsibilities, and resources. Although the Interim Federal Action Plan reflects several of these practices, it is not being used to lead overall federal efforts and has not been updated to reflect current roles and responsibilities in the watershed, in particular the transition of coordination from the plan’s federal leadership committee to the Delta Plan Interagency Implementation Committee and the state’s increased focus on the Delta. Further, the Delta Plan Interagency Implementation Committee is not an interagency coordination mechanism for the federal and state agencies to communicate complete information for the entire watershed. Updating the Interim Federal Action Plan, whether by revising or refocusing it, could help federal entities more fully coordinate with and support nonfederal restoration efforts across the watershed. EPA and Interior officials stated that coordination among the regions is challenging because agency missions and activities can be siloed. Officials from the Delta Stewardship Council told us that, without such coordination, they found it difficult to plan resources and work with federal entities. In addition, 31 of the 48 federal and nonfederal entities that responded to our survey questionnaire indicated that coordination of goals for the entire watershed was a very great or great challenge. Moreover, according to our analysis of questionnaire responses, 29 of 48 federal and nonfederal entities indicated that coordination among partners at different levels of government was a very great or great challenge.
For example, in narrative responses to our survey questionnaire, one respondent stated that restoration projects can be delayed because many federal and nonfederal entities focus narrowly on their own missions without considering those of other stakeholders. By updating or revising the plan to outline and reflect entities’ roles and responsibilities in light of the changes in the state’s role and other relevant developments since 2009, and notifying all participating entities to ensure they are aware of the plan and their role in it, Interior and CEQ could help clarify the federal government’s role in supporting restoration efforts in the watershed and help ensure the effective use of federal resources in these efforts. Federal and Nonfederal Entities Have Developed Measurable Goals and Approaches to Assess Progress for Restoration Efforts in the Watershed Federal and nonfederal entities have developed measurable goals for comprehensive restoration efforts in the Bay and Delta and for specific restoration efforts in the upper watershed. Federal and nonfederal entities have also developed approaches to assess progress for restoration efforts in the Bay and Delta and for some goals in the upper watershed. In the Bay and Delta, the San Francisco Estuary Partnership uses indicators to rate the goals as good, fair, or poor, and in 2015, the partnership rated the overall state of the Delta as being in fair to poor condition and the Bay as healthier. Federal and Nonfederal Entities Have Developed Measurable Goals for Comprehensive Restoration Efforts in the Bay and Delta and for Specific Efforts in the Upper Watershed Federal and nonfederal entities have developed measurable goals for comprehensive restoration efforts in the Bay and Delta through the coordinating bodies for these areas and have developed measurable goals for specific restoration efforts in the upper watershed. The coordinating bodies have documented the goals in plans, which often contain action items aimed at achieving those goals. In addition, all three of the regions share some similar goals, such as ecosystem restoration, climate resilience, and water quality. Measurable Goals for the Bay Federal and nonfederal entities have developed measurable goals for comprehensive restoration efforts in the Bay through the San Francisco Estuary Partnership. The partnership documented these goals in the CCMP, which provides a 35-year vision for restoring the estuary. The most recent CCMP, updated in 2016, contains four long-term goals related to broad restoration efforts: ecosystem restoration, climate resilience, water quality and quantity, and governance. Each goal contains three objectives, which detail desired outcomes that mark progress toward achieving the goals. To achieve the goals and objectives, the plan also identifies 32 actions—each of which can be associated with multiple goals and objectives—that lay out 112 priority tasks for the next 5 years. Figure 6 shows an example of a priority task and how it relates to the actions, objectives, and goals. The 2016 CCMP also includes measurements to track progress for all actions and links the plan’s goals, objectives, and actions to 33 environmental indicators established by the partnership. Federal and nonfederal entities have developed measurable goals for comprehensive restoration efforts in the Delta through the Delta Stewardship Council and documented them in the Delta Plan, first published in 2013.
The Delta Plan contains six goals and establishes funding principles to support implementation of the Delta Plan as a whole. Four of the goals—protecting, restoring, and enhancing the Delta ecosystem; reducing climate-related risks; improving water quality; and governance—are similar to those of the CCMP. To accomplish all six goals and meet the funding principles, the Delta Plan sets forth 87 provisions for various entities, such as local, state, and federal agencies. Fourteen of these provisions are legally enforceable regulatory policies. The Delta Plan also has 159 performance measures associated with these goals and provisions. For example, under improving water quality, the Delta Plan includes a provision related to priority habitat restoration areas. (See fig. 7.) Federal and nonfederal entities developed measurable goals for specific efforts in the upper watershed and documented these goals in plans specific to entities, projects, or restoration topics. These plans include goals similar to those outlined in the CCMP or the Delta Plan—such as ecosystem restoration, climate resilience, and improved water quality—and some of the goals have associated performance measures. For example, several federal and nonfederal entities documented in the Central Valley Joint Venture Implementation Plan the acreage they would like to enhance annually for conserving migratory bird habitat—a specific ecosystem restoration effort. Another group, California’s Central Valley Flood Protection Board, documented in the state’s Central Valley Flood Protection Plan that it would like to increase infrastructure performance in populous areas to achieve a more resilient flood management system—an example of a specific resiliency goal. This goal contains tracking metrics, including measuring the miles of levees repaired or improved. In addition, Interior produces metrics and reports for activities under the Central Valley Project Improvement Act. Federal and Nonfederal Entities Have Developed Approaches to Assess and Report Progress toward Some Measurable Goals in the Bay, Delta, and Upper Watershed Federal and nonfederal entities have developed indicators to assess and report progress toward some of the measurable goals in the Bay, and have applied these in the Delta as well. In the Bay, the San Francisco Bay Regional Water Quality Control Board has implemented regional monitoring pilot studies since 1989, and in 1992 it established a regional monitoring program led by a nonprofit science center. In 1991, in addition to water quality, the science center began reporting on the monitoring and assessment of ecosystem restoration and resilience in the estuary, such as changes over time in pollution, dredging, and numbers of endangered and threatened fish and wildlife. The San Francisco Estuary Partnership then used the science center’s restoration and resilience assessments to create the 1993 CCMP goals. At the same time, partly in response to a recommendation from the CCMP, the science center became the San Francisco Estuary Institute, a nonprofit scientific organization that performs monitoring to inform watershed management. The San Francisco Estuary Partnership began reporting on water quality progress in 2011. The first of these reports, titled the State of San Francisco Bay, focused on the Bay. In the Delta, the Delta Stewardship Council in 2013 began working to coordinate scientific monitoring efforts based on the goals outlined in the Delta Plan.
Scientific monitoring efforts in the Delta include a regional water quality monitoring program, begun by the Central Valley Regional Water Quality Control Board in 2015. The monitoring efforts also include the Interagency Ecological Program, a consortium of state and federal agencies that have collaborated to monitor and research ecological conditions in the Delta since the 1970s, including by contributing to the CALFED science program. Based on the results of these separate monitoring efforts, the Delta Stewardship Council has a process in place to periodically update the Delta Plan’s performance measures and goals. In 2015, the San Francisco Estuary Partnership updated its assessment and report to include both the Bay and the Delta and renamed it State of the Estuary. The partnership plans to update these reports approximately every 5 years and include both the Bay and the Delta. For the 2015 report, more than 100 scientists from entities such as the San Francisco Estuary Institute, the U.S. Geological Survey, and the Delta Stewardship Council collaborated to monitor and assess estuary health against environmental indicators established by the partnership. The report includes 17 indicators specifically for the Bay, 8 indicators specifically for the Delta, and 4 estuary-wide indicators (see table 1). The report rates the status of the indicators—such as the safety of water for swimming, the safety of fish to eat, and the level of harbor seal populations—as good, fair, or poor. For example, the State of the Estuary report assessed the regional extent of tidal marsh in the Bay as “fair” and “improving” and the Yolo Floodplain Flows in the Delta as “poor”; however, the report did not detail the partnership’s methodology for distinguishing between “fair” and “poor” assessments. On the basis of its assessment, the partnership rated the Delta and Suisun Bay ecosystems as being in fair to poor condition and the Bay as healthier. In the upper watershed, progress assessment is tied to entity- and topic-specific plans and is not summarized by any one group or in one report. For example, California’s Central Valley Flood Protection Board assigns agencies to track data against the tracking metrics for the goals of the Central Valley Flood Protection Plan. In another example, the state’s California EcoRestore initiative provides progress reports on restoration projects to mitigate damage caused by water conveyance programs. The Status of All Restoration Efforts across the Watershed and Total Expenditures Are Unknown Information on the status of all restoration efforts across the watershed, including their accomplishments, is unknown because, while the information is being developed, complete and current information is not being fully collected or reported. Total expenditures for fiscal years 2007 through 2016 are unknown, in part because federal reports do not include complete or reliable data for federal and state expenditures in the watershed. Information on the Status of All Restoration Efforts across the Watershed Is Being Developed but Is Not Complete and Current Information on the status of all restoration efforts across the watershed, including their accomplishments, is unknown because complete and current information is not being fully collected or reported.
At the state level, the San Francisco Estuary Institute and the Delta Stewardship Council each maintains a database with information about federal and nonfederal restoration efforts, including those implemented during fiscal years 2007 through 2016, but neither database contains data on all restoration efforts in the watershed. Specifically: EcoAtlas. The San Francisco Estuary Institute, in cooperation with the San Francisco Bay Joint Venture, maintains the EcoAtlas database, which is the more comprehensive of the two databases. EcoAtlas integrates stream and wetland maps, restoration information, and monitoring results with land use, transportation, and other information important to the state’s wetlands. According to institute officials, the database was originally designed to focus on the Bay and includes information on nearly every restoration effort in the Bay. According to these officials, the institute is working to update EcoAtlas and gather information on all efforts across the watershed. Officials from several federal and nonfederal entities—including NOAA, the institute, the San Francisco Bay Joint Venture, and the Central Valley Joint Venture—told us that the completeness of EcoAtlas’s data on restoration efforts in the Delta is catching up to that for the Bay, but considerable work remains to gather more complete data in the upper watershed, such as by collecting project information from entities conducting restoration work there. DeltaView. The Delta Stewardship Council’s DeltaView database collects state and federal data on efforts directly related to implementing the state’s Delta Plan goals. As a result, DeltaView does not include information for all restoration efforts in the Delta since, for example, local government agencies and other nonfederal entities may also conduct restoration efforts in the Delta. According to its website, DeltaView is designed to track and report on Delta Plan progress and help the Delta Plan Interagency Implementation Committee make more informed decisions about implementing the Delta Plan. According to council officials, because it is designed to focus on the Delta, DeltaView does not include efforts in the Bay or upper watershed unless they directly affect the Delta. Further, while officials who manage EcoAtlas and DeltaView take steps to check the completeness of the data, such as using regional administrators to oversee project completeness for EcoAtlas or following up with agency officials annually for DeltaView, they stated it is difficult to confirm the databases’ completeness because they largely rely on self-reporting by different federal and nonfederal entities. Council officials stated that while the information in EcoAtlas is generally more comprehensive, DeltaView’s information on restoration efforts in the Delta is more complete than EcoAtlas’s information about the Delta, and they are working with the institute on ways to merge the two databases to make more complete information available in a single database. On the federal level, section 105 of the CALFED Act requires Interior, in cooperation with the Governor of California, to submit a report annually to Congress that, among other things, describes the status of implementation of all CALFED components, such as water quality and ecosystem restoration across the watershed. Under the act, the report is to include the progress made in meeting certain goals as well as accomplishments in achieving certain CALFED objectives during the past fiscal year.
However, according to Interior officials, the department issued the most recent of these reports in February 2009. Interior officials stated that the California Bay-Delta Authority used to collect information on all the projects in the watershed and prepare and submit these reports. However, since the California Bay-Delta Authority was abolished and replaced by the Delta Stewardship Council, Interior does not obtain this information from any state entity, although Interior is still required to submit the report annually to Congress. Because Interior has not issued a report since 2009, when the California Bay-Delta Authority was abolished, and because other sources of information on restoration efforts such as EcoAtlas are not yet fully developed, no complete or current information on the progress of restoration efforts is available. According to Interior officials, the requirement to report is outdated and the department does not have information to report because it stopped obtaining data from the California Bay-Delta Authority after the authority was abolished. However, Interior and other federal agencies continue to work with state agencies on the state’s current Delta Plan, which replaced the state’s CALFED plans. Also, according to Interior officials, the department has not reached out to the state to identify new sources of information in light of the changes in state plans and agency structure. Section 105 of the CALFED Act requires Interior, in consultation with California’s governor, to report annually on “the status of implementation of all components of the Calfed Bay-Delta Program.” The law goes on to identify the specific objectives on which Interior is to report, which include activities that Interior and other federal agencies are currently carrying out, such as research and wetland restoration. According to respondents to our survey questionnaire, having such information could help stakeholders make more informed decisions about these efforts. Specifically, according to our analysis of responses, 32 of 48 federal and nonfederal entities indicated that it would be very or extremely important to have reports on progress of federal and nonfederal entities in implementing restoration activities. In addition, according to our analysis of responses, 27 of 48 federal and nonfederal entities indicated that it would be very or extremely important to have reports on accomplishments of federal and nonfederal entities in achieving the objectives of restoration activities. Without attempting to obtain and report state information as required under section 105 of the CALFED Act, Interior will not have reasonable assurance that it is providing Congress, or others, with the information needed to monitor federal and nonfederal restoration activities. Total Expenditures for All Restoration Efforts in the Watershed Are Unknown in Part Because Federal Reporting Is Incomplete Total expenditures for all restoration efforts in the watershed for fiscal years 2007 through 2016 are unknown in part because federal reports do not include complete or reliable expenditure data, and other tracking mechanisms are still developing this information. San Francisco Estuary Institute officials stated that EcoAtlas recently began to include expenditure data for the on-the-ground costs of implementing restoration projects, but overall expenditure data on these projects are still incomplete.
In addition, as discussed earlier, EcoAtlas is still in the process of gathering complete information for efforts in the Delta and upper watershed. DeltaView includes federal and state expenditure data for efforts in the Delta; however, according to Delta Stewardship Council officials, it does not include data for all restoration efforts in the Delta, such as those funded by nongovernmental organizations. The institute’s plans to expand EcoAtlas to include expenditures and data on efforts across the watershed, including by working with the council to merge the two databases, indicate that entities are taking steps to gather more complete information. As they continue to do so, more information will be available to report on expenditures for restoration efforts in the watershed. OMB’s interagency budget crosscut reports for CALFED activities are one source of information on federal and state expenditures across the watershed; however, these reports do not contain complete or accurate expenditure data. Section 106 of the CALFED Act requires OMB to submit a financial report annually to Congress, in coordination with the Governor of California and certified by the Secretary of the Interior, that includes, among other things, an interagency budget crosscut report. The report is to display each participating federal agency’s proposed budget for the upcoming fiscal year to carry out CALFED activities and identify all expenditures since 1998 by the federal and state governments to achieve the objectives of CALFED, which, as noted previously, include water quality and ecosystem restoration components. The report is also to contain a detailed accounting of all funds received and obligated by all federal and state agencies responsible for implementing CALFED activities during the past fiscal year. According to OMB staff, since California abolished the California Bay-Delta Authority in 2009, the state no longer submits state data for the crosscut report, so the agency includes only data reported by federal agencies in the crosscut reports and tables. OMB staff said this is because the state no longer has an agency organized around reporting this information. The Delta Stewardship Council has responsibility for the former state agency’s activities, but given its narrower focus on the Delta, it is unclear whether the council could submit data to OMB for the entire watershed. According to OMB staff, OMB has not asked the Delta Stewardship Council or any other state entities to submit the data they do have to OMB; however, a council official told us the council would like an opportunity to work on the crosscut report. Survey responses indicate that the state crosscut data could be helpful to federal and nonfederal entities. We asked survey respondents to indicate how important, if at all, they thought reports on all federal or state expenditures and funding committed to be spent (i.e., obligations) on restoration activities would be when they carry out activities related to their responsibilities in the San Francisco Bay Delta watershed. According to our analysis of survey responses, 24 of 48 federal and nonfederal entities indicated that it would be very or extremely important to have reports on both federal and state expenditures.
Also, according to our analysis of survey responses, 27 of 48 federal and nonfederal entities indicated that it would be very or extremely important to have reports on federal obligations, and 24 of the 48 entities indicated that it would be very or extremely important to have reports on state obligations. Without attempting to obtain and report state information as required under section 106 of the CALFED Act, OMB will not have reasonable assurance that it is providing Congress with the information it needs to monitor federal and nonfederal restoration expenditures. In addition, while there was written guidance for submitting crosscut data for fiscal years 1998 through 2011, OMB has not updated that guidance since it expired in 2011 to reflect who should report what data. Instead, according to OMB staff, it has generally provided oral instruction to agencies on what data to submit. As a result, we found that federal agencies reported different types of data for OMB to include in the budget crosscut and that the budget crosscut was therefore not reliable for the purposes of reviewing total expenditures. Some federal agencies, including EPA and the U.S. Geological Survey, note in their crosscut submissions that the data provided are funding levels or allocations, rather than expenditures. In addition, Interior reported that it submits obligations, which are also different from expenditures. As a result, the crosscut reports and tables may include a mix of federal budget authority, obligations, and expenditures, depending on the type of data the agencies choose to submit. According to OMB staff, while OMB reports federal budget authority data for the most recent fiscal year in the crosscut report, OMB relies on agencies to submit data on prior year expenditures for inclusion in the crosscut. However, the crosscut report itself labels the data reported as “enacted” dollars—or budget authority—but does not mention expenditures. Some federal officials said that clearer guidance would be helpful. For example, USDA officials stated that it would be helpful for OMB to clarify whether to submit estimated funding allocations or actual obligations and to provide more specific information about the types of restoration projects to include because the data USDA currently submits provide a narrow scope for the agency’s restoration-related work in the watershed. The lack of updated guidance is inconsistent with federal standards for internal control, which call for an agency to design control activities to achieve objectives and manage risks. Such control activities include clearly documenting internal controls, and the documentation may appear in management directives, administrative policies, or operating manuals. Because OMB has not updated its written guidance since it expired in 2011 to clearly communicate what data agencies should report, its mechanism for tracking data—the crosscut reports and tables—does not include complete or reliable expenditure data. As a result, congressional and other federal and nonfederal decision makers may not have the information they need to determine that resources are being used efficiently or effectively. For example, in a September 2017 report, Interior’s Office of Inspector General found that Reclamation obtained $50 million over 7 years for CALFED-related purposes using a process that it did not disclose to Congress through available mechanisms, including OMB’s crosscut reports.
According to the Inspector General’s report, these crosscuts assist the President in considering the necessary and appropriate level of funding for each of the agencies in carrying out its responsibilities under CALFED. By directing its staff to update its written guidance for federal and state agencies on submitting data for its budget crosscut reports, OMB will have more reasonable assurance that it is helping those agencies provide current, complete, and accurate data to help congressional and other decision makers achieve restoration objectives. Federal and Nonfederal Entities Identified Several Key Factors, Such as Competing Interests, Coordination, and Climate Change, That May Limit Restoration Several factors may limit restoration progress or pose risks to the long-term overall success of such efforts in the San Francisco Bay Delta watershed, according to our analysis of questionnaire responses from 48 federal and nonfederal entities. These factors reflect characteristics of watersheds in other parts of the country that we have previously discussed, including funding constraints and the effects of climate change (see fig. 8). Federal and nonfederal entities also identified up to three factors that pose the greatest risks to the long-term overall success of water quality improvement and ecosystem restoration efforts in the San Francisco Bay Delta watershed. Specifically, based on our analysis of the survey results, we found that federal and nonfederal entities consistently identified the following risks: Competing interests of water users, including residential, commercial, agricultural, and environmental. According to our analysis of survey responses, this particular risk varies by geographic area in the watershed. For example, 20 of 25 entities that indicated they conduct restoration work in the Sacramento River Watershed—part of the upper watershed region—identified this factor as a greatest risk. By comparison, 19 of 34 entities that indicated they conduct restoration work in the Bay identified this factor as a greatest risk. In its survey responses, one nonfederal entity indicated that the distribution of water and other natural resources among competing interests is not clearly defined or does not occur in a manner that satisfies all parties. Therefore, according to this entity, stakeholders who are not satisfied with natural resources distribution may be hesitant to invest time and money in conservation practices that benefit water quality. In another survey response, a federal entity described competing interests as one of the biggest roadblocks in planning and implementing water quality improvement and ecosystem restoration in the Bay Delta region. This entity explained that the region’s freshwater supply is extremely limited and that competition for it has resulted in several lawsuits and delays for restoration projects. Obtaining sufficient federal funding for water quality improvement and ecosystem restoration activities. Of the 48 survey respondents, 24 indicated that this factor is one of the greatest risks to long-term overall success of water quality improvement and ecosystem restoration efforts. According to one nonfederal entity’s survey response, funding for ecosystem restoration in the Bay area traditionally has come from a mix of federal and state sources.
For example, the entity said a local source that will provide nearly $500 million over 20 years was recently established but needs to be leveraged by significant state and federal dollars to meet the estimated $1.5 billion needed for restoration in the Bay area. In its response to our survey, one federal entity stated that federal funding is extremely limited for restoration activities that are not part of mitigation efforts. The federal entity also stated that federal funding for long-term monitoring of restoration success and water quality improvement is difficult to sustain because these efforts are not eye-catching and do not provide quick results. A nonfederal entity stated that many state entities rely on federal grants to perform activities that result in improved water quality and ecosystem restoration. Planning for the effects of climate change. In their survey responses, 24 of 48 entities indicated that this factor is one of the greatest risks to long-term overall success of water quality improvement and ecosystem restoration efforts. One nonfederal entity said expected reductions in the Sierra Nevada snowpack—the largest source of water supply for the watershed—will result in increased demand on limited local water sources. Other respondents noted a need to consider addressing the effects of climate change at a high level. For instance, one nonfederal entity said successfully planning for climate change includes planning and coordinating at the watershed level, not at the project or jurisdictional level. Another nonfederal entity said the potential impact of sea level rise is great and ecosystem restoration solutions will require much more regional planning and agreement than more traditional engineering solutions. However, entities also acknowledged the challenges associated with planning for the effects of climate change with incomplete information. For example, in its response to our questionnaire, one entity stated it is difficult to understand the impact on water quality resulting from conservation practices on working lands, at both the private landowner level and the watershed level, if the projects have not incorporated climate change impacts such as flooding and sediment erosion. The factors identified by federal and nonfederal entities that may limit or pose a risk to restoration efforts are generally consistent with our prior work on large-scale ecosystem restoration efforts in other parts of the country (see Related GAO Products at the end of this report). For example, we previously reported that similar factors, such as funding constraints and the effects of climate change, may limit restoration efforts in the Great Lakes and Chesapeake Bay. Survey responses also indicate that some of these risks can be interrelated. For example, one federal entity said that while certain shoreline restoration and levee stabilization projects could ameliorate the effects of climate change, finding adequate funding to plan for and implement such projects is extremely difficult. According to this entity, all the competing interests and limited freshwater supply in the watershed further exacerbate these difficulties. In response to our questionnaire, federal and nonfederal entities identified what they consider to be the most important action that could be taken at a federal level to help improve restoration efforts in the watershed. For example, seven entities mentioned actions related to streamlining or coordinating federal permitting processes.
Half of the entities that responded to our questionnaire also indicated a need for actions related to federal funding, and four entities indicated a need to use the best available science to direct restoration efforts. Conclusions The complex nature of the restoration efforts in the San Francisco Bay Delta watershed demands a high level of coordination across a large number of entities and competing interests. The results of federal and nonfederal entities working together can be seen in parts of the watershed, such as the Bay, where this work has resulted in the development of comprehensive regional strategies, sources of funding for some restoration projects, an expanding regional database, and an inventory of potential projects. In other parts of the watershed, particularly the Delta, coordination has wavered. The CALFED Act was enacted in 2004 to implement, at the federal level, a federal-state partnership for restoring the San Francisco Bay Delta watershed. When the state of California withdrew from the originally structured CALFED federal-state partnership in 2009, the effort to coordinate across the entire watershed transitioned and the focus of coordination became the Delta Plan, a state-led effort. Key federal entities, including Interior and CEQ, continue to have interests across the watershed, such as coordinating or conducting programs and projects and expending resources. To that end, in 2009 they developed a unifying vision for the federal government through the Interim Federal Action Plan. However, as the state continues to change its focus within the watershed, the Interim Federal Action Plan has become outdated, and not all relevant federal entities are using it. By updating or revising the plan to outline and reflect entities’ roles and responsibilities in light of the changes in the state’s role and other relevant developments since 2009, and by notifying all participating entities to ensure they are aware of the plan and their role in it, Interior and CEQ could help clarify the federal government’s role in supporting restoration efforts in the watershed and help ensure the effective use of federal resources in these efforts. In addition, since California stopped participating in the originally structured CALFED partnership, information on projects and expenditures for restoration and other activities in the watershed has not been reported completely, or in some cases at all. Although California abolished the California Bay-Delta Authority, the requirements for Interior to report on the status of implementation of all CALFED components, including water quality and ecosystem restoration efforts, and for OMB to submit a financial report, including an interagency budget crosscut report, still exist, and information about related restoration efforts and expenditures remains unknown. By coordinating with the appropriate state entities to obtain and report the information available to meet the CALFED Act’s requirements, Interior and OMB would have more reasonable assurance that they are providing the information congressional and other decision makers need to monitor the restoration efforts and associated expenditures. Further, by directing staff to update OMB’s written guidance for federal and state agencies on submitting data for its budget crosscut reports, OMB would have more reasonable assurance that it is helping those agencies provide current, complete, and accurate data to help decision makers achieve restoration objectives.
Recommendations for Executive Action We are making seven recommendations—two each to Interior and CEQ to address issues with the Interim Federal Action Plan; one each to Interior and OMB to obtain and report information; and one to OMB to update its budget crosscut guidance. Specifically: The Secretary of the Interior should work with the Chair of CEQ to update or revise the Interim Federal Action Plan for the California Bay-Delta to outline and reflect entity roles and responsibilities in light of changes in the state of California’s role and other relevant developments since 2009. (Recommendation 1) The Secretary of the Interior should notify all participating entities to ensure they are aware of the Interim Federal Action Plan and their role in it. (Recommendation 2) The Chair of CEQ should work with the Secretary of the Interior to update or revise the Interim Federal Action Plan for the California Bay-Delta to outline and reflect entity roles and responsibilities in light of changes in the state of California’s role and other relevant developments since 2009. (Recommendation 3) The Chair of CEQ should notify all participating entities to ensure they are aware of the Interim Federal Action Plan and their role in it. (Recommendation 4) The Secretary of the Interior should coordinate with appropriate state entities to obtain and report the information available to meet the requirements under section 105 of the CALFED Act. (Recommendation 5) The Director of OMB should coordinate with appropriate state entities to obtain and report the information available to meet the requirements under section 106 of the CALFED Act. (Recommendation 6) The Director of OMB should direct staff to update OMB’s written guidance for federal and state agencies on submitting data for the budget crosscut reports OMB is required to submit under section 106 of the CALFED Act. (Recommendation 7) Agency Comments, Third-Party Views, and Our Evaluation We provided a draft of this report for review and comment to CEQ, EPA, OMB, and the Departments of Agriculture, Commerce, Defense, and the Interior. We also provided the California Delta Stewardship Council a draft of this report for review and comment. Interior provided written comments and stated that it partially concurred with our three recommendations to the department; Interior also provided technical comments, which we incorporated into the report as appropriate. In an email from CEQ’s Deputy General Counsel, CEQ provided technical comments, which we incorporated into the report as appropriate, but the agency neither agreed nor disagreed with our recommendations to it. In oral comments provided on August 8, 2018, OMB neither agreed nor disagreed with our two recommendations to the agency, but OMB staff suggested some additional language to the recommendations. In addition, USDA and Commerce provided technical comments, which we incorporated into the report as appropriate. Defense and EPA informed us that they had no comments on the draft report. The California Delta Stewardship Council provided written comments stating that its staff generally agreed with the “sum” of the recommendations in the report. The council also provided technical comments, which we incorporated into the report as appropriate. 
In its written comments, reproduced in appendix IV, Interior stated that the department appreciated our review of the coordination of watershed restoration efforts among federal and nonfederal entities and that it partially concurred with our three recommendations to the department. Specifically, regarding our first two recommendations to update or revise the Interim Federal Action Plan and notify all participating entities of their role in the plan, Interior stated that the department believes revisiting the Interim Federal Action Plan is not the most efficient course of action because the state-led Delta Plan Interagency Implementation Committee now serves as the coordination group. Interior stated that it will continue to actively participate in the committee, which includes participation and leadership from federal agencies at the regional and Washington office levels. However, as we discuss in the report, the committee focuses on only one region of the watershed (the Delta), and federal agencies fund and carry out restoration efforts across all three regions of the watershed. Further, as we discuss in the report, the President’s fiscal year 2019 budget states that federal activities are coordinated through the Interim Federal Action Plan rather than the state-led committee. Also, Interior’s letter states that its bureaus are concurrently engaged with the state of California in multiple activities in the Bay Delta that span their respective mission areas. This provides further support for the plan to be updated or revised to include these types of activities. Thus, we continue to believe that Interior should update or revise the plan to better reflect changes in the state’s role and other relevant developments since 2009. Regarding our third recommendation to Interior that it coordinate with the state to meet reporting requirements, Interior stated that the California Delta Stewardship Council compiles and reports on funding information and progress for federal and state agencies and that Interior could coordinate with the state on information not reported by the council. As we discuss in the report, the council’s reporting efforts focus on only the Delta, although federal funding and efforts span the entire watershed; therefore, the council’s reporting efforts cannot fully address Interior’s reporting requirements. In addition, Interior has not reached out to state entities for this information since 2009, when the state agency from which Interior had previously obtained state data was abolished. Thus, we continue to believe that Interior should coordinate with the appropriate state entities to obtain and report the information available to meet the CALFED Act’s reporting requirements. We note that Interior said it would actively participate in the Delta Plan Interagency Implementation Committee and could seek to coordinate with the state on information not reported by the Delta Stewardship Council, and we are encouraged that the department recognizes the need to take these actions. In oral comments regarding our first recommendation to OMB that it coordinate with the state to meet reporting requirements, OMB staff said it is unclear whether the Director of OMB has the authority to require or compel the state or its agencies to provide data to OMB on restoration and other projects they are carrying out. 
The staff suggested that we revise the recommendation to state that the Director of OMB should “consider whether there are additional opportunities to” coordinate with appropriate state entities to obtain and report the available information. Our recommendation is for OMB to coordinate with appropriate state entities, not to require or compel them to provide data. In addition, as stated in its written comments (reproduced in appendix V), the California Delta Stewardship Council—the state agency responsible for the activities of the abolished California Bay-Delta Authority—would welcome the opportunity to coordinate with OMB and contribute to the budget crosscut reports. Furthermore, section 106 of the CALFED Act requires OMB to submit a financial report annually to Congress, in coordination with the Governor of California, that includes an interagency budget crosscut report. Thus, we believe that the recommendation is worded appropriately and captures the actions that OMB should take to coordinate with the appropriate state entities to obtain and report the information available to meet the CALFED Act's reporting requirements. In oral comments regarding our second recommendation to OMB that it update its written guidance for federal and state agencies on submitting data for the budget crosscut reports, OMB staff said that the agency does not have the expertise to validate or verify the quality of the information agencies submit and is not confident that the data collected will be reliable. The staff said that other entities with day-to-day experience with the programs and data and with the relevant statutory authority may be in a better position to obtain, report, and verify the quality of restoration data. The staff suggested that we revise the recommendation to state that the Director of OMB should “assess whether to” update OMB’s written guidance for federal and state agencies on submitting data for the budget crosscut reports. However, OMB’s current approach is resulting in the reporting of unreliable data. As reported above, OMB has generally provided oral instruction to agencies since its written guidance expired in 2011; as a result, the crosscut reports and tables may include a mix of federal budget authority, obligations, and expenditures. Further, section 106 of the CALFED Act requires, among other things, that OMB identify all expenditures since 1998 by the federal and state governments to achieve CALFED objectives. Therefore, we continue to believe that OMB should update its written guidance to clarify the type of data that agencies should submit in order to ensure it is reporting the data required by the CALFED Act. We note that our recommendation does not direct OMB staff to validate or verify the quality of the information; instead, it states that OMB should clarify in guidance what data agencies should provide. In addition, if OMB determines it is appropriate, updated written guidance could advise agencies to validate and verify the data before submitting it to OMB.
Commenting on the themes outlined in the recommendations, the council stated the following: No entity in California has the sole responsibility or authority for managing water supply and the Delta ecosystem; instead, authority, expertise, and resources are spread out among a cadre of federal, state, and local agencies. The council further said that its Delta Plan Interagency Implementation Committee plays a vital coordination role for the 17 state and federal agencies operating in the Delta, that federal participation is critical to the committee’s success, and that it encourages federal agencies to continue to attend and actively participate in the committee. There is a history of coordination in the Bay Delta systems, as evidenced by events such as the State of the Estuary Conference and the Bay Delta Science Conference, as well as the CCMP. Given that the upper watershed currently lacks a collaborative structure such as the implementation committee, the council said that further exploration should be done as to how this gap could be filled. The council is not currently in contact with CEQ and OMB and would welcome the opportunity to coordinate with them should a revised Interim Federal Action Plan be pursued. The council also stated that, to the extent possible, such a revised plan should consider and build on existing planning frameworks such as the Delta Plan and the CCMP. As stated in the report, the council welcomes the opportunity to contribute to the CALFED budget crosscut reports. In addition, the council made two specific comments on the report’s description of the Delta. First, it stated that our report is thorough in discussing many aspects of the watershed, but it somewhat neglects the importance of levees, particularly in the Delta. While we provide an overview of levees in the background section, a more detailed discussion of these and other water infrastructure facilities is beyond the scope of this review, which is to examine restoration efforts in the watershed and does not include detailed examination of issues related to water supply. Second, the council stated that the report should mention and consider characteristics associated with the Delta as an evolving place, which refers to the council’s efforts to incorporate the interaction between environmental and social factors—such as cultural values and socioeconomic issues—into decision making for the Delta. We believe our discussion of federal and nonfederal coordination roles within and across the watershed’s three major regions, including the Delta, appropriately considers the interaction between environmental and social factors, within the scope of this review. We are sending copies of this report to the appropriate congressional committees; the Chair of CEQ; the Secretaries of Agriculture, Commerce, Defense, and the Interior; the Administrator of EPA; the Director of OMB; the Executive Officer of the California Delta Stewardship Council; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Selected Federal and Nonfederal Entities with Restoration-Related Roles in the San Francisco Bay Delta Watershed Many federal and nonfederal entities, including state and local government agencies and nongovernmental organizations, have roles related to water quality improvement and ecosystem restoration efforts in the San Francisco Bay Delta watershed. Different combinations of federal and nonfederal entities work throughout the watershed and its three major geographic areas, which are the San Francisco Bay and its local watershed (Bay), the Sacramento-San Joaquin Delta (Delta), and the upper watershed, which includes California’s Central Valley and the western slope of the Sierra Nevada Mountains. See below for a list of federal and nonfederal entities and a brief description of some of their restoration-related roles in the watershed. We selected these entities based on our review of documents provided by, and interviews with, federal and nonfederal entities. Selected Federal Entities with Restoration-Related Roles in the Watershed Several federal entities have roles related to water quality improvement and ecosystem restoration efforts in the watershed. All federal agencies listed are signatories to the 2009 memorandum of understanding, unless otherwise noted. Federal agencies and some of their restoration-related roles include the following: Executive Office of the President. Council on Environmental Quality (CEQ). Under the 2009 memorandum of understanding, CEQ is to work with the Secretary of the Interior in coordinating the development and implementation of federal policy and initiatives in Bay-Delta matters and is the co-chair of the Federal Bay-Delta Leadership Committee. Office of Management and Budget (OMB). OMB is not a signatory to the 2009 memorandum of understanding, but under the Calfed Bay-Delta Authorization Act (CALFED Act), OMB is required to annually submit a financial report to Congress, in coordination with the Governor of California and certified by the Secretary of the Interior, that includes, among other things, an interagency budget crosscut report that identifies all expenditures since 1998 by the federal and state governments to achieve the objectives of the Calfed Bay-Delta Program (CALFED). CALFED program components include, among other things, water quality and ecosystem restoration. U.S. Army Corps of Engineers. According to Corps officials, the Corps plans and implements projects, including ecosystem restoration projects; participates in regional planning, while using its own return-on-investment analysis for prioritizing projects; and helps the state water agencies maintain levees. The Corps also issues permits for the discharge of dredged or fill material under section 404 of the Clean Water Act. U.S. Department of Agriculture (USDA). Natural Resources Conservation Service (NRCS). Through general conservation programs and its targeted Bay Delta Initiative, NRCS and its local partners aim to address the critical water quantity, water quality, and habitat restoration needs of the Bay Delta region by implementing voluntary conservation practices on private lands. NRCS provides agricultural producers technical and financial assistance in the Bay Delta region to implement conservation practices and establish conservation easements that improve water quality and quantity and restore and protect wetland, riparian, and wet meadow habitat. U.S. Forest Service. The Pacific Southwest Region of the U.S.
Forest Service manages 20 million acres of National Forest land in California. National forests supply 50 percent of the water in California and form the watershed of most major aqueducts and more than 2,400 reservoirs throughout the state. According to U.S. Forest Service officials, the agency’s management actions on National Forest land in California are focused on ecological restoration, with the goal of retaining and restoring the ecological resilience, including water quality, of terrestrial and aquatic ecosystems. According to these officials, this work is often accomplished using an “all lands” approach to restoration, by coordinating and collaborating across forests and wildlands regardless of ownership. Ecological restoration management actions that contribute to water quality include meadow, river, and riparian restoration to improve watershed function, as well as fuels reduction activities, such as forest thinning and prescribed fire. According to these officials, many forest lands have dense fuels and are highly susceptible to severe wildfire, which causes increased erosion rates and sedimentation and negatively affects water quality and delivery. U.S. Department of Commerce. National Oceanic and Atmospheric Administration (NOAA). NOAA implements the Endangered Species Act for certain species. Under section 7 of the act, federal agencies must ensure that any action they authorize, fund, or carry out is not likely to jeopardize the continued existence of any endangered or threatened species or result in the destruction or adverse modification of its critical habitat. To fulfill this responsibility, federal agencies must consult with NOAA’s National Marine Fisheries Service or the U.S. Fish and Wildlife Service, depending on the affected species, to assess the potential effects of proposed actions. Formal consultations between federal agencies and the National Marine Fisheries Service or U.S. Fish and Wildlife Service are required where a proposed action could have an adverse effect on listed species or designated critical habitat; these consultations conclude with issuance of biological opinions by the National Marine Fisheries Service or U.S. Fish and Wildlife Service. NOAA also obtains, manages, and expends funding to conduct habitat restoration. According to NOAA officials, NOAA’s Restoration Center has directed federal funds toward restoration projects in the Bay Delta. In addition, funds from natural resource damage assessments have been used for habitat restoration in San Francisco Bay, according to NOAA officials. U.S. Department of the Interior. Under the 2009 memorandum of understanding, Interior is to serve as the lead for developing and coordinating federal policy and initiatives in Bay-Delta matters and is the co-chair of the Federal Bay-Delta Leadership Committee. Under the CALFED Act, Interior is required to annually submit a report to Congress, in cooperation with the Governor of California, that, among other things, describes the status of implementation of all CALFED components, which include water quality and ecosystem restoration components. Bureau of Reclamation. Reclamation administers the Central Valley Project, which has long-term contracts to supply water to more than 250 contractors in 29 of California’s 58 counties, and implements a number of actions under the Central Valley Project Improvement Act. The act was enacted for several purposes, including to protect, restore, and enhance fish, wildlife, and associated habitats.
Reclamation also implements other actions, such as those under the San Joaquin River Restoration Settlement Act. U.S. Fish and Wildlife Service. The U.S. Fish and Wildlife Service implements the Endangered Species Act for certain species. According to agency officials, the U.S. Fish and Wildlife Service is also a major landowner, with several National Wildlife Refuges throughout the watershed where restoration efforts are implemented. Additionally, according to agency officials, the U.S. Fish and Wildlife Service provides funding through grant programs, such as the North American Wetlands Conservation, National Coastal Wetlands Conservation, and Wildlife and Sportfish Restoration programs, and provides technical assistance through efforts such as the Partners for Fish and Wildlife, Coastal, and Tribal Wildlife programs. U.S. Geological Survey. According to U.S. Geological Survey officials, the agency’s role in the watershed includes conducting physical, chemical, and biological monitoring and scientific investigations to support water and water quality management, fish and wildlife management, and infrastructure management and protection. According to officials, the agency also provides policy-neutral technical support to Interior and other federal, state, and local entities. U.S. Environmental Protection Agency (EPA). EPA implements the Clean Water Act, including management of the National Estuary Program. According to agency officials, EPA also provides authorization, financial support, and oversight of the California State Water Resources Control Board, the partner state agency charged with implementing Clean Water Act programs in California, and provides direct funding, technical assistance, and oversight of programs and projects achieving Clean Water Act goals in the state. Selected State Government Entities with Restoration-Related Roles in the Watershed Several state government entities in California have roles related to water quality improvement and ecosystem restoration efforts in the watershed. A list of selected state agencies and information from the agencies summarizing their restoration-related roles follows: California Delta Stewardship Council. The Delta Stewardship Council is a planning and science agency, with some regulatory authority. The council develops and reviews the Delta Plan, the implementation of which is to further the restoration of the Delta ecosystem and a reliable water supply. The council also funds research, synthesizes and communicates scientific information to decision makers, and coordinates with Delta agencies to promote science-based adaptive management. In addition, the council establishes and oversees the Delta Plan Interagency Implementation Committee, a joint state-federal committee that implements the Delta Plan. California Natural Resources Agency. The Natural Resources Agency is a resource management agency, with some regulatory authority. Central Valley Flood Protection Board. The Central Valley Flood Protection Board establishes and enforces standards for the maintenance and operation of the flood control system; develops and implements the state’s flood protection plan for the Central Valley; and coordinates activities among the Corps and local flood control agencies. Department of Fish and Wildlife. The Department of Fish and Wildlife plans, collaborates on, enforces, and funds species management, habitat conservation, and wetlands restoration.
According to agency officials, the department also is a major owner of land where restoration efforts take place, such as the Napa-Sonoma Marsh Wildlife Area and Eden Landing Ecological Reserve, and houses the California Wildlife Conservation Board, which provides funding for restoration projects. Department of Water Resources. The Department of Water Resources administers the California State Water Project, including sales to water contractors. The department also implements and funds—through the State Water Project—two fish habitat restoration projects in response to NOAA and U.S. Fish and Wildlife Service biological opinions. In addition, the department develops the California Water Plan, the state’s overall water resources plan. Sacramento-San Joaquin Delta Conservancy. The Sacramento-San Joaquin Delta Conservancy plans, collaborates on (with local communities), implements, and funds projects in the Delta and Suisun Marsh to protect, improve, and restore habitats and ecosystems, improve water quality, and support water-related agricultural sustainability, among other things. San Francisco Bay Conservation and Development Commission. The San Francisco Bay Conservation and Development Commission plans, collaborates on, and regulates the San Francisco Bay, Bay shoreline, and Suisun Marsh; it also permits projects that fill or extract materials from the Bay. Sierra Nevada Conservancy. The Sierra Nevada Conservancy plans, collaborates on, implements, and funds projects in parts of the upper watershed to protect, improve, and restore habitats and ecosystems, improve water quality, and prepare for climate change, among other things. State Coastal Conservancy. The State Coastal Conservancy plans, collaborates on, implements, and funds—partly through voter-approved bonds—projects around the Bay to protect and improve natural lands, improve water quality and wildlife habitats, and prepare for climate change, among other things. California Environmental Protection Agency. The California Environmental Protection Agency is a regulatory agency. State Water Resources Control Board. The State Water Resources Control Board allocates water rights, adjudicates water rights disputes, develops statewide protection plans, establishes water quality standards, and guides the nine regional water quality control boards. San Francisco Bay Regional Water Quality Control Board. One of nine regional water quality control boards in California, the San Francisco Bay Regional Water Quality Control Board exercises rulemaking and regulatory activities for the Bay. Central Valley Regional Water Quality Control Board. One of nine regional water quality control boards in California, the Central Valley Regional Water Quality Control Board exercises rulemaking and regulatory activities for the Central Valley (including the Delta) of the upper watershed. Other Selected Nonfederal Entities with Restoration-Related Roles in the Watershed Other nonfederal entities—including local and regional government agencies, nongovernmental organizations, private businesses, and private landowners—have roles related to water quality improvement and ecosystem restoration efforts in the watershed. Other nonfederal entities and some of their restoration-related roles include the following: Central Valley Joint Venture. The Central Valley Joint Venture is a cooperative, regional partnership—partially supported through the U.S.
Fish and Wildlife Service and established under the North American Waterfowl Management Plan—that plans and coordinates migratory bird and other habitat restoration and conservation in the Central Valley. San Francisco Estuary Institute. The San Francisco Estuary Institute is a nonprofit science center that provides data and other technical tools for assessing the health of the waters, wetlands, wildlife, and landscapes of the Bay and Delta; manages the EcoAtlas database of restoration projects; and works closely with the California State Water Resources Control Board and the San Francisco Estuary Partnership. San Francisco Estuary Partnership. The San Francisco Estuary Partnership is a cooperative, regional partnership that develops and manages the comprehensive conservation and management plan for the San Francisco Estuary (i.e., the Bay Delta) under EPA’s National Estuary Program, including coordinating projects and leveraging funds. The partnership is staffed by the nine-county Association of Bay Area Governments and housed by the San Francisco Bay Regional Water Quality Control Board. San Francisco Bay Joint Venture. The San Francisco Bay Joint Venture is a cooperative, regional partnership—organized through the U.S. Fish and Wildlife Service and established under the North American Waterfowl Management Plan—that plans and coordinates migratory bird and other habitat restoration and conservation in the Bay. Other regional government agencies. Other regional government agencies have a variety of restoration-related roles, depending on the entity. In addition to the San Francisco Estuary Partnership, examples of regional government agencies with restoration roles in the watershed include the Bay Area Clean Water Agencies, Bay Area Flood Protection Agencies Association, and California Association of Resource Conservation Districts. Nongovernmental organizations. Other nongovernmental organizations have restoration-related roles in the watershed, including the Audubon Society, Bay Planning Coalition, Ducks Unlimited, Nature Conservancy, and Save the Bay. Local governments. Local governments have a variety of restoration-related roles, depending on the entity. For example, according to U.S. Fish and Wildlife Service officials, Marin and San Mateo Counties are recognized leaders in planning for climate resiliency in wetland restoration. Also, Alameda County uses sediment excavated from flood control district channels to build or create wetlands to provide vital wildlife habitat. In addition, water treatment facilities work with the California State Water Resources Control Board to help fund the San Francisco Estuary Institute’s water quality monitoring program. Dredging businesses. Dredging businesses work with the California State Water Resources Control Board to help fund the San Francisco Estuary Institute’s water quality monitoring program. Water contractors. Through obligations under the Central Valley Project and State Water Project, water contractors help fund certain restoration projects required under biological opinions by various regulatory agencies, including NOAA, the U.S. Fish and Wildlife Service, and the California Department of Fish and Wildlife, according to state officials. Private landowners. Some private landowners collaborate on or sell land for various restoration and conservation projects.
Private landowners include businesses (e.g., technology companies and an industrial salt pond owner) and farmers in the Bay, as well as farmers and ranchers throughout the Delta and upper watershed. Appendix II: Objectives, Scope, and Methodology In this report, we examine (1) the extent to which federal and nonfederal entities coordinate their San Francisco Bay Delta watershed restoration efforts, (2) the extent to which federal and nonfederal entities have developed measurable goals and approaches to assess progress for San Francisco Bay Delta watershed restoration efforts, (3) information on the status of San Francisco Bay Delta watershed restoration efforts and related expenditures for fiscal years 2007 through 2016, and (4) key factors that may limit San Francisco Bay Delta watershed restoration, according to federal and nonfederal entities. To address all four objectives, we reviewed relevant federal and state laws and documents. We also interviewed officials from more than 28 federal, state, and other entities that we identified through our review of laws and documents, snowball sampling, and their participation in regional interagency groups conducting restoration work in the San Francisco Bay Delta watershed. During these interviews, we asked about, among other things, restoration plans that coordinate multiple aspects of water quality improvement and ecosystem restoration efforts on a regional level in the San Francisco Bay Delta watershed. Officials and representatives we interviewed identified the Comprehensive Conservation and Management Plan (CCMP) and the Delta Plan as the overarching regional strategies for the Bay and Delta, respectively. We considered these strategies “comprehensive regional plans” and reviewed them to address our objectives. To address our objectives, we obtained information from a questionnaire we sent to all 61 federal, state, and other entities that serve on the boards or implementation committees of regional interagency groups conducting restoration work in our geographic scope. These groups were the San Francisco Bay Joint Venture, San Francisco Estuary Partnership, Delta Plan Interagency Implementation Committee, and Central Valley Joint Venture. The survey group includes many of the entities listed in appendix I. We also sent this questionnaire to federal agencies that are signatories of the CALFED record of decision and 4 other relevant organizations identified through snowball sampling. We initially identified and distributed our questionnaire to 78 entities. We sent a single questionnaire to each nonfederal entity (e.g., state agency, nongovernmental organization, or local government agency) and sent more than one questionnaire, as appropriate, to federal agencies that have offices or officials working in different parts of the watershed. We determined which federal level to survey based on a review of agency organizational charts and inquiries with agency officials. We considered each office or federal designee to be a separate federal entity due to the distinct nature of their work based on geographic region. To ensure that survey responses reflected the views of each entity, we included instructions for survey points of contact to collaborate with colleagues, as needed, and indicated that we wanted only one survey response from each entity. After we began our survey effort, we identified 6 entities as out of scope for a variety of reasons, such as being a subgroup of another entity we surveyed.
Our final population of surveyed entities was 72, of which 48 responded to our questionnaire, a response rate of 67 percent. In our questionnaire, we collected information on water quality improvement and ecosystem restoration efforts in the San Francisco Bay Delta watershed, including, among other things, (1) challenges that may limit restoration progress; (2) risks to the long-term overall success of water quality improvement and ecosystem restoration efforts; and (3) types of reports that entities could consider important when carrying out responsibilities related to water quality improvement and ecosystem restoration. To ensure that our survey questions were appropriate and that respondents could answer them in a reliable and meaningful way, we conducted survey pre-tests with 5 entities from the study population, had the questionnaire reviewed by an independent reviewer within GAO, and revised the questionnaire as appropriate based on the results of these efforts. The survey questionnaire used for this review is in appendix III. Our survey field period ran from December 4, 2017, through January 29, 2018. We distributed the questionnaire electronically through email. After the requested return date passed, we emailed or telephoned respondents who had not returned the questionnaire and asked them to respond. By January 29, 2018, we had received 48 questionnaires. To minimize potential nonresponse bias, we reviewed the key characteristics of respondents to ensure we received completed questionnaires from each of our population subgroups. Because we administered the questionnaire to the entire population rather than to a sample, the results are not subject to sampling error. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as difficulties in interpreting a particular question or differences in the sources of information available to respondents, which can introduce unwanted variability into the survey results. We took steps in developing the questionnaire and in collecting and analyzing the data to minimize such nonsampling error. Survey questionnaires may also be subject to error in entering and analyzing data. We implemented quality control procedures on our data entry by verifying the accuracy of the process. We noted any missing, irregular, or incorrect responses by the respondent and resolved these responses, as needed, through email correspondence with the relevant entities. To examine the extent to which federal and nonfederal entities coordinate their San Francisco Bay Delta watershed restoration efforts, we interviewed officials from federal, state, and other entities to identify key regional plans and coordination efforts. We reviewed these plans and efforts and compared federal coordination efforts against a selection of our leading practices for collaboration to assess the extent to which federal entities followed these practices. The selected leading practices for collaboration include whether participating agencies have clarified roles and responsibilities, developed ways to continually update and monitor written agreements on how agencies coordinate, and identified how leadership will be sustained over the long term. Through our questionnaire, discussed above, we also asked entities to identify coordination-related challenges, if any. To understand what restoration projects were being carried out, we obtained information from the San Francisco Estuary Institute’s EcoAtlas database and the Delta Stewardship Council’s DeltaView database on restoration projects.
We also conducted site visits to a nonprobability sample of four projects selected to provide illustrative examples of a variety of restoration activities in different locations in the watershed. We identified these sites by asking knowledgeable stakeholders about restoration projects in each region of the watershed that involved a variety of partners, including federal agencies, that were at various stages of completion. We then arranged visits that would allow us to observe projects in each region that illustrated a range of these selection criteria. We also conducted site visits to water project facilities, including a reservoir, dam, and pumping station. In addition, we attended the State of the San Francisco Estuary Conference in Oakland, California, on October 10 and 11, 2017, and observed many presentations and panel discussions on topics ranging from Delta restoration planning to pesticides in the estuary, by a wide range of officials from federal and nonfederal entities conducting restoration efforts across the watershed. To examine the extent to which federal and nonfederal entities have developed measurable goals and approaches to assess progress for San Francisco Bay Delta watershed restoration efforts, we reviewed comprehensive regional plans and related goals and progress reports, including the technical appendix for the State of the Estuary report. To do so, we looked for factors such as goals with quantifiable metrics and targets, as well as indicators used to assess and report progress. We also interviewed officials from federal, state, and other entities, including scientific groups, about efforts to develop measurable goals and assess restoration progress. To examine information on the status of San Francisco Bay Delta watershed restoration efforts and related expenditures for fiscal years 2007 through 2016, we obtained and analyzed available data—collected from the EcoAtlas and DeltaView databases—that included information about projects, expenditures, and cost estimates for this period. This period covers the time before and after the state withdrew from the CALFED federal-state partnership, as originally structured, and includes the last full fiscal year for which the most recent data were available at the time of our review. We assessed the reliability of these data by interviewing knowledgeable officials and reviewing database documentation and determined that they were not reliable for purposes of identifying all restoration projects across the entire watershed and for reporting related expenditure data. We also reviewed federal and state reports on budget requests and authority for that period and interviewed officials from federal, state, and other entities about available sources of data on projects, expenditures, and cost estimates. We also obtained and reviewed OMB’s Bay Delta budget crosscuts, which include financial information for San Francisco Bay Delta watershed restoration efforts reported by federal and state agencies, for fiscal years 2007 through 2019. We assessed the reliability of the data in the federal budget crosscut reports and tables by interviewing federal agency officials about what data they provided for the reports and tables and analyzing the data provided in the crosscut reports. We determined that the data were reliable only to report examples of the magnitude of funding for individual agencies. 
We determined that these data were not reliable to aggregate funding levels across programs and agencies or to compare funding levels of the various agencies, as we discuss in this report. We then compared OMB’s written guidance on submitting data for the crosscut reports with federal standards for internal control to assess the extent to which federal agencies followed the standard for design of control activities. To determine key factors that may limit San Francisco Bay Delta watershed restoration, according to federal and nonfederal entities, we sent the survey questionnaire described above to federal, state, and other entities to obtain views on (1) challenges that may limit restoration progress and (2) risks to the long-term overall success of water quality improvement and ecosystem restoration efforts. We also interviewed officials from federal, state, and other entities about factors that may limit restoration progress, as well as reviewed progress reports and studies exploring these factors. We conducted this performance audit from April 2017 to August 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Summary Results of GAO Survey Questionnaire of Federal and Nonfederal Entities We distributed this survey questionnaire to 72 federal and nonfederal entities that work in the San Francisco Bay Delta watershed. In this survey, we collected information on water quality improvement and ecosystem restoration efforts in the San Francisco Bay Delta watershed, including, among other things, (1) challenges that may limit restoration progress; (2) risks to the long-term overall success of water quality improvement and ecosystem restoration efforts; and (3) types of reports that entities could consider important when carrying out responsibilities related to water quality improvement and ecosystem restoration. The following copy of this survey questionnaire includes summary information for the responses provided by federal and nonfederal entities. It does not include information for narrative responses. Appendix V: Comments from the California Delta Stewardship Council Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Janet Frisch (Assistant Director), Susan Iott (Assistant Director), Chad M. Gorman (Analyst-in-Charge), Chuck Bausell, Stephen Betsock, Mark Braza, Marissa Dondoe, Ellen Fried, Carol Henn, Karen Howard, Richard Johnson, Gwen Kirby, Ben Licht, John Mingus, Tricia Moye, Rebecca Parkhurst, Sara Sullivan, Sarah Veale, Michelle R. Wong, Elizabeth Wood, and Edith Yuh made key contributions to this report. Related GAO Products Great Lakes Restoration Initiative: Improved Data Collection and Reporting Would Enhance Oversight. GAO-15-526. Washington, DC: July 21, 2015. Bureau of Reclamation: Financial Information for Three California Water Programs. GAO-15-468R. Washington, DC: June 4, 2015. Great Lakes Restoration Initiative: Further Actions Would Result in More Useful Assessments and Help Address Factors That Limit Progress. GAO-13-797. Washington, DC: September 27, 2013.
Chesapeake Bay: Restoration Effort Needs Common Federal and State Goals and Assessment Approach. GAO-11-802. Washington, DC: September 15, 2011. Recent Actions by the Chesapeake Bay Program Are Positive Steps Toward More Effectively Guiding the Restoration Effort, but Additional Steps Are Needed. GAO-08-1131R. Washington, DC: August 28, 2008. Coastal Wetlands: Lessons Learned from Past Efforts in Louisiana Could Help Guide Future Restoration and Protection. GAO-08-130. Washington, DC: December 14, 2007. South Florida Ecosystem: Restoration Is Moving Forward but Is Facing Significant Delays, Implementation Challenges, and Rising Costs. GAO-07-520. Washington, DC: May 31, 2007. Chesapeake Bay Program: Improved Strategies Are Needed to Better Assess, Report, and Manage Restoration Progress. GAO-06-96. Washington, DC: October 28, 2005. Great Lakes: Organizational Leadership and Restoration Goals Need to Be Better Defined for Monitoring Restoration Progress. GAO-04-1024. Washington, DC: September 28, 2004. Watershed Management: Better Coordination of Data Collection Efforts Needed to Support Key Decisions. GAO-04-382. Washington, DC: June 7, 2004. Great Lakes: An Overall Strategy and Indicators for Measuring Progress Are Needed to Better Achieve Restoration Goals. GAO-03-515. Washington, DC: April 30, 2003.
Why GAO Did This Study The San Francisco Bay Delta watershed—which drains a vast area of California from the Sierra Nevada Mountains to the Pacific Ocean—supplies drinking water for 25 million people and provides irrigation for about half the nation's fruit and vegetable production. Decades of development and agriculture have led to large reductions in water quality and supply, natural flood protection, and habitats across the watershed's three major regions: the Bay, the Delta, and the upper watershed. Federal entities have been working with nonfederal entities for decades to protect and restore the watershed. GAO was asked to review restoration efforts in the watershed. This report examines, among other objectives, (1) the extent to which federal and nonfederal entities coordinate watershed restoration efforts and (2) information on the status of these efforts and related expenditures for fiscal years 2007 through 2016, the most recent data available. GAO reviewed laws; regional databases, plans, and reports; and budget documents. It also surveyed the 72 members of interagency groups (48 responded) and interviewed federal and nonfederal officials. What GAO Found Federal entities, including the Department of the Interior, and nonfederal entities, such as California state agencies and nonprofits, carry out and coordinate a wide range of restoration efforts in the San Francisco Bay Delta watershed. These efforts have multiple benefits, such as improved water quality and habitat in restored marshland. The entities coordinate comprehensive efforts in the San Francisco Bay area (Bay) and Sacramento-San Joaquin Delta (Delta) through two groups. Federal efforts across the watershed are to be led and coordinated by Interior and the Council on Environmental Quality (CEQ) through a 2009 Interim Federal Action Plan, but not all federal entities are using the plan. Interior officials said the plan is no longer relevant because state and federal roles have changed. For example, they said a state-led committee acts as the coordinating body for federal entities; however, this committee focuses on one region of the watershed, while federal funding supports efforts in all three regions. By updating or revising the Interim Federal Action Plan, Interior and CEQ could help clarify federal roles in supporting restoration efforts in the watershed. Information on the status of all restoration efforts across the watershed, including their accomplishments, is unknown because information is not being fully collected or reported. Also, related expenditures for fiscal years 2007 through 2016 are unknown, in part because federal reports do not include complete or reliable data for restoration efforts in the watershed. The 2004 CALFED Bay-Delta Authorization Act requires Interior and the Office of Management and Budget (OMB) to report annually to Congress on restoration accomplishments and federal and state expenditures in the watershed, respectively. Interior has not issued these reports since 2009, when the state agency from which Interior had obtained the state data was abolished. OMB has issued its reports with federal, but not state, data for the same reason. However, Interior and OMB have not reached out to other state entities for this information.
Without obtaining and reporting available information, as required by law, Interior and OMB will not have reasonable assurance that they are providing Congress with the information needed to monitor federal and nonfederal restoration efforts and expenditures. What GAO Recommends GAO made seven recommendations, including that Interior and CEQ update or revise the Interim Federal Action Plan and that Interior and OMB coordinate with the state to meet the CALFED Act's reporting requirements. Interior partially concurred with the recommendations, and CEQ and OMB neither agreed nor disagreed with them. GAO maintains its recommendations are valid.
Background The Freedom of Information Act establishes a legal right of access to government information on the basis of the principles of openness and accountability in government. Before FOIA’s enactment in 1966, an individual seeking access to federal records faced the burden of establishing a “need to know” before being granted the right to examine a federal record. FOIA established a “right to know” standard, under which an organization or person could receive access to information held by a federal agency without demonstrating a need or reason. The “right to know” standard shifted the burden of proof from the individual to a government agency and required the agency to provide proper justification when denying a request for access to a record. Any person, defined broadly to include attorneys filing on behalf of an individual, corporations, or organizations, can file a FOIA request. For example, an attorney can request labor-related workers’ compensation files on behalf of his or her client, and a commercial requester, such as a data broker who files a request on behalf of another person, may request a copy of a government contract. In response, an agency is required to provide the relevant record(s) in any readily producible form or format specified by the requester, unless the record falls within a permitted exemption that provides limitations on the disclosure of information. FOIA Amendments and Guidance Call for Improvements in How Agencies Process Requests Various amendments have been enacted and guidance issued to help improve agencies’ processing of FOIA requests, including: The Electronic Freedom of Information Act Amendments of 1996 (e-FOIA amendments) strengthened the requirement that federal agencies respond to a request in a timely manner and reduce their backlogged requests. The amendments, among other things, made a number of procedural changes, including allowing a requester to limit the scope of a request so that it could be processed more quickly and requiring agencies to determine within 20 working days whether a request would be fulfilled. This was an increase from the previously established time frame of 10 business days. The amendments also authorized agencies to multi-track requests—that is, to process simple and complex requests concurrently on separate tracks to facilitate responding to a relatively simple request more quickly. In addition, the amendments encouraged online, public access to government information by requiring agencies to make specific types of records available in electronic form. Executive Order 13392, issued by the President in 2005, directed each agency to designate a senior official as its chief FOIA officer. This official was to be responsible for ensuring agency-wide compliance with the act by monitoring implementation throughout the agency and recommending changes in policies, practices, staffing, and funding, as needed. The chief FOIA officer was directed to review and report on the agency’s performance in implementing FOIA to agency heads and to Justice on an annual basis. (These are referred to as chief FOIA officer reports.) The OPEN Government Act, which was enacted in 2007, made the 2005 executive order’s requirement for agencies to have a chief FOIA officer a statutory requirement. It also required agencies to submit an annual report to Justice outlining their administration of FOIA, including additional statistics on timeliness.
Specifically, the act called for agencies to adequately track their FOIA request processing information throughout the reporting year and then produce reports on that information to comply with FOIA reporting requirements and Justice guidance for reporting. The FOIA Improvement Act of 2016 addressed procedural issues, including requiring that agencies: (1) make records available in an electronic format if they have been requested three or more times; (2) notify requesters that they have a minimum of 90 days to file an administrative appeal; and (3) provide dispute resolution services at various times throughout the FOIA process. This act also created more duties for chief FOIA officers, including requiring them to offer training to agency staff regarding FOIA responsibilities. The act also revised and added new obligations for OGIS, and created the Chief FOIA Officers Council to assist in compliance and efficiency. Further, the act required OMB, in consultation with Justice, to create a consolidated online FOIA request portal that allows the public to submit a request to any agency through a single website. FOIA Authorizes Agencies to Use Other Federal Statutes to Withhold Information Prohibited from Disclosure In responding to requests, FOIA authorizes agencies to utilize one of nine exemptions to withhold portions of records, or entire records. Agencies may use an exemption when it has been determined that disclosure of the requested information would harm an interest related to certain protected areas. These nine exemptions can be applied by agencies to withhold various types of information, such as information concerning foreign relations, trade secrets, and matters of personal privacy. One such exemption, the statutory (b)(3) exemption, specifically authorizes withholding information under FOIA on the basis of a law which: requires that matters be withheld from the public in such a manner as to leave no discretion on the issue; or establishes particular criteria for withholding or refers to particular types of matters to be withheld; and if enacted after October 28, 2009, specifically refers to section 552(b)(3) of title 5, United States Code. To account for agencies’ use of the statutory (b)(3) exemption, FOIA requires each agency to submit, in its annual report to Justice, a complete listing of all statutes that the agency relied on to withhold information under exemption (b)(3). The act also requires that the agency describe for each statute identified in its report (1) the number of occasions on which each statute was relied upon; (2) a description of whether a court has upheld the decision of the agency to withhold information under each such statute; and (3) a concise description of any information withheld. Further, to provide an overall summary of the statutory (b)(3) exemptions used by agencies in a fiscal year, Justice produces consolidated annual reports that list the statutes used by agencies in conjunction with (b)(3). Processing a FOIA Request As previously noted, agencies are generally required by the e-FOIA amendments of 1996 to respond to a FOIA request within 20 working days. Once received, the request is to be processed through multiple phases, which include assigning a tracking number, searching for responsive records, and releasing the responsive records to the requester. Also, as relevant, FOIA allows a requester to challenge an agency’s final decision on a request through an administrative appeal or a lawsuit.
Specifically, a requester has the right to file an administrative appeal if he or she disagrees with the agency’s decision on the request. Agencies have 20 working days to respond to an administrative appeal. Figure 1 provides a simplified overview of the FOIA request and appeals process. In a typical agency, as indicated, during the intake phase, a request is logged into the agency’s FOIA tracking system, and a tracking number is assigned. The request is then reviewed by FOIA staff to determine its scope and level of complexity. The agency then typically sends a letter or email to the requester acknowledging receipt of the request, with a unique tracking number that the requester can use to check the status of the request. Next, FOIA staff (non-custodians) begin the search to retrieve the responsive records by routing the request to the appropriate program office(s). This step may include requesting that the custodian (owner) of the record search and review paper and electronic records from multiple locations and program offices. Agency staff then process the responsive records, which includes determining whether a portion or all of any record should be withheld based on FOIA’s exemptions. If a portion or all of any record is the responsibility of another agency, FOIA staff may consult with the other agency or may send (“refer”) the document(s) to that other agency for processing. After processing and redaction, a request is reviewed for errors and to ensure quality. The documents are then released to the requester, either electronically or by regular mail. In addition, FOIA allows requesters to sue an agency in federal court if the agency does not respond to a request for information within the statutory time frames or if the requesters believe they are entitled to information that is being withheld by the agency. Further, the act requires the Office of Special Counsel (OSC) to initiate a proceeding to determine whether disciplinary action is warranted against agency personnel in cases involving lawsuits where a court has found, among other things, that agency personnel may have acted arbitrarily or capriciously in responding to a FOIA request. The act requires Justice to notify OSC when a lawsuit meets this requirement. FOIA Oversight and Implementation Responsibility for the oversight of FOIA implementation is spread across several federal offices and other entities. These include Justice’s Office of Information Policy (OIP), the National Archives and Records Administration’s (NARA) Office of Government Information Services (OGIS), and the Chief FOIA Officers Council. These oversight agencies and the council have taken steps to assist agencies in addressing the provisions of FOIA. Justice’s OIP is responsible for encouraging agencies’ compliance with FOIA and overseeing their implementation of the act. In this regard, the office, among other things, provides guidance, compiles information on FOIA compliance, provides FOIA training, and prepares annual summary reports on agencies’ FOIA processing and litigation activities. The office also offers FOIA counseling services to government staff and the public. Issuing guidance. OIP has developed guidance, available on its website, to assist federal agencies by instructing them in how to ensure timely determinations on requests, expedite the processing of requests, and reduce backlogs. The guidance also informs agencies on what should be contained in their annual FOIA reports to the Attorney General. The office also has documented ways for federal agencies to address backlogged requests.
In March 2009, the Attorney General issued guidance and related policies to encourage agencies to reduce their backlogs of FOIA requests. In addition, in December 2009, OMB issued a memorandum on the OPEN Government Act, which called for a reduction in backlogs and the publishing of plans to reduce backlogs. Further, in August 2014, OIP held a best practices workshop and issued guidance to agencies on reducing FOIA backlogs and improving the timeliness of agencies’ responses to FOIA requests. The OIP guidance instructed agencies to obtain leadership support, routinely review FOIA processing metrics, and set up staff training on FOIA. Overseeing agencies’ compliance. OIP collects information on compliance with the act by reviewing agencies’ annual FOIA reports and chief FOIA officer reports. These reports describe the number of FOIA requests received and processed in a fiscal year, as well as the total costs associated with processing and litigating requests. Providing training. The office offers an annual training class that provides a basic overview of the act, as well as hands-on courses about the procedural requirements involved in processing a request from start to finish. In addition, it offers a seminar outlining successful litigation strategies for attorneys who handle FOIA cases. Preparing administrative and legal annual reports. OIP prepares two major reports yearly—one related to agencies’ annual FOIA processing and one related to agencies’ FOIA litigation and compliance. The first report, compiled from agencies’ annual FOIA reports, contains statistics on the number of requests received and processed by each agency, the time taken to respond, and the outcome of each request, as well as other statistics on FOIA administration, such as the number of backlogged requests and the use of exemptions to withhold information from a requester. The second report describes Justice’s efforts to encourage compliance with the act and provides a listing of all FOIA lawsuits filed or determined in that year, the exemptions and/or dispositions involved in each case, and any court-assessed costs, fees, and penalties. NARA’s OGIS was established by the OPEN Government Act of 2007 to oversee and assist agencies in implementing FOIA. OGIS’s responsibilities include reviewing agency policies and procedures, reviewing agency compliance, recommending policy changes, and offering mediation services. The 2016 FOIA amendments required agencies to update response letters to FOIA requesters to include information concerning the roles of OGIS and agencies’ FOIA public liaisons. As such, OGIS and Justice worked together to develop a response letter template that includes the required language for agency letters. In addition, OGIS, which is charged with reviewing agencies’ compliance with FOIA, launched a FOIA compliance program in 2014. OGIS also developed a FOIA compliance self-assessment program, which is intended to help OGIS look for potential compliance issues across federal agencies. The Chief FOIA Officers Council is co-chaired by the Director of OIP and the Director of OGIS. Council members include senior representatives from OMB, OIP, and OGIS, together with the chief FOIA officers of each agency, among others.
The council’s FOIA-related responsibilities include: developing recommendations for increasing compliance and disseminating information about agency experiences, ideas, best practices, and innovative approaches; identifying, developing, and coordinating initiatives to increase transparency and compliance; and promoting the development and use of common performance measures for agency compliance. Selected Agencies Collect and Maintain Records That Can Be Subject to FOIA Requests The 18 agencies selected for our review are charged with a variety of operations that affect many aspects of federal service to the public. Thus, by the nature of their missions and operations, the agencies have responsibility for vast and varied amounts of information that can be subject to a FOIA request. For example, the Department of Homeland Security’s (DHS) mission is to protect the American people and the United States homeland. As such, the department maintains information covering, among other things, immigration, border crossings, and law enforcement. As another example, the Department of the Interior’s (DOI) mission includes protecting and managing the Nation’s natural resources and, thus, providing scientific information about those resources. Table 1 provides details on each of the 18 selected agencies’ missions and the types of information they maintain. The 18 selected agencies reported that they received and processed more than 2 million FOIA requests from fiscal years 2012 through 2016. Over this 5-year period, the number of reported requests received fluctuated among the agencies. In this regard, some agencies saw a continual rise in the number of requests, while other agencies experienced an increase or decrease from year to year. For example, from fiscal years 2012 through 2014, DHS saw an increase in the number of requests received (from 190,589 to 291,242), but in fiscal year 2015 the department saw the number of requests received decrease to 281,138. Subsequently, in fiscal year 2016, the department experienced an increase to 325,780 requests received. In addition, from fiscal years 2012 through 2015, the reported numbers of requests processed by the selected agencies showed a relatively steady increase. However, in fiscal year 2016, the reported number of requests processed by these agencies declined. Figure 2 provides a comparison of the total number of requests received and processed in this 5-year period. Selected Agencies Implemented the Majority of FOIA Requirements Reviewed Among other things, the FOIA Improvement Act of 2016 and the OPEN Government Act of 2007 call for agencies to (1) update response letters, (2) implement tracking systems, (3) provide FOIA training, (4) provide required records online, (5) designate chief FOIA officers, and (6) update and publish timely and comprehensive regulations. As part of our ongoing work, we determined that the 18 selected agencies included in our review had implemented the majority of the six FOIA requirements evaluated. Specifically, all 18 agencies updated their response letters, implemented tracking systems, and offered FOIA training; 15 agencies provided required records online; and 12 agencies designated chief FOIA officers at the required level. However, only 5 of the agencies published and updated their FOIA regulations in a timely and comprehensive manner. Figure 3 summarizes the extent to which the 18 agencies implemented the selected FOIA requirements.
Beyond these selected agencies, Justice’s OIP and OMB also had taken steps to develop a government-wide FOIA request portal that is intended to allow the public to submit a request to any agency from a single website. Selected Agencies Have Updated Their FOIA Response Letters The 2016 amendments to FOIA required agencies to include specific information in their responses when making their determinations on requests. Specifically, agencies must inform requesters that they may seek assistance from the FOIA Public Liaison; file an appeal of an adverse determination within a period of time that is not less than 90 days after the date of such adverse determination; and seek dispute resolution services from the FOIA Public Liaison of the agency or OGIS. Among the 18 selected agencies, all had updated their FOIA response letters to include this required information. All Selected Agencies Had Implemented FOIA Tracking Systems Various FOIA amendments and guidance call for agencies to use automated systems to improve the processing and management of requests. In particular, the OPEN Government Act of 2007 amended FOIA to require that federal agencies establish a system to provide individualized tracking numbers for requests that will take longer than 10 days to process and establish telephone or Internet service to allow requesters to track the status of their requests. Further, the President’s January 2009 Freedom of Information Act memorandum instructed agencies to use modern technology to inform citizens about what is known and done by their government. In addition, FOIA processing systems, like all automated information technology systems, are to comply with the requirements of Section 508 of the Rehabilitation Act (as amended). This act requires federal agencies to make their electronic information accessible to people with disabilities. Each of the 18 selected agencies had implemented a system that provides capabilities for tracking requests received and processed, including an individualized number for tracking the status of a request. Specifically: Ten agencies used commercial automated systems (DHS, EEOC, FDIC, FTC, Justice, NTSB, NASA, Pension Benefit Guaranty Corporation, and USAID). Three agencies developed their own agency systems (State, DOI, and TVA). Five agencies used Microsoft Excel or Word to track requests (Administrative Conference of the United States, American Battle Monuments Commission, Broadcasting Board of Governors, OMB, and U.S. African Development Foundation). Further, all of the agencies had established telephone or Internet services to assist requesters in tracking the status of requests, and they used modern technology (e.g., mobile applications) to inform citizens about FOIA. For example, the commercial systems allow requesters to submit a request and track the status of that request online. In addition, DHS developed a mobile application that allows FOIA requesters to submit requests and check the status of existing requests. All Reviewed Agencies’ Chief FOIA Officers Have Offered FOIA Training The 2016 FOIA amendments require agencies’ chief FOIA officers to offer training to agency staff regarding their responsibilities under FOIA. In addition, Justice’s OIP has advised every agency to make such training available to all of their FOIA staff at least once each year. The office has also encouraged agencies to take advantage of FOIA training opportunities available throughout the government.
The 18 selected agencies’ chief FOIA officers offered FOIA training opportunities to staff in fiscal years 2016 and 2017. For example: Eleven agencies provided training that gave an introduction and overview of FOIA (the American Battle Monuments Commission, EEOC, Justice, FDIC, FTC, NARA, Pension Benefit Guaranty Corporation, State, TVA, U.S. African Development Foundation, and USAID). Three agencies offered training for their agencies’ new online FOIA tracking and processing systems (DOI, NTSB, and Pension Benefit Guaranty Corporation). Three agencies provided training on responding to, handling, and processing FOIA requests (DHS, DOI, and State). Three agencies offered training on understanding and applying the exemptions under FOIA (FDIC, FTC, and U.S. African Development Foundation). Two agencies offered training on the processing of costs and fees (NASA and TVA). The Majority of Selected Agencies Post Required Records Online Memorandums from both the President and the Attorney General in 2009 highlight the importance of online disclosure of information and further direct agencies to make information available without a specific FOIA request. Further, the 2016 FOIA amendments require online access to government information and require agencies to make information available to the public in electronic form for up to four categories of records: agency final opinions and orders, statements of agency policy and interpretations not published in the Federal Register, administrative staff manuals of interest to the public, and frequently requested records. While all 18 agencies that we reviewed post records online, only 15 of them had posted all categories of information, as required by the FOIA amendments. Specifically, 7 agencies—the American Battle Monuments Commission, the Pension Benefit Guaranty Corporation, EEOC, FDIC, FTC, Justice, and State—had, as required, made records in all four categories publicly available online. In addition, 5 agencies that were only required to publish online records in three of the categories—the Administrative Conference of the United States, Broadcasting Board of Governors, DHS, OMB, and USAID—had done so. Further, 3 agencies that were only required to publish online records in two of the categories—U.S. African Development Foundation, NARA, and TVA—had done so. The remaining 3 agencies—DOI, NASA, and NTSB—had posted records online for three of their four required categories. Regarding why these three agencies did not post all of their required categories of online records, DOI officials stated that the agency does not make publicly available all FOIA records that have been requested 3 or more times, as it does not have the time to post all such records. NASA officials explained that, while the agency issues final opinions, it does not post them online. As for NTSB, while its officials said they try to post information that is frequently requested, they do not post the information on a consistent basis. Making the four required categories of information available in electronic form is an important step in allowing the public to easily access government documents. Until these agencies make all required categories of information available in electronic form, they cannot ensure that they are providing the required openness in government. Most Reviewed Agencies Designated a Senior Official as a Chief FOIA Officer In 2005, the President issued an executive order that established the role of a Chief FOIA Officer.
In 2007, amendments to FOIA required each agency to designate a chief FOIA officer who shall be a senior official at the Assistant Secretary or equivalent level. Of the 18 selected agencies, 12 have chief FOIA officers who are senior officials at the Assistant Secretary or equivalent level. The Assistant Secretary level is comparable to senior executive level positions at levels III, IV, and V. Specifically, State has designated its Assistant Secretary for the Bureau of Administration; DOI and NTSB have designated their Chief Information Officers; the Administrative Conference of the United States, Broadcasting Board of Governors, FDIC, NARA, and U.S. African Development Foundation have designated their general counsels; and Justice, NASA, TVA, and USAID designated their Associate Attorney General, Associate Administrator for Communications, Vice President for Communications, and Assistant Administrator for the Bureau of Management, respectively. However, 6 agencies (American Battle Monuments Commission, DHS, EEOC, Pension Benefit Guaranty Corporation, FTC, and OMB) do not have chief FOIA officers who are senior officials at the Assistant Secretary or equivalent level. Officials from 5 of these agencies said that their agencies have chief FOIA officers and that they believed they had designated the appropriate officials. Officials at FTC acknowledged that the chief FOIA officer position is not designated at a level equivalent to an Assistant Secretary, but said it is a senior position within the agency. However, while there are chief FOIA officers at these agencies, until those officers are designated at the Assistant Secretary or equivalent level, the agencies will lack assurance that the officers have the necessary authority to make decisions about agency practices, personnel, and funding. Most Reviewed Agencies Have Not Updated Regulations as Required to Inform the Public of Their FOIA Operations FOIA requires federal agencies to publish regulations in the Federal Register that inform the public of their FOIA operations. Specifically, in 2016, FOIA was amended to require agencies to update their regulations regarding their FOIA operations. To assist agencies in meeting this requirement, OIP created a FOIA regulation template for agencies to use as they update their regulations. Among other things, OIP’s guidance encouraged agencies to: describe their dispute resolution process; describe their administrative appeals process; notify requesters in response letters that they have a minimum of 90 days to file an appeal; inform requesters that the agency may charge fees for requests determined to involve “unusual” circumstances; and update the regulations in a timely manner (i.e., within 180 days after the enactment of the 2016 FOIA amendments). Five agencies in our review—DHS, DOI, FDIC, FTC, and USAID—addressed all five requirements in updating their regulations. In addition, seven agencies (the Administrative Conference of the United States, EEOC, Justice, NARA, NTSB, Pension Benefit Guaranty Corporation, and TVA) addressed four of the five requirements; these agencies did not update their regulations in a timely manner. Further, four agencies addressed three or fewer requirements (U.S. African Development Foundation, State, NASA, and Broadcasting Board of Governors), and two agencies (American Battle Monuments Commission and OMB) did not address any of the requirements. Figure 4 indicates the extent to which the 18 agencies had addressed the five selected requirements.
Agencies that did not address all five requirements provided several explanations as to why their regulations were not updated as required: American Battle Monuments Commission officials stated that, while the agency updated its draft regulation in August 2017, the regulation is currently unpublished due to internal reviews with the General Counsel in preparation for submission to the Federal Register. No new posting date has been established. American Battle Monuments Commission last updated its regulation on February 26, 2003. State officials noted that their regulation was updated two months prior to the new regulation requirements, but they did not provide a specific reason for not reissuing the regulation. They explained that a working group is reviewing the regulation for updates, with no timeline identified. State last updated its regulation on April 6, 2016. NASA officials did not provide a reason for not updating the agency’s regulation as required. Officials did, however, state that its draft regulation is with the Office of General Counsel for review. NASA last updated its regulations on August 11, 2017. Broadcasting Board of Governors officials did not provide a reason for not updating the agency’s regulation as required. Officials did, however, note that the agency is in the process of updating its regulation and anticipates it will complete this update by the end of 2018. The Broadcasting Board of Governors last updated its regulation on February 2, 2002. OMB officials did not provide a reason for not updating the agency’s regulation as required. Officials did, however, state that due to a change in leadership they do not have a time frame for updating their regulation. OMB last updated its regulation on May 27, 1998. The chief FOIA officer at the U.S. African Development Foundation stated that, while the agency had updated and submitted its regulation to be published in December 2016, the regulation went unpublished due to an error that occurred with the acknowledgement needed to publish the regulation in the Federal Register. The regulation was subsequently published on February 3, 2017. The official further noted that the agency has not charged a fee for unusual circumstances when responding to FOIA requests, and therefore officials did not believe they had to disclose information regarding fees in the regulation. Until these six agencies publish updated regulations that address the necessary requirements, as called for in FOIA and OIP guidance, they likely will be unable to provide the public with required regulatory and procedural information to ensure transparency and accountability in the government. Justice and OMB Have Taken Steps to Develop an Online FOIA Request Portal The 2016 FOIA amendments required OMB to work with Justice to build a consolidated online FOIA request portal. This portal is intended to allow the public to submit a request to any agency from a single website and include other tools to improve the public’s access to the benefits of FOIA. Further, the act required OMB to establish standards for interoperability between the consolidated portal and agency FOIA systems. The 2016 FOIA amendments did not provide a time frame for developing the portal and standards. With OMB’s support, Justice developed an initial online portal. Justice’s OIP officials stated that they expect to update the portal to provide basic functionality that aligns with requirements of the act, including the ability to make a FOIA request, and technical processes for interoperability among agencies’ various FOIA systems.
According to OIP officials, in partnership with OMB, OIP was able to identify a dedicated funding source to operate and maintain the portal to ensure its success in the long term, with major agencies sharing in the costs to operate, maintain, and fund any future enhancements designed to improve FOIA processes. The first iteration of the National FOIA portal launched on Justice’s foia.gov website on March 8, 2018. Agencies Have Methods to Reduce Backlogged Requests, but Their Efforts Have Shown Mixed Results In our draft report, we determined that the 18 selected agencies in our review had FOIA request backlogs of varying sizes, ranging from no backlogged requests at some agencies to 45,000 or more requests at other agencies. Generally, the agencies with the largest backlogs had received the most requests. In an effort to aid agencies in reducing their backlogs, Justice’s OIP identified key practices that agencies can use. However, while the agencies reported using these practices and other methods, few of them managed to reduce their backlogs during the period from fiscal year 2012 through 2016. In particular, of the four agencies with the largest backlogs, only one—NARA—reduced its backlog. Agencies attributed their inability to decrease backlogs to the number and complexity of requests, among other factors. However, agencies also lack comprehensive plans to implement practices on an ongoing basis. Agencies Have FOIA Request Backlogs of Varying Sizes, and Most Increased from Fiscal Year 2012 through 2016 The selected agencies in our review varied considerably in the size of their FOIA request backlogs. Specifically, from fiscal year 2012 through 2016, of the 18 selected agencies, 10 reported a backlog of 60 or fewer requests (and of these 10 agencies, 6 reported having no backlog in at least 1 year); 4 agencies had backlogs of between 61 and 1,000 requests per year; and 4 agencies had backlogs of over 1,000 requests per year. The four agencies with backlogs of more than 1,000 requests for each year we examined were Justice, NARA, State, and DHS. Table 2 shows the number of requests and the number of backlogged requests for the 18 selected agencies during the 5-year period. Over the 5-year period, 14 of the 18 selected agencies experienced an increase in their backlogs in at least 1 year. By contrast, 2 agencies (Administrative Conference of the United States and the U.S. African Development Foundation) reported no backlogs, and 3 agencies (American Battle Monuments Commission, NASA, and NARA) reported reducing their backlogs. Further, of the four agencies with the largest backlogs (DHS, State, Justice, and NARA), only NARA reported a backlog lower in fiscal year 2016 than in fiscal year 2012. Figure 5 shows the trends for the four agencies with the largest backlogs, compared with the rest of the 18 agencies. In most cases, agencies with small or no backlogs (60 or fewer) also received relatively few requests. For example, the Administrative Conference of the United States and the U.S. African Development Foundation reported no backlogged requests during any year but also received fewer than 30 FOIA requests a year. The American Battle Monuments Commission also received fewer than 30 requests a year and only reported 1 backlogged request per year in 2 of the 5 years examined. However, the Pension Benefit Guaranty Corporation and FDIC received thousands of requests over the 5-year period but maintained zero backlogs in a majority of the years examined.
The Pension Benefit Guaranty Corporation (PBGC) received a total of 19,120 requests during the 5-year period and reported a backlog of 8 requests in only one year, fiscal year 2013. FDIC received a total of 3,405 requests during the 5-year period and reported a backlog of 13 requests in fiscal year 2015 and 4 in fiscal year 2016. The four agencies with backlogs of 1,000 or more (Justice, NARA, State, and DHS) received significantly more requests each year. For example, NARA received between about 12,000 and 50,000 requests each year, while DHS received from about 190,000 to 325,000 requests. In addition, the number of requests NARA received in fiscal year 2016 was more than double the number received in fiscal year 2012. DHS received the most requests of any agency—a total of 1,320,283 FOIA requests over the 5-year period. Agencies Identified a Variety of Methods to Reduce Backlogs, but Few Saw Reductions The Attorney General’s March 2009 memorandum called on agency chief FOIA officers to review all aspects of their agencies’ FOIA administration and report to Justice on steps that have been taken to improve FOIA operations and disclosure. Subsequent Justice guidance required agencies to include in their chief FOIA officer reports information on their FOIA request backlogs, including whether the agency experienced a backlog of requests; whether that backlog decreased from the previous year; and, if not, reasons the backlog did not decrease. In addition, agencies that had more than 1,000 backlogged requests in a given year were required to describe their plans to reduce their backlogs. Beginning in fiscal year 2015, these agencies were to describe how they implemented their plans from the previous year and whether that resulted in a backlog reduction. In addition, Justice’s OIP identified best practices for reducing FOIA backlogs. The office held a best practices workshop on reducing backlogs and improving timeliness. The office then issued guidance in August 2014 that highlighted key practices to improve the quality of a FOIA program. OIP identified the following methods in its best practices guidance: Utilize resources effectively. Agencies should allocate their resources effectively by using multi-track processing, making use of available technology, and shifting priorities and staff assignments to address needs and effectively manage workloads. Routinely review metrics. Agencies should regularly review their FOIA data and processes to identify challenges or barriers. Additionally, agencies should identify trends to effectively allocate resources, set goals for staff, and ensure needs are addressed. Emphasize staff training. Agencies should ensure FOIA staff are properly trained so they can process requests more effectively and with more autonomy. Training and engagement of staff can also solidify the importance of the FOIA office’s mission. Obtain leadership support. Agencies should ensure that senior management is involved in and supports the FOIA function in order to increase awareness and accountability, as well as make it easier to obtain necessary resources or personnel. Agencies identified a variety of methods that they used to address their backlogs. These included both the practices identified by Justice and additional methods. Ten agencies maintained relatively small backlogs of 60 or fewer requests and were thus not required to develop plans for reducing backlogs.
However, 2 of these 10 agencies, both of which received significant numbers of requests, described various methods used to maintain a small backlog: PBGC officials credit the agency’s success to training, not only for FOIA staff but for all incoming personnel, while also rewarding staff for going above and beyond in facilitating FOIA processing. PBGC has incorporated all the best practices identified by OIP, including senior leadership involvement that supports FOIA initiatives and program goals, routine review of metrics to optimize workflows, effective utilization of resources, and staff training. According to FDIC officials, the agency’s overall low backlog numbers are attributable to a trained and experienced FOIA staff, senior management involvement, and coordination among FDIC divisions. However, FDIC stated that the increase in its backlog in fiscal year 2015 was due to the increased complexity of requests. The 4 agencies with backlogs greater than 60 but fewer than 1,000 (EEOC, DOI, NTSB, and USAID) reported using various methods to reduce their backlogs. However, all 4 showed an increase over the 5-year period. EEOC officials stated that the agency had adopted practices recommended by OIP, such as multi-track processing, reviewing workloads to ensure sufficient staff, and using temporary assignments to address needs. However, the agency has seen a large increase in its backlog, going from 131 requests in fiscal year 2012 to 792 in fiscal year 2016. EEOC attributed the rise to an increase in requests received, loss of staff, and the complex and voluminous nature of requests. DOI, according to agency officials, has also tried to incorporate reduction methods and best practices, including proactively releasing information that may be of interest to the public, thus avoiding the need for a FOIA request; enhanced training for its new online FOIA tracking and processing system; improved inter-office collaboration; monthly reports on backlogs and weekly charts on incoming requests to heighten awareness among leadership; and monitoring trends. Yet DOI has seen an increase in its backlog, from 449 requests in fiscal year 2012 to 677 in fiscal year 2016, an increase of 51 percent. DOI attributed the increase to the loss of FOIA personnel, an increase in the complexity of requests, an increase in FOIA-related litigation, an increase in incoming requests, and staff having additional duties. Officials at NTSB stated that the agency utilized contractors and temporary staff assignments to augment staffing and address backlogs. Despite these efforts, NTSB saw a large increase in its backlog, from 62 requests in fiscal year 2012 to 602 in fiscal year 2016. Officials stated that the reason for the increase was the growing complexity of requests, including requests for “any and all” documentation related to a specific subject, often involving hundreds to thousands of pages per request. According to USAID officials, the agency conducts and reviews inventories of its backlog and requests to remove duplicates and closed cases, group and classify requests by necessary actions and responsive offices, and initiate immediate action. In addition, USAID seeks to identify tools and solutions to streamline records for review and processing. However, its backlog has continually increased, from 201 requests in fiscal year 2012 to 318 in fiscal year 2016.
USAID attributed that growth to an increase in the number of requests, loss of FOIA staff, increased complexity and volume of requests, competing priorities, and world events that may drive surges in requests. Of the four agencies with the largest backlogs, all reported taking steps that in some cases included best practices identified by OIP; however, only NARA successfully reduced its backlog by the end of the 5-year period. Justice noted that it made efforts to reduce its backlog by incorporating best practices. Specifically, OIP worked with components within Justice through the Component Improvement Initiative to identify causes contributing to a backlog and assist components in finding efficiencies and overcoming challenges. The Chief FOIA Officer continued to provide top-level support to reduction efforts by convening the department’s FOIA Council to manage overall FOIA administration. In addition, many of the components created their own reduction plans, which included hiring staff, utilizing technology, and providing more training, requester outreach, and multitrack processing. However, despite these efforts, Justice’s backlog steadily increased during the 5-year period, from 5,196 requests in fiscal year 2012 to 10,644 in fiscal year 2016, an overall increase of 105 percent. Justice attributed the increase to several challenges, including an increase in incoming requests and an increase in the complexity of those requests. Other challenges that Justice noted were staff shortages and turnover, reorganization of personnel, time to train incoming staff, and the ability to fill positions previously held by highly qualified professionals. NARA officials stated that one key step NARA took was to make corrections in its Performance Measurement and Reporting System. They noted that this system previously comingled backlogged requests with the number of pending FOIA requests, skewing the backlog numbers higher. The improvements included better accounting for pending and backlogged cases; distinguishing between simple and complex requests; and no longer counting as open those cases that were closed within 20 days but whose closure fell at the beginning of the following fiscal year. In addition, officials also stated that the FOIA program offices have been successful at working with requesters to narrow the scope of requests. NARA also stated that it was conducting an analysis of FOIA across the agency to identify any barriers in the process. Officials also identified other methods, including using multi-track processing, shifting priorities to address needs, improved communication with agencies, proactive disclosures, and the use of mediation services. NARA has shown significant progress in reducing its backlog. In fiscal year 2012, it had a backlog of 7,610 requests, which spiked to 9,361 in fiscal year 2014. By fiscal year 2016, however, the number of backlogged requests had dropped to 2,932, even though the number of requests received that fiscal year more than doubled. NARA did note challenges to reducing its backlog, namely, the increase in the number of requests received. State developed and implemented a plan to reduce its backlog in fiscal year 2016. The plan incorporated two best practices by focusing on identifying the extent of the backlog problem and developing ways to address the backlog with available resources. According to State officials, effort was dedicated to improving how FOIA data were organized and reported.
Expedited and litigation cases were top priorities, whereas in other cases a first-in, first-out method was employed. Even with these efforts, however, State experienced a 117 percent increase in its backlog over the 5-year period. State’s backlog more than doubled from 10,045 requests in fiscal year 2014 to 22,664 in fiscal year 2016. Among the challenges to managing its backlog, State reported an increase in incoming requests, a high number of litigation cases, and competing priorities. Specifically, the number of incoming requests for State increased by 51 percent during the 5-year period. State has also reported that it has allocated 80 percent of its FOIA resources to meet court-ordered productions associated with litigation cases, resulting in fewer staff to work on processing routine requests. This included, among other efforts, a significant allocation of resources in fiscal year 2015 to meet court-imposed deadlines to process emails associated with the former Secretary of State, resulting in a surge in the backlog. In 2017, State began an initiative to actively address its backlog. The Secretary of State issued an agency-wide memorandum announcing the department’s renewed efforts by committing more resources and workforce to backlog reduction. The memorandum states that new processes are to be implemented for both the short and long term, and the FOIA office has plans to work with the various bureaus to outline the tasks, resources, and workforce necessary to ensure success and compliance. With renewed leadership support, State has reported significant progress in its backlog reduction efforts. DHS, in its chief FOIA officer reports, reported that it implemented several plans to reduce backlogs. The DHS Privacy Office, which is responsible for oversight of the department’s FOIA program, worked with components to help eliminate the backlog. The Privacy Office sent monthly emails to component FOIA officers on FOIA backlog statistics, convened management meetings, conducted oversight, and reviewed workloads. Leadership met weekly to discuss the oldest pending requests, appeals, and consultations, and determined needed steps to process those requests. In addition, several other DHS components implemented actions to reduce backlogs. Customs and Border Protection hired and trained additional staff, encouraged requesters to file requests online, established productivity goals, updated guidance, and utilized better technology. U.S. Citizenship and Immigration Services, the National Protection and Programs Directorate, and Immigration and Customs Enforcement increased staffing or developed methods to better forecast future workloads and ensure adequate staffing. Immigration and Customs Enforcement also implemented a commercial off-the-shelf web application, awarded a multi-million dollar contract for backlog reduction, and detailed employees from various other offices to assist in the backlog reduction effort. Due to efforts by the Privacy Office and other components, the backlog dropped 66 percent in fiscal year 2015, decreasing to 35,374 requests. Yet, despite continued efforts in fiscal year 2016, the backlog increased again, to 46,788. DHS attributed the increases to several factors, including an increase in the number of requests received, increased complexity and volume of responsive records for those requests, loss of staff, and active litigation with demanding production schedules.
One reason the eight agencies with significant backlogs may be struggling to consistently reduce their backlogs is that they lack documented, comprehensive plans that would provide a more reliable, sustainable approach to addressing backlogs. In particular, they do not have documented plans that describe how they will implement best practices for reducing backlogs over time, including specifying how they will use metrics to assess the effectiveness of their backlog reduction efforts and ensure that senior leadership supports backlog reduction efforts, among other best practices identified by OIP. While agencies with backlogs of 1,000 or more are required to describe backlog reduction efforts in their chief FOIA officer reports, these reports consist of a high-level narrative and do not include a specific discussion of how the agencies will implement best practices over time to reduce their backlogs. In addition, agencies with backlogs of fewer than 1,000 requests are not required to report on backlog reduction efforts; however, the selected agencies in our review with backlogs in the hundreds still experienced an increase over the 5-year period. Without a more consistent approach, agencies will continue to struggle to reduce their backlogs to a manageable level, particularly as the number and complexity of requests increase over time. As a result, their FOIA processing may not respond effectively to the needs of requesters and the public. Various Types of Statutory Exemptions Exist and Many Have Been Used by Agencies FOIA requires agencies to report annually to Justice on their use of statutory (b)(3) exemptions. This includes specifying which statutes they relied on to exempt information from disclosure and the number of times they did so. To assist agencies in asserting and accounting for their use of these statutes, Justice instructs agencies to consult a running list of all the statutes that have been found by the courts to qualify as proper (b)(3) statutes. However, agencies may also use a statute not included in the Justice list, because many statutes that appear to meet the requirements of (b)(3) have not been identified by a court as qualifying statutes. If an agency uses a (b)(3) statute that is not identified in the qualifying list, Justice guidance instructs the agency to include information about that statute in its annual report submission. Justice reviews the statute and provides advice to the agency, but does not make a determination on the appropriateness of using that statute under the (b)(3) exemption. Based on data agencies reported to Justice, during fiscal years 2010 to 2016, agencies claimed 237 statutes as the basis for withholding information. Of these statutes, 75 were included on Justice’s list of qualifying statutes under the (b)(3) exemption. Further, we identified 140 additional statutes that were not among the 237 statutes claimed by agencies during fiscal years 2010 to 2016 but that have provisions, similar to those of other (b)(3) statutes, authorizing an agency to withhold information from the public. We found that the 237 statutes cited as the basis for (b)(3) exemptions during the period from fiscal year 2010 to 2016 fell into eight general categories of information. These categories were (1) personally identifying information, (2) national security, (3) commercial, (4) law enforcement and investigations, (5) internal agency, (6) financial regulation, (7) international affairs, and (8) environmental.
Figure 6 identifies the eight categories and the number of agency-claimed (b)(3) statutes in each of the categories. Of the 237 (b)(3) statutes cited by agencies, the majority—178—fell into four of the eight categories: Forty-nine of these statutes related to withholding personally identifiable information, including, for example, a statute related to withholding death certificate information provided to the Social Security Administration. Forty-five statutes related to the national security category. For example, one statute exempted files of foreign intelligence or counterintelligence operations of the National Security Agency. Forty-two statutes were in the law enforcement and investigations category, including a statute that exempts from disclosure information provided to Justice pursuant to civil investigative demands pertaining to antitrust investigations. Forty-two statutes fell into the commercial category. For example, one statute in this category related to withholding trade secrets and other confidential information related to consumer product safety. The remaining 59 statutes were in four categories: internal agency functions and practices, financial regulation, international affairs, and environmental. The environmental category contained the fewest statutes and included, for example, a statute related to withholding certain air pollution analysis information. As required by FOIA, agencies also reported the number of times they used each (b)(3) statute. In this regard, 33 FOIA-reporting agencies indicated that they had used 10 of the 237 (b)(3) statutes more than 200,000 times. Of these 10 most-commonly used statutes, the single most-used statute (8 U.S.C. § 1202(f)) related to withholding records pertaining to the issuance or refusal of visas to enter the United States. It was used by 4 agencies over 58,000 times. Further, of the 10 most-commonly used statutes, the statute used by the greatest number of agencies (26 U.S.C. § 6103) related to the withholding of certain tax return information; it was used by 24 FOIA-reporting agencies about 30,000 times. By contrast, some statutes were used by only a single agency. Specifically, the Department of Veterans Affairs used a statute related to withholding certain confidential veteran medical records (38 U.S.C. § 7332) more than 16,000 times. Similarly, EEOC used a statute related to employment discrimination on the basis of disability (42 U.S.C. § 12117) more than 10,000 times. Table 4 shows the 10 most-used statutes under the (b)(3) exemption, the agency that used each one most frequently, and the number of times each was used by that agency for the period covering fiscal years 2010 through 2016. Most Statutes Enacted after 2009 That Were Used by Agencies Did Not Specifically Cite the (b)(3) Exemption The OPEN FOIA Act of 2009 amended FOIA to require that any federal statute enacted subsequently must specifically cite paragraph (b)(3) of FOIA to qualify as a (b)(3) exemption statute. Prior to 2009, a federal statute qualified as a statutory (b)(3) exemption if it (1) required that the matters be withheld from the public in such a manner as to leave no discretion on the issue, or (2) established particular criteria for withholding or referred to particular types of matters to be withheld. In response to the amendment, in 2010, Justice released guidance to agencies stating that any statute enacted after 2009 must specifically cite the (b)(3) exemption to qualify as a withholding statute.
Further, the guidance encouraged agencies to contact Justice with questions regarding the implementation of the amendment. Even with this guidance, we found that a majority of agency-claimed statutes during fiscal years 2010 through 2016 did not specifically cite the (b)(3) exemption. Specifically, of the 237 (b)(3) statutes claimed by agencies, 103 were enacted or amended after 2009 and, thus, were subject to the requirement of the OPEN FOIA Act. Of those 103 statutes, 86 lacked the required statutory text citing exemption (b)(3) of FOIA. Figure 7 shows the number of agency-claimed statutes subject to the OPEN FOIA Act of 2009 requirement that did not cite the (b)(3) exemption. Agencies are using these statutes as the basis for withholding information when responding to FOIA requests, even though the statutes do not reference the (b)(3) exemption as required by the 2009 FOIA amendments. Federal Court Decisions Have Not Required the Office of Special Counsel to Initiate Disciplinary Actions for the Improper Withholding of Records In our report, being issued today, we found that, according to the available information and Justice and OSC officials, since fiscal year 2008, no court orders have been issued that have required OSC to initiate a proceeding to determine whether disciplinary action should be taken against agency FOIA personnel. Specifically, officials in Justice’s Office of Information Policy stated that there have been no lawsuits filed by a FOIA requester that have led the courts to take all three requisite actions needed for Justice to refer a court case to OSC. Justice’s litigation and compliance reports identified six court cases (between calendar years 2013 and 2016) in which the requesters sought a referral from the courts in an attempt to have OSC initiate an investigation. However, in all six cases, the courts denied those requests because the cases did not result in the courts taking the three actions necessary to involve OSC. Thus, given these circumstances, Justice has not referred any court orders to OSC to initiate a proceeding to determine whether disciplinary action should be taken against agency FOIA personnel. For its part, OSC officials confirmed that the office has neither received, nor acted on, any such referrals from Justice. As such, OSC has not had cause to initiate disciplinary actions for the improper withholding of FOIA records. In summary, the 18 agencies we selected for review fully implemented half of the six FOIA requirements reviewed, and the vast majority of agencies implemented two additional requirements. However, only 5 agencies published and updated their FOIA regulations in a timely and comprehensive manner. Fully implementing FOIA requirements will better position agencies to provide the public with necessary access to government records and ensure openness in government. The selected agencies in our review varied considerably in the size of their backlogs. While 10 reported a backlog of 60 or fewer requests, 4 had backlogs of over 1,000 per year. Agencies identified a variety of methods that they used to address their backlogs, including practices identified by Justice, as well as additional methods. However, the selected agencies varied in the success achieved in reducing their backlogs. This was due, in part, to a lack of plans that describe how the agencies will implement best practices for reducing backlogs over time.
Until agencies develop plans to reduce backlogs, they will be limited in their ability to respond effectively to the needs of requesters and the public. Accordingly, our draft report contains 23 planned recommendations to selected agencies. These recommendations address posting records online, designating chief FOIA officers, updating regulations consistent with requirements, and developing plans to reduce backlogs. Implementation of our recommendations should better position these agencies to address FOIA requirements and ensure the public is provided with access to government information. Chairman Grassley, Ranking Member Feinstein, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Anjalique Lawrence (assistant director), Lori Martinez (analyst in charge), Gerard Aflague, David Blanding, Christopher Businsky, Rebecca Eyler, James Andrew Howard, Carlo Mozo, David Plocher, and Sukhjoot Singh. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study FOIA requires federal agencies to provide the public with access to government records and information based on the principles of openness and accountability in government. Each year, individuals and entities file hundreds of thousands of FOIA requests for information on numerous topics that contribute to the understanding of government actions. In the last 9 fiscal years, federal agencies subject to FOIA have received about 6 million requests. GAO was asked to summarize its draft report on federal agencies' compliance with FOIA requirements. GAO's objectives, among others, were to (1) determine the extent to which agencies have implemented selected FOIA requirements; (2) describe the methods established by agencies to reduce backlogged requests and the effectiveness of those methods; and (3) identify any statutory exemptions that have been used by agencies as the basis for withholding (redacting) information from requesters. To do so, GAO selected 18 agencies based on their size and other factors and assessed their policies against six FOIA requirements. GAO also reviewed the agencies' backlog reduction plans and developed a catalog of statutes that agencies have used to withhold information. What GAO Found In its draft report, GAO determined that all 18 selected agencies had implemented three of six Freedom of Information Act (FOIA) requirements reviewed. Specifically, all agencies had updated response letters to inform requesters of the right to seek assistance from FOIA public liaisons, implemented request tracking systems, and provided training to FOIA personnel. For the three additional requirements, 15 agencies had provided online access to government information, such as frequently requested records, 12 agencies had designated chief FOIA officers, and 5 agencies had published and updated their FOIA regulations to inform the public of their operations. Until these agencies address all of the requirements, they increase the risk that the public will lack information that ensures transparency and accountability in government operations. The 18 selected agencies had backlogs of varying sizes, with 4 agencies having backlogs of 1,000 or more requests during fiscal years 2012 through 2016. These 4 agencies reported using best practices identified by the Department of Justice, such as routinely reviewing metrics, as well as other methods, to help reduce their backlogs. Nevertheless, these agencies' backlogs fluctuated over the 5-year period (see figure). The 4 agencies with the largest backlogs attributed challenges in reducing their backlogs to factors such as increases in the number and complexity of FOIA requests. However, these agencies lacked plans that described how they intend to implement best practices to reduce backlogs. Until agencies develop such plans, they will likely continue to struggle to reduce backlogs to a manageable level. Agencies used various types of statutory exemptions to withhold information when processing FOIA requests during fiscal years 2010 to 2016. The majority of these fell into the following categories: personally identifiable information, national security, law enforcement and investigations, and confidential and commercial business information. What GAO Recommends GAO's draft report contains recommendations to selected agencies to post records online, designate chief FOIA officers, update regulations consistent with requirements, and develop plans to reduce backlogs.
Background Long Island Sound is an estuary, a body of water where fresh water from rivers draining from the land mixes with salt water from the ocean, in this case the Atlantic Ocean. The Sound is 113 miles long and 21 miles across at its widest point, with an average depth of 63 feet and a deepest point of 320 feet. The Sound’s coastline is 583 miles and includes more than 60 bays, with beaches and harbors where people interact most frequently with the Sound. As shown in figure 1, the Sound is bordered by Connecticut to the north and New York to the south and west, and its watershed includes parts of Massachusetts, New Hampshire, Rhode Island, and Vermont. Nearly all of Connecticut’s waters drain into the Sound, as do waters from the northern portion of Long Island and the New York City metropolitan area. New York City is the most populous city in the United States. In 1985, congressional committees directed EPA to work with states to research, monitor, and assess estuaries, including the Sound. Around the same time, Connecticut, New York, and EPA raised concerns about pollution in the Sound due to the presence of a large population living near it, as well as 44 wastewater treatment plants and other industries that discharged into the Sound. They also raised concerns about pollution coming from sources that were not easily identified, such as runoff from land surrounding the Sound. To restore the health of the Sound, EPA partnered with the two states in 1985 to form the Long Island Sound Study, a partnership consisting of federal and state agencies, nonprofit and public organizations, and individuals dedicated to restoring and protecting the Sound. The Study has several committees and work groups that help to develop and implement the comprehensive conservation and management plan for the Sound. These groups include the Science and Technical Advisory Committee and the Citizens Advisory Committee, as well as the Water Quality Monitoring Work Group and the Habitat Restoration and Stewardship Work Group, which are responsible for facilitating improved collection, coordination, management, and interpretation of water quality data, and for promoting restoration of the Sound through an improved understanding of current threats. In 1987, the National Estuary Program was established under amendments to the Clean Water Act; the act further required EPA to give priority consideration to Long Island Sound, among others. According to EPA, the National Estuary Program is a community-based program designed to restore and maintain the ecological integrity of estuaries of national significance. One year after the program was established, EPA designated the Sound as such an estuary. Under the program, each estuary of national significance has a management conference that is required to develop a comprehensive conservation and management plan to restore and maintain the chemical, physical, and biological integrity of the estuary, including water quality, among other things. In 1990, the Long Island Sound Improvement Act required EPA to establish the Office of the Management Conference of the Long Island Sound Study, to be directed by an EPA official and to assist the Long Island Sound Study in carrying out its goals. The act required the Long Island Sound Study Office, as directed by EPA, to provide administrative and technical support to the management conference, or the Study.
The act also required the Long Island Sound Study Office to report biennially on progress made in implementing the comprehensive conservation and management plan, starting no more than 2 years after issuing the final plan. The Study, assisted by the Office, developed two reports—the Protection and Progress report and the Sound Health report—to show progress toward the 1994 plan and issued the reports about every 2 years from 2001 through 2013. According to the Study, the purpose of the Protection and Progress report was to highlight regional efforts to restore and protect Long Island Sound, and the purpose of the Sound Health report was to provide a snapshot of the environmental health of Long Island Sound. In addition, the Study collects, tracks, and publishes information about environmental indicators on its website periodically, and has produced reports that summarized work done to carry out the 1994 plan. In its 1994 plan, the Study identified six priority problems and created associated goals (see table 1). In the 1994 plan, the Study identified hypoxia as the major water quality problem in the Sound, defining hypoxia as dissolved oxygen concentrations of less than 3 milligrams of oxygen per liter of water and noting that levels less than that are inadequate to support healthy populations of estuarine organisms. The Study noted that hypoxia caused significant, adverse ecological effects in the bottom water habitats of the Sound, such as reducing the abundance and diversity of adult fish and possibly reducing other species’ resistance to disease. According to the National Oceanic and Atmospheric Administration, the most common cause of hypoxia is nutrient pollution, specifically discharges of nitrogen and phosphorus. As shown in figure 2, sources of nutrient pollution include wastewater discharged from wastewater treatment plant pipes and runoff from agricultural fields, stormwater, and groundwater. Excess nutrients can cause algae—which occur naturally in oceans, lakes, rivers, and other water bodies—to rapidly multiply, resulting in algal blooms that can discolor the water or accumulate as thick scums and mats. When the algae die, they sink and decompose, and this decomposition consumes oxygen that is dissolved in the water and used by fish and shellfish to live. Reduced oxygen levels, in turn, can lead to increased mortality for fish, shellfish, and other aquatic populations, or can drive some species to relocate to more oxygenated waters. Water in estuaries is naturally stratified, with less dense, warmer fresh water generally staying on top and denser, cooler salt water on the bottom. In 2000, Connecticut and New York developed a total maximum daily load (TMDL) to achieve water quality standards for dissolved oxygen in Long Island Sound. In the TMDL, the states described efforts to manage hypoxia, identified nitrogen as the key contributor to hypoxia, and identified the sources and amounts of nitrogen contributed to the Sound. These include wastewater treatment plants in Connecticut and New York; combined sewer overflows (CSOs); nonpoint source pollution, or runoff from sources such as residences and farms that includes stormwater and groundwater; and atmospheric deposition. The TMDL set a 15-year nitrogen reduction goal for Connecticut and New York, covering both point and nonpoint sources of nitrogen, to be achieved by August 2014. The TMDL also calls for implementing management actions for nitrogen entering the Sound from other states where feasible.
In the TMDL, Connecticut and New York identified the need for an adaptive management approach because meeting the nitrogen reduction goal would require reductions beyond the limits of the technology available at the time. The states also agreed to reassess the nitrogen reduction goals and revise the TMDL as necessary. Although a Comprehensive Assessment of Progress Has Not Been Conducted, Study Members Believe Moderate Progress Has Been Made Since 1994 The Study Collected a Wide Range of Data and Issued Progress Reports, but Did Not Conduct a Comprehensive Assessment of Progress Toward Achieving the 1994 Plan Although the Study has collected a wide range of data to measure the health of Long Island Sound and has issued periodic progress reports since 2001, these progress reports have not contained a comprehensive assessment of progress toward the goals of the 1994 plan. In the absence of a comprehensive assessment of progress, Study members we interviewed said that they believe that moderate progress has been made toward goals associated with five of the six priority problems identified in the 1994 plan. The Study has collected a wide range of data used to measure the health of Long Island Sound. According to a Study member, the Study began identifying and collecting these data in 1998 with the purpose of evaluating progress toward achieving the goals of the 1994 plan. The data were gathered by federal and state agencies and universities, and were provided to the Study, which published the data on its website. As of November 2017, the data on the website were organized into groups of environmental indicators including water quality, marine and coastal animals, land use and population, and habitats. We found that many of the indicators and their data could be linked to goals associated with the six priority problems in the 1994 plan. Examples of these indicators and the related data and associated goals are shown in table 2. As required by the Long Island Sound Improvement Act, since 2001, the Study has issued periodic progress reports—five Protection and Progress reports and six Sound Health reports, available on the Study’s website—that have focused on specific examples of the restoration effort. The most recent of these reports were organized into sections that can be linked to the priority problems identified in the 1994 plan. For example, the most recent Protection and Progress report, issued in 2013, included sections on water quality and habitat restoration efforts that can be linked to the priority problems “hypoxia” and “management and conservation of living resources and their habitats.” The most recent progress reports also included examples of progress using indicator data that we could link to some of the goals and priority problems in the 1994 plan, such as the following: Both reports included examples of progress that could be linked with the priority problem “hypoxia.” The Protection and Progress report identified pounds of nitrogen discharged into the Sound from 2001 through 2012 and provided data showing reduced nitrogen discharges over time, which the Study stated it expected to result in decreased hypoxic areas and increased dissolved oxygen. The Sound Health report identified both the area, in square miles, and duration, in days, of hypoxia in the Sound from 1987 through 2012.
The Protection and Progress report included examples of progress that could be linked to the goal to increase the abundance and distribution of harvestable species, which is associated with the priority problem “management and conservation of living resources and their habitats.” For example, the Protection and Progress report included examples of progress in the number of river miles restored from 1998 through 2012 as well as the number of fish returning to the rivers. The Sound Health report included examples of progress that could be linked to both goals associated with the priority problem “pathogen contamination.” These goals were to (1) increase the amount of area certified or approved for shellfish harvesting while adequately protecting the public health and (2) eliminate public bathing beach closures while adequately protecting the public health. The Sound Health report identified the number of beach closure and advisory days from 1993 through 2011 and the number of acres approved for shellfish harvesting from 2005 through 2011. However, the Study’s progress reports did not contain a comprehensive assessment of the progress toward the goals of the 1994 plan. Specifically, the progress reports included examples of progress using indicator data, but they did not include a comparison of that progress against a specific amount to be achieved—a numerical goal. For example, the Protection and Progress report included an example of progress on pathogen contamination, but the report did not include a comparison of the data on acres of shellfish harvesting areas against a numerical goal for the number of acres approved for shellfish harvesting. In addition, the Sound Health report included examples of progress on toxic substances, but the report did not include a comparison of the reduction of toxics discharged into the Sound against a numerical goal for the reduction of toxic inputs. As we have previously reported, having a numerical goal permits expected performance to be compared with actual results. Part of the challenge for the Study in conducting such an assessment arises from the fact that only one of the goals in the 1994 plan included numerical targets against which the Study could compare progress. According to a Study member, because the rest of the goals were not numerical, a comprehensive assessment of progress toward achieving the 1994 plan was not conducted. Although such an assessment was not conducted, the Study has made available a comprehensive assessment of available science and data about the environmental dynamics of the Sound in the 2014 publication Long Island Sound: Prospects for the Urban Sea. The book—written by scientists from federal and state agencies and universities—includes sections on the geology and chemistry of the Sound; development patterns in the area surrounding the Sound; metals, contaminants, and nutrients discharged to the Sound; and management options for the Sound. Prospects for the Urban Sea identified science gaps and research needs and made several recommendations, including better characterizing the relationship between smaller bays and inlets and the Sound, integrating climate change across programs, prioritizing management of existing pollution sources and impairments, and improving data management and interpretation. According to Study members, the book served as a reference for scientists conducting research in Long Island Sound and as the basis for the 2015 plan.
Study Members Believe Moderate Progress Has Been Made Toward Goals Associated with Five Priority Problems, but Not Toward the Goal Associated with Hypoxia In the absence of a comprehensive assessment of progress, we asked Study members for their views regarding progress made since 1994. Nearly all of the Study members we interviewed who provided a response about progress made toward the goals of the 1994 plan agreed that the restoration effort has made moderate progress, and they cited various data to support their views. Specifically, Study members believed that moderate progress has been made toward achieving goals for five of the six priority problems: (1) toxic substances, (2) pathogen contamination, (3) floatable debris, (4) management and conservation of living resources and their habitats, and (5) land use and development. However, Study members agreed that they have not made similar progress toward the goal associated with the priority problem hypoxia because they had not observed the reductions in hypoxia that they expected, although representatives from the New York State Department of Environmental Conservation said that the defined hypoxia goals have been met. Table 3 shows the number of Study members we interviewed who said moderate progress has been made toward goals associated with five of the priority problems in the 1994 plan and the number of Study members who provided views about progress. Although the Study members we interviewed cited various data to support their views, without a comprehensive assessment of those data it is not possible to definitively determine to what extent their assessment of progress reflects actual progress made. The following summarizes Study members’ views about all six of the priority problems and the data they cited. Toxic Substances The goal in the 1994 plan associated with the priority problem “toxic substances” was to protect and restore the Sound from the adverse effects of toxic substance contamination by reducing toxic inputs, cleaning up contaminated sites, and effectively managing risk to human users. Toxic substances include metals, such as mercury and lead, and chlorinated hydrocarbons, such as the pesticide dichlorodiphenyltrichloroethane, commonly known as DDT. These substances were released from industrial and wastewater treatment plants into the air and into rivers and streams that flow to the Sound. The Study reported in a 2012 progress report that bans of toxic substances, stricter regulation of industrial facilities, and a decline in manufacturing contributed to the reduction of toxic substances. All nine Study members who provided a response about progress toward this goal said that moderate progress has been made. As evidence that moderate progress has been made, Study members cited data from EPA’s Toxics Release Inventory. For example, two Study members said that the EPA data showed that toxic releases into the Long Island Sound watershed have been reduced. In addition, two Study members raised concerns about new toxic substances identified in the Sound. Specifically, they said that monitoring and research are needed to understand how toxic substances found in pharmaceuticals and personal care products may affect the Sound. One program that monitors toxic substances in the Sound is the Mussel Watch program, run by the National Oceanic and Atmospheric Administration’s National Centers for Coastal Ocean Science.
The program examines tissues of shellfish, such as oysters, to measure previously unknown or unidentified toxic substances that may negatively affect the Sound or human health. The research includes monitoring of substances found in everyday products including pharmaceuticals, personal care products, furniture, and plastics. Pathogen Contamination The two goals in the 1994 plan associated with the priority problem “pathogen contamination” were (1) to increase the amount of area certified or approved for shellfish harvesting while adequately protecting the public health and (2) to eliminate public bathing beach closures while adequately protecting the public health. Pathogens include bacteria or viruses from animal waste or inadequately treated sewage discharge that can accumulate in shellfish. Human consumption of contaminated shellfish can lead to illness and disease. Nine of the 10 Study members who provided a response about progress toward these goals said that moderate progress has been made. As evidence that moderate progress has been made, some Study members cited data on the number of acres approved for shellfish harvesting and on the number of beach closures and advisory days. For example, according to one Study member, since 2010 there has been an increase in the number of acres certified for shellfishing in New York’s portion of Long Island Sound. Seven of the nine Study members who said that moderate progress has been made toward this priority problem also said that improvements in wastewater treatment plants and regulation of sewage discharge from boats have reduced the amount of pathogens in the Sound by reducing the amount of waste discharged into it. Several of the Study members said that these improvements have included municipalities investing in wastewater treatment plant upgrades to address combined sewer overflow (CSO) pollution. For example, New York City officials said that the city spent $2.5 billion on infrastructure projects, such as improvements in wastewater treatment plants and CSO retention tanks. As a result, the officials said that New York City’s wastewater treatment plants can manage more stormwater, leading to fewer CSOs and reduced pathogen discharges overall. Floatable Debris The two goals in the 1994 plan associated with the priority problem “floatable debris” were (1) to reduce the flow of litter from its major sources and (2) to collect and remove it once it is in the Sound. Floatable debris in the Sound mostly consists of plastic bags, plastic bottles, and food wrappers. This debris is washed into the Sound through stormwater and CSOs. In the 1994 plan, the Study proposed actions to reduce the flow of floatable debris into the Sound in two ways: engaging volunteers in cleanup efforts and collecting debris from combined sewers before it enters the Sound. Nine of the 10 Study members who provided a response about progress toward these goals said that moderate progress has been made. Three Study members said that recycling or public outreach programs may have contributed to progress made, in part by increasing public awareness of the problem. As evidence that moderate progress has been made, Study members cited data from coastal cleanups and from New York City’s boom and skim program.
For example, one Study member said that beach cleanup data show a reduction in the amount of debris collected, and another Study member stated that New York City has installed screens at some CSO outflows to capture debris in runoff released to the waters of Long Island Sound. Management and Conservation of Living Resources and Their Habitats The three goals in the 1994 plan associated with the priority problem “management and conservation of living resources and their habitats” were to (1) assure a healthy ecosystem with balanced and diverse populations of indigenous plants and animals, (2) increase the abundance and distribution of harvestable species, and (3) assure that edible species are suitable for unrestricted human consumption. In the 1994 plan, the Study reported that it would focus on managing water quality, habitats, and species to address these goals. In particular, the Study reported in the 1994 plan that the destruction of coastal habitats has had a major impact on the diversity and abundance of plants and animals in and along the Sound. Eleven of the 12 Study members who provided a response about progress toward these goals said that moderate progress has been made. As evidence that moderate progress has been made, Study members cited data on several indicators, including acres of coastal habitat and acres of eelgrass restored, marine mammal sightings, and the number of nesting pairs of coastal birds. For example, one Study member cited an increase in the abundance of eelgrass beds as support for moderate progress in restoring that type of habitat. Two other Study members cited increased sightings of dolphins and whales in the Sound as an indicator of improved habitat. Land Use and Development The five goals in the 1994 plan associated with the priority problem “land use and development” were to (1) reduce the impacts from existing development to improve water quality, (2) minimize the impacts from new development to prevent further degradation of water quality, (3) expand information, training, and education for land use decisions to effectively incorporate water quality and habitat protection, (4) conserve natural resources and open space, and (5) improve public access so that the public can use and enjoy Long Island Sound. According to EPA, impervious cover—land cover that does not allow water to infiltrate into the ground—increases the amount of stormwater that runs off into streams, rivers, and other water bodies. Stormwater runoff can carry pollutants such as pathogens, toxic substances, and nutrients to storm drains, rivers, and streams that flow into the Sound. According to the 1994 plan, one way to reduce impervious cover and control stormwater runoff is through the use of green infrastructure. Green infrastructure includes practices and structures to manage stormwater that use or mimic natural processes to slow stormwater runoff, filter pollutants from the runoff, and facilitate stormwater storage for future use or to replenish groundwater. An example of a green infrastructure project implemented around the Sound is a bioswale, a vegetated area adjacent to a road that is designed to collect and filter stormwater, improving water quality by allowing the water to seep into the soil. Figure 3 shows a bioswale developed for use in New Haven, Connecticut, as part of a Long Island Sound restoration project. Eleven of the 12 Study members who provided a response about progress toward these goals said that moderate progress has been made.
As evidence, Study members cited data on changes in impervious cover. Study members also cited data on open space acquisitions as showing progress toward the goals related to this problem. According to Study members, one way that the Study protected open space was by identifying locations around the Sound that should be acquired and protected from development. Specifically, in 2006, the Study designated 33 locations, called Stewardship Areas, to protect habitat and wildlife from encroaching development. Stewardship Areas are locations within the Long Island Sound region that have significant ecological, educational, open space, public access, or recreational value and are protected from development. Figure 4 shows the locations of the 33 Stewardship Areas in the Long Island Sound region. Hypoxia The goal in the 1994 plan associated with the priority problem “hypoxia” was to increase dissolved oxygen levels in the Sound to eliminate adverse impacts of hypoxia resulting from human activities. All 11 of the Study members who provided a response about progress toward this goal agreed that nitrogen has been reduced in the Sound since the 1994 plan, while 4 said that they have not observed the expected reduction in hypoxia. According to the 1994 plan, Study members based their expectation on a water quality model they used at the time. As evidence for nitrogen reduction in the Sound, Study members said that both Connecticut and New York met their 15-year TMDL wasteload allocation target to reduce nitrogen discharged into the Sound by 58.5 percent. To achieve their nitrogen targets, the Study reported that the states upgraded wastewater treatment plants. For example, communities in both states upgraded their plants with biological nutrient removal, a process in which bacteria break down and remove the reactive nitrogen found in human waste. According to EPA officials, recovery from hypoxia in coastal waters will not be rapid or predictable, and evidence shows that dissolved oxygen levels in the Sound are recovering because of nitrogen reductions. According to Study members, hypoxia is a complex phenomenon affected by a number of factors that help to explain characteristics of hypoxia in the Sound. For example, three Study members said that an increase in water temperature can exacerbate hypoxia because warmer water holds less oxygen than cold water. In addition, in summer months the combination of temperature and salinity contributes to the isolation of the bottom layer of water from the usually well-oxygenated surface layer. Two Study members said that another factor that affects hypoxia is precipitation. For example, heavy rainfall could increase the amount of stormwater runoff that carries nutrients, such as nitrogen, into the Sound, which could lead to an increase in algal blooms and hypoxia. According to the 2012 Sound Health report, in 2012, Hurricane Sandy’s storm surge overwhelmed many wastewater treatment plants, and stormwater runoff entered the Sound. In addition, four Study members said that there may be a lag between a reduction in nitrogen and a reduction in levels of hypoxia. Several Study members said that the water quality model they used in 1994 to predict the relationship between hypoxia and nitrogen may have incorrectly predicted the effect of reducing nitrogen on hypoxia or could be improved to better show the relationship between the two.
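The temperature effect the Study members described reflects a basic physical relationship: oxygen solubility falls as water warms. The minimal sketch below illustrates the size of that effect using approximate freshwater saturation values rounded from standard solubility tables; the rounded values and the interpolation helper are ours, for illustration only, and are not part of the Study’s monitoring or modeling.

```python
# Approximate dissolved-oxygen saturation for fresh water at sea-level
# pressure, rounded from standard solubility tables (mg/L); illustrative only.
DO_SATURATION_MG_L = {0: 14.6, 5: 12.8, 10: 11.3, 15: 10.1, 20: 9.1, 25: 8.3, 30: 7.6}

def do_saturation(temp_c: float) -> float:
    """Linearly interpolate saturation between tabulated temperatures."""
    temps = sorted(DO_SATURATION_MG_L)
    if temp_c <= temps[0]:
        return DO_SATURATION_MG_L[temps[0]]
    if temp_c >= temps[-1]:
        return DO_SATURATION_MG_L[temps[-1]]
    for lo, hi in zip(temps, temps[1:]):
        if lo <= temp_c <= hi:
            frac = (temp_c - lo) / (hi - lo)
            return DO_SATURATION_MG_L[lo] + frac * (DO_SATURATION_MG_L[hi] - DO_SATURATION_MG_L[lo])

# Warming bottom water from 20 to 25 degrees Celsius lowers the oxygen
# ceiling by roughly 0.8 mg/L, before any stratification effect is counted.
print(round(do_saturation(20) - do_saturation(25), 1))  # 0.8
```

Salinity further reduces oxygen solubility, which is part of why the combination of summer temperature and salinity stratification matters for the bottom layer.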
Beginning in 2005, the Study conducted an evaluation of its water quality model that identified fundamental weaknesses in how the model captured the dynamics of hypoxia and the mixing of water layers in the Sound. Subsequently, the Study has funded the development of a new model that it expects will more accurately reflect the relationship between the various sources of nitrogen and hypoxia. A Study member said that it was not possible to predict when the new model would be ready because of the nature of the work. However, the Study member added that it may be 10 to 20 years before the data show if and how nitrogen reduction efforts based on the new model reduce hypoxia. The 2015 Plan Has Four Goals to Improve Water Quality and Ecosystem Functions, but Study Members Identified Various Factors that May Hinder Progress The 2015 Plan Has Four Goals and Associated Themes to Improve Water Quality and Other Ecosystem Functions The 2015 plan has four goals to improve water quality and restore and protect ecosystem functions, among other aims. Each goal is associated with one of four broad themes: clean water and healthy watersheds, thriving habitats and abundant wildlife, sustainable and resilient communities, and sound science and inclusive management. To achieve the goals, the Study developed specific outcomes, objectives, strategies, and action plans but stated that factors such as insufficient funding and climate change may hinder restoration efforts. In addition, most Study members stated that even if the goals of the 2015 plan are met, new and emerging challenges will require restoration efforts to continue, at a minimum, to monitor the Sound. The 2015 plan has four goals, associated with four themes, to improve water quality and other ecosystem functions in the Sound while creating sustainable communities and using sound science as a basis for restoration. According to the 2015 plan, the goals and associated themes were developed by building upon the progress already made toward the 1994 plan and years of research and monitoring of the Sound. As previously mentioned, Study members said that the book they published with many scientists helped to develop the 2015 plan. The book Long Island Sound: Prospects for the Urban Sea synthesized the advances in science made over the past decades in understanding the Sound. Study members also said that an update of the plan was needed to incorporate an improved understanding of the Sound and to address new issues that might affect restoration of the Sound. The four goals and their associated themes are as follows. Clean water and healthy watersheds. The goal associated with this theme addresses improving water quality by reducing contaminant and nutrient loads from the land and waters impacting the Sound. According to the 2015 plan, the condition of the Sound depends on the quality of the water draining from the land around it and, although progress has been made, the issues affecting water quality in the 1994 plan remain. These issues include hypoxia, pathogens, and development. Eelgrass Eelgrass (Zostera marina) is a rooted underwater plant with ribbon-like strands that form beds and meadows in estuaries. These beds are a haven for crabs, scallops, numerous species of fish, and other wildlife because the beds provide them with habitat, protection from predators, nursery grounds, food, and oxygen. Additionally, eelgrass improves water clarity by filtering pollutants from runoff and by absorbing nutrients such as nitrogen and phosphorus.
It also protects shorelines from erosion by absorbing wave energy. Eelgrass health can be negatively affected by excessive nutrients, limited sunlight exposure, and high water temperatures. For these reasons, the Long Island Sound Study uses eelgrass growth as an indicator of good water quality. Thriving habitats and abundant wildlife. The goal associated with this theme addresses restoring and protecting the Sound’s ecological balance, including fish and shellfish populations and ecologically significant shorelines and habitats along the Sound, to benefit both people and the environment. According to the 2015 plan, the 1994 plan identified habitats and living resources to manage and protect, and the Study identified 12 types of coastal habitats for restoration: beaches and dunes, cliffs and bluffs, estuarine embayments, coastal and island forests, freshwater wetlands, coastal grasslands, intertidal flats, rocky intertidal zones, riverine migratory corridors, submerged aquatic vegetation such as eelgrass, shellfish reefs, and tidal wetlands. While progress has been made through acquiring thousands of acres of land, according to the 2015 plan, habitat connectivity and riverine migratory corridor reconnection can be improved. Sustainable and resilient communities. The goal associated with this theme addresses supporting communities to use, appreciate, and help protect the Sound. According to the 2015 plan, local government leadership, private sector engagement, community organizations, and individual stewardship will be needed to restore the Sound. The theme focuses efforts on communities, which was not a focus of the 1994 plan. Sound science and inclusive management. The goal associated with this theme seeks to ensure the Study is using sound science and cross-jurisdictional governance that is inclusive, adaptive, innovative, and accountable throughout its restoration efforts in the Sound. According to the 2015 plan, the Sound and its watershed cover more than 16,000 square miles in six states and include hundreds of local watersheds. Management of the Sound involves collaboration and governance among numerous partners and stakeholders who need a thorough understanding of the issues. According to the plan, such understanding comes from research, monitoring, assessment, mapping, and modeling programs. To achieve the goals associated with the plan’s four themes, the Study also developed outcomes, objectives, strategies, and implementation actions and published these in the 2015 plan and supplemental documents. The 2015 plan defines outcomes as “broad results needed to achieve the goals.” For example, as shown in table 4, an outcome associated with the “clean water and healthy watersheds” theme is “to improve research, monitoring, and modeling for water quality.” Each outcome has multiple associated objectives, which are the accomplishments needed to achieve each outcome, and each objective has multiple strategies. To carry out the strategies, the Study has developed 139 implementation actions, which are specific actions such as estimating future phosphorus loads or promoting eelgrass management. The Study also developed four supplemental documents, one for each theme, that describe the 139 implementation actions and steps to be taken in 2015 through 2019 and the expected outcomes.
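To make the nesting of these planning elements concrete, the schematic below models the hierarchy (themes and goals, outcomes, objectives, strategies, implementation actions) as it is described in the plan. The class definitions are our own, and the objective and strategy entries are invented placeholders; only the theme, goal, outcome, and action text is drawn from the examples above.

```python
# A schematic of the 2015 plan's planning hierarchy as described in the text;
# the class names and the sample objective/strategy entries are ours, chosen
# for illustration only.
from dataclasses import dataclass, field

@dataclass
class Strategy:
    description: str
    implementation_actions: list[str] = field(default_factory=list)

@dataclass
class Objective:          # accomplishment needed to achieve an outcome
    description: str
    strategies: list[Strategy] = field(default_factory=list)

@dataclass
class Outcome:            # "broad results needed to achieve the goals"
    description: str
    objectives: list[Objective] = field(default_factory=list)

@dataclass
class Theme:              # one of the plan's four themes, each with a goal
    name: str
    goal: str
    outcomes: list[Outcome] = field(default_factory=list)

clean_water = Theme(
    name="Clean water and healthy watersheds",
    goal="Improve water quality by reducing contaminant and nutrient loads",
    outcomes=[Outcome(
        description="Improve research, monitoring, and modeling for water quality",
        objectives=[Objective(
            description="Better characterize nutrient loads (hypothetical entry)",
            strategies=[Strategy(
                description="Expand watershed monitoring (hypothetical entry)",
                implementation_actions=["Estimate future phosphorus loads"],
            )],
        )],
    )],
)
```

In the plan itself, the 139 implementation actions sit at the bottom of this hierarchy, and the four supplemental documents describe them theme by theme.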
Study Members Cited Numerous Factors, Including Insufficient Funding, Climate Change, and Development and Growth That May Hinder Progress Study members we interviewed said numerous factors may hinder Long Island Sound restoration progress, including insufficient funding, climate change, insufficient scientific understanding or data-related issues, development and population growth, and insufficient public appreciation of the Sound. (See app. II for a list of all the factors Study members identified that may hinder progress.) Of the 17 Study members we interviewed about factors that may hinder progress, 14 said that insufficient funding can, for example, hinder their ability to manage restoration efforts, mitigate the effects of development and population growth, implement new projects, or effectively conduct existing projects. One Study member said that development and population growth can be overcome with mitigation activities, but that these require funding. Another Study member said that insufficient funding leads to vacant staff positions and that the Study member’s organization is strained by its small staff. This limits the Study’s ability to coordinate among the many agencies and programs working on restoration. Another Study member identified the effects of insufficient funding on a restoration project. Specifically, a town received a Study grant for a green infrastructure project near the Sound, but the town modified the project because the grant was smaller than what the project needed. The project plan included constructing a building with permeable parking surfaces and green features, such as rain gardens, to help improve water quality. According to a town official, the town wanted to include more green features, but because it received a smaller grant, the number of permeable surfaces and green features the town could build was limited. Nine of the 17 Study members we interviewed said that climate change can hinder restoration progress. Study members discussed different types of possible effects, such as effects on water temperature, weather, and sea level. For example, two Study members said that warmer waters caused by climate change could increase the Sound’s susceptibility to hypoxia by increasing the risk of potential harmful algal blooms and the length of time dissolved oxygen remains at hypoxic levels. Another Study member stated that warmer waters can cause outbreaks of the naturally occurring bacterium Vibrio parahaemolyticus, which accumulates in shellfish and affects the shellfishing industry. In addition, two Study members said that changes in weather caused by climate change could cause an increase in stormwater and therefore in the amount of pathogens washed into the Sound; another Study member said that increased storm activity could destroy marshes. According to the Study, salt marsh vegetation in tidal wetlands helps protect against erosion and typically manages to accumulate enough sediment and organic matter to keep up with naturally occurring, gradual sea level rise. However, the Study reported that tidal wetlands in the Sound may not be able to keep up with the rise in sea level projected to result from climate change. One Study member said that marshes are already being affected by increased coastal flooding that may be caused by sea level rise.
As we reported in November 2013, changes in the climate—including warmer temperatures, changes in precipitation patterns, rising sea levels, and more frequent and intense storms—affect water resources in a number of ways, such as erosion and inundation in coastal areas. In particular, we reported that a 2011 federal agency review of the potential impacts of climate change on water resources identified four interrelated areas of concern for water resource managers. One of the four is protecting coastal and ocean resources as rising sea levels and changes in storm frequency, intensity, and duration impact coastal infrastructure. Also, in September 2014, we reported that ocean acidification—the increased absorption of carbon dioxide emitted by humans into the oceans—is resulting in chemical changes in the oceans that may pose risks for some marine species and ecosystems, as well as for the human communities that rely upon them for food and commerce. Tidal wetlands and salt marshes Wetlands are areas that are inundated or saturated by surface or groundwater and that have a prevalence of vegetation adapted for life in saturated soil conditions. Tidal wetlands are specifically linked to estuaries—locations where sea water mixes with fresh water to form an environment of varying salinity. Tidal wetlands are among the most productive ecosystems in the world, providing food, shelter, and breeding or nursery grounds for many species of wildlife. Salt marshes are a type of tidal wetland that is flooded and drained by salt water brought in by the tides. Salt marshes help protect the land from flooding and erosion in stormy weather and filter pollutants contained in stormwater runoff. Tidal wetlands are threatened by changes in the climate that cause sea levels to rise more rapidly, which can cause the wetlands to convert to open water. In addition, one expert we interviewed said that gains in restoring marshes and wetlands already made by the Study may be lost due to rising sea levels. To address this problem, another expert we interviewed said that techniques are being tested to help wetlands and marshes keep up with sea level rise, such as spraying material dredged from the Sound (sand and silt) across these areas to raise them. One expert also said that increased water temperatures around the Sound may make the water uninhabitable for shellfish. EPA officials said that while increased water temperatures will affect the relative abundance and distribution of shellfish in the Sound, it cannot be concluded that the Sound will become uninhabitable for shellfish because of increased water temperatures. In addition, as we reported in October 2016, unusually high water temperatures may enhance the growth of harmful algal blooms that produce toxins causing neurological and other damage in fish populations. Warming waters will also increase the Sound’s susceptibility to hypoxia because the solubility of oxygen decreases as water temperature increases. Five of the 17 Study members we interviewed said that insufficient scientific understanding and data-related issues would hinder progress toward restoration of the Sound. For example, one Study member highlighted the need to better understand the relationship between nutrients and hypoxia. That Study member also said that incomplete data on nutrients, particularly from nonpoint sources, may hinder progress.
Another Study member said that obtaining data is difficult, in particular for areas such as embayments and tributaries that are still affected by nonpoint source pollution. Three of the 17 Study members we interviewed said that development and population growth will also hinder the progress of restoration. In addition, 7 of the 17 Study members said that the Sound cannot be restored to past conditions, and a key reason is that development and increased human population have led to changes in the Sound that hinder full restoration. For example, one Study member said that increased population and development can negatively affect water quality because they result in a greater amount of impervious cover, such as highways and roads, which in turn increases the nutrient and sediment pollution in runoff. Microbeads Microbeads are pieces of manufactured polyethylene plastic 5 millimeters or less in size that are added as exfoliants to health and beauty products, such as some cleansers and toothpastes. These tiny particles may pass through some water filtration systems and end up in the oceans and the Great Lakes, posing a potential threat to aquatic life. For example, microbeads can look like food to fish and other marine organisms. Once ingested, microbeads can obstruct an animal’s digestive system. In addition, microbeads can absorb contaminants that can be hazardous to animals that eat the microbeads and, in turn, can harm the animals and people that consume them. Three of the 17 Study members we interviewed said that insufficient public appreciation of the Sound would hinder progress toward restoration. In this context, two Study members highlighted that much of the land along the Sound is privately owned, which makes it difficult for some people to access the Sound or to appreciate it. Nearly all of the Study members we interviewed said that even if the goals associated with the four themes of the 2015 plan are achieved, restoration efforts will need to continue into the future because the Sound will continue to face new challenges and threats, and the Study will need to continue monitoring the Sound to understand them. For example, microbeads are an emerging issue that was not addressed in the 2015 plan. In 2015, after the Study issued the 2015 plan, a Southern Connecticut State University research team reported that it had found microbeads in New Haven Harbor, Connecticut. Microbeads are small pieces of plastic found in common household products that can make their way into waterbodies and threaten aquatic life. In December 2015, the federal government enacted the Microbead-Free Waters Act of 2015, which banned the manufacturing, distribution, and offer for sale into interstate commerce of rinse-off cosmetics that contain intentionally-added plastic microbeads. In addition, in June 2015, Connecticut enacted legislation that phased in bans on the manufacturing, import, sale, or offer for sale of personal care products and over-the-counter drugs that contain microbeads in that state. New York proposed legislation to address the issue of microbeads in early 2015 but did not enact it. Study Members Have Identified Long-Term Targets and Indicators to Measure Progress, but Have Not Yet Fully Incorporated Leading Practices for Performance Reporting Study members said that they plan to use 20 long-term targets with associated indicators to measure progress toward the goals associated with the four themes of the 2015 plan.
While 18 of the long-term targets currently have numerical goals, they do not yet have associated intermediate targets that can be used to monitor progress, but EPA officials said that the Study is working to establish them. In March 2018, the Study issued web pages for each of the 20 targets to report on such progress, but, as of June 2018, these pages do not yet fully incorporate leading practices of performance reporting. Twenty Long-Term Targets and Associated Indicators Will Be Used to Measure Progress and Intermediate Targets Are Being Developed Study members said that they have identified and plan to use 20 long-term targets with associated indicators to measure progress toward the goals of the 2015 plan (see app. III for a complete list of the 20 long-term targets and their associated indicators). The 20 targets are grouped by the four themes in the 2015 plan. All of the targets include indicators that describe how the targets will be achieved, and all but two of those indicators currently have numerical goals, with a value to be achieved by 2035. For example, the indicator for the target “approved shellfish areas” in the “clean waters and healthy watersheds” theme has a numerical goal to upgrade, by 2035, 5 percent of the shellfish acreage in Connecticut and New York that was restricted or closed to shellfishing in 2014. According to the 2015 plan, to achieve a 5 percent increase, the states would need to upgrade 17,400 of the 349,000 acres of closed or conditionally closed shellfish areas. Of the 20 targets in the 2015 plan, the 2 that do not yet have indicators with numerical goals are “habitat connectivity” and “public engagement and knowledge.” Two of the Study members responsible for updating the indicators said that the Study is developing numerical goals for each target. According to these Study members, the main reason that these targets do not yet have numerical goals is that presently there are insufficient data that can be analyzed and interpreted to establish them. Study members are in the process of collecting data that will be used to finalize the numerical goals. These Study members said that it may take a year or more to collect the necessary data. Generally, the 19 experts we interviewed agreed that the indicators used by the Study were valid, accurate, and reliable ways to measure progress for the 20 long-term targets, but some experts also suggested improvements. For 12 of the 20 indicators, all of the experts we interviewed agreed that they were valid, accurate, and reliable. For example, one expert pointed out that the indicator for the riparian buffer extent target is the only practical way to measure progress. Another expert said that the indicator for the coastal habitat extent target is a good choice because it can show progress that the public can easily understand. A few experts suggested improvements to make some of the indicators more useful for measuring progress. For example, one expert said that the indicator for the target “extent of hypoxia” would be better if the focus were on the Western Sound, where hypoxia is a greater problem. The expert also questioned why the Study is concerned with hypoxia across the entire Sound when some areas are only slightly hypoxic and not big enough to have a great impact on the overall level of hypoxia in the Sound. EPA officials responded that the target “extent of hypoxia” is focused on the Western Sound.
They added, however, that the target applies across the entire Sound because changes in water quality could occur anywhere in the Sound. Not all of the experts we interviewed agreed on the other eight indicators. For example, for the tidal wetlands indicator—the acreage of tidal wetlands restored to help restore tidal flow—eight of nine experts we interviewed said that the indicator was valid, accurate, and reliable, but one expert said that it was too simplistic. This expert said that a better indicator would focus on the amount and health of marsh grasses that are planted to restore the tidal wetlands. This is because marsh grass health is affected by nitrogen levels and sea level rise, which also impact tidal wetlands. For the approved shellfish area indicator—the acreage of approved shellfishing areas—six of eight experts we interviewed said that the indicator was valid, accurate, and reliable, but two experts disagreed. One of these experts said that the target is part of the theme to improve water quality and that shellfishing areas can be approved for administrative reasons that are not related to water quality improvement. The other expert added that certain shellfish areas in New York are closed because budget constraints limit the number of reviews that can be conducted to reopen shellfishing areas. The use of numerical goals to monitor progress toward the 20 long-term targets is consistent with leading practices for performance management that we have identified in our previous work. We have found that a key attribute of successful performance measures is that they have quantifiable numerical goals or other measurable values that permit expected performance to be compared with actual results. Additionally, we have reported that intermediate goals and measures can be used to show progress or contribution to intended results. During the course of our work, we shared with Study members our concern that only 7 of the 20 long-term targets had intermediate targets. In response, in web pages for the 20 targets available in June 2018, the Study had established intermediate targets for an additional 5 of the 13 long-term targets that did not have intermediate targets. For these 5 targets, the Study identified how much progress would need to be made each year to achieve each target’s numerical goal by 2035. For example, for the approved shellfish areas target, the intermediate target is “to approve more than 850 acres of currently closed shellfish areas per year to reach the goal of approving 17,400 acres by 2035.” For the remaining 8 targets without intermediate targets, EPA officials said that the Study is working to establish intermediate targets using the indicator data collected by federal and state agencies. By incorporating intermediate targets into its web pages to report on progress, the Study can better ensure its members, the public, and Congress have important information on whether the Study is making progress toward achieving its long-term targets or whether additional actions need to be taken. Progress Reports Do Not Yet Fully Incorporate Leading Practices As previously mentioned, the Long Island Sound Improvement Act of 1990 required the Study to report every 2 years on progress made in implementing the comprehensive conservation and management plan. The Study reported through 2013, using the Protection and Progress and Sound Health reports, but did not report again until it issued web pages for the 20 long-term targets in March 2018.
According to an EPA official, the Study did not report on the evaluation of progress during that 5-year period because EPA was working with Study members to adapt the Study’s reports to the 2015 plan indicators and to update the format of its web pages to report on progress. An EPA official said that the Study plans to use the web pages the agency issued in March 2018 to report progress on each of the 20 long-term targets. Our previous work on performance management states that reporting on performance should involve leading practices such as (1) evaluating performance compared to a plan, (2) reviewing performance for a preceding period of time (for example, 5 years), and (3) evaluating actions for unmet goals. We have found the following benefits of these leading practices: Evaluating performance compared to a plan allows agencies to describe the performance indicators established in the plan and the performance achieved to meet them. In addition, evaluating performance could help agencies understand the relationship between their activities and the results they hope to achieve. Reviewing performance for a preceding period of time, including baseline and trend data, can help agencies ensure that individuals using the report review the information in context and identify whether performance targets are realistic given past performance. In addition, the data can assist individuals who use the report to draw more informed conclusions than they would by comparing only a single year’s performance against a target. Evaluating actions for unmet goals explains why the goal was not met, provides plans and schedules to achieve the goal, and, if the goal is impractical, explains why it is impractical. Explaining the reasons for any unmet goals allows agencies to recommend actions that can be taken to achieve the goals, or needed changes to the goals. In our review of the Study’s web pages in June 2018, we found that the Study has not yet fully incorporated the three leading practices for reporting on performance. The Study used the three practices to varying extents, as described below. Evaluating performance compared to the 2015 plan for 19 targets. We believe that the Study fully incorporated this practice by creating a status bar on the web pages for 19 of the 20 ecosystem targets to indicate whether progress toward a target’s numerical goal was behind schedule, on track, or ahead of schedule, or whether the numerical goal was met. For example, the Study reported that progress for the target “approved shellfish areas” was behind schedule. Reviewing performance for a preceding period of time for 11 targets. We believe that the Study partially incorporated this practice by reporting progress data for 5 or more preceding years for 11 targets but not for the remaining 9. For example, on the web page for the tidal wetlands extent target, the Study reported progress data for each year from 1998 to 2017. Evaluating actions for unmet goals for four targets. We believe that the Study partially incorporated this practice by explaining why the goal was not met for 4 targets, but it did not explain why the goal was not met for the other 15 targets. For example, for the target “public access to beaches and waterways,” the Study reported that increasing the number of public access points may be difficult because there are many privately owned properties along the Long Island Sound coast. However, the Study provided plans and schedules to achieve unmet goals for only two targets.
For example, the Study reported that to achieve the numerical goal for protected open space, an average of 200 acres of Connecticut land and 150 acres of New York land need to be protected each year. An EPA official said that the web pages may undergo further modifications and that the Study plans to update information about the targets annually or according to how frequently the underlying data are collected. By working with the Study as it finalizes its reporting format to incorporate the leading practices of performance reporting, EPA could help ensure that the Study provides the public and Congress with the information they need to determine whether the Study is making progress toward achieving the long-term targets associated with the goals of the 2015 plan, or whether the Study should take additional action to meet the targets. Study Members Expended at Least $466 Million on Restoration Activities, but the Study’s Estimate of At Least $18.9 Billion for Future Restoration Is Not Comprehensive Of the seven Study members who provided expenditure data to us, four expended at least $466 million on restoration activities in the Sound from fiscal years 2012 through 2016, although the total expenditures by all Study members over this period are unknown. In the 2015 plan, the Study estimated that future activities will cost at least $18.9 billion over 20 years, but these estimates may not reflect all future restoration costs because they address only some of the plan’s long-term targets. Four Study Members Expended At Least $466 Million to Restore Long Island Sound, and Three Others Funded Activities that Contributed to Restoration Of the seven Study members who provided expenditure data to us, four said that they provide funding for restoration activities specifically for the Sound. Officials from EPA, the states of Connecticut and New York, and the U.S. Fish and Wildlife Service said that they expended at least $466 million on activities to restore Long Island Sound from fiscal years 2012 through 2016. Table 5 shows their reported expenditures on restoration activities in Long Island Sound from fiscal years 2012 through 2016. The states of Connecticut and New York expended the majority of the $466 million to restore Long Island Sound from fiscal years 2012 through 2016. According to a Connecticut Department of Energy and Environmental Protection official, Connecticut expended about $106 million on restoration activities from fiscal years 2012 through 2016. These activities included more than $10 million for habitat restoration, more than $14 million for land acquisition, and more than $81 million for nitrogen reduction. According to the official, Connecticut expended more than $21 million in fiscal year 2012 to upgrade equipment at three wastewater treatment plants to reduce nitrogen discharged from the plants into the Sound. New York State Department of Environmental Conservation officials said that the agency could not provide us with the total amount the agency expended on Sound restoration activities in fiscal years 2012 through 2016 because the agency does not track expenditures specific to Long Island Sound restoration. However, they provided examples of activities for which the agency expended about $337 million; all three of the example activities were upgrades to wastewater treatment plants.
From fiscal years 2012 through 2016, EPA reported expending about $22 million to operate the Long Island Sound Study, including about $19 million from the agency’s Long Island Sound program and about $3 million from the National Estuary Program. On average, EPA reported expending about $4.5 million per year on Study operations, such as public outreach and education, monitoring, modeling, research, and activities to achieve the 1994 and 2015 plans. Of the $4.5 million per year, the Study provided an average of $1.3 million per year to the Long Island Sound Futures Fund. The Long Island Sound Futures Fund is a grant program that, according to the Study, funds activities in local communities that aim to protect and restore the Sound. For example, the Long Island Sound Futures Fund awarded $150,000 to the New York City Department of Parks and Recreation in 2016 to construct a living shoreline in Douglaston, New York. The purpose of this project was to stop the continued loss of urban salt marsh by reestablishing up to one acre of salt marsh and enhancing nearby forest, upland, and coastal grassland habitat. A U.S. Fish and Wildlife Service official said that the agency expended about $1 million on 39 activities from fiscal years 2012 through 2016. According to Long Island Sound Futures Fund documents, funds provided to the Long Island Sound Futures Fund are used to pay for restoration projects. For example, the U.S. Fish and Wildlife Service provided $55,392 in fiscal year 2016 to a project to restore a 12-acre coastal forest in the Village of Mamaroneck, New York. The focus of the project is to reverse forest fragmentation and degradation by removing non-native plants and planting native trees, shrubs, and herbs. In addition to the funds expended by the four Study members above, officials from three other Study members—the Natural Resources Conservation Service, the U.S. Geological Survey, and the U.S. Army Corps of Engineers—also said that they expended funds for restoration activities in the region around the Sound but do not isolate expenditures made specifically for the Sound. For example, officials from these Study members said that the agencies expended funds for activities in the region that contributed to restoration but were not intended solely to restore the Sound. They each provided examples of restoration expenditures or costs for fiscal years 2012 through 2016: the Natural Resources Conservation Service expended $54 million through programs such as the Environmental Quality Incentives Program; the U.S. Geological Survey expended about $3.8 million on data monitoring and other activities; and the U.S. Army Corps of Engineers expended $27 million for 13 projects. The 2015 Plan Estimated that Future Activities May Cost At Least $18.9 Billion, but the Estimates Address Only Some of the Plan’s Long-Term Targets Study members estimated in the 2015 plan that future restoration activities would cost at least $18.9 billion over 20 years. Nearly all of that amount was for activities addressing the goal to achieve clean waters and healthy watersheds. As shown in table 6, Study members estimated that activities under that goal could cost at least $18.1 billion from 2015 through 2035. The cost estimate included $5.5 billion specifically for work on wastewater treatment plants in New York, Connecticut, and the upper watershed states, which may include upgrading the plants with available technologies for nutrient removal.
Study members also estimated that activities to reduce nitrogen by addressing CSOs and urban stormwater in Connecticut may cost at least $4.4 billion and $700 million, respectively. Finally, the cost estimate included $12.4 billion to complete ongoing work in New York and Connecticut to reduce overflows from combined sewer systems as well as from sewer systems that are not combined with stormwater systems. The remainder of the $18.9 billion was for activities related to goals to achieve thriving habitats and other restoration themes. As shown in table 7, Study members estimated that these other activities could cost $778 million from 2015 through 2035. According to the 2015 plan, activities to address the goals to achieve thriving habitats and abundant wildlife, such as by protecting open space, may cost $650 million—$500 million in New York and $150 million in Connecticut. These activities could include acquiring properties that the Study has identified as high priority for conservation to minimize coastal development in the future. Study members also estimated in the 2015 plan that Connecticut and New York would spend about $4 million each on education activities. These activities could include volunteer and outreach efforts for the general public at the 33 Long Island Sound Stewardship Areas, such as efforts to explain how human disturbance can affect wildlife. Economic guidance generally states that investment decisions should be informed by a consideration of both the benefits and the costs of relevant alternatives. For example, the Office of Management and Budget (OMB) has issued guidance on estimating costs and benefits to help federal agencies efficiently allocate resources through well-informed decision making about activities. This guidance includes OMB Circular A-94, which directs agencies to follow certain economic guidelines for estimating costs and conducting cost-effectiveness analyses of federal programs or policies to promote efficient resource allocation through well-informed decision making. Although the circular applies to federal agencies and programs, we have previously found that it provides leading practices for economic analysis of investment decisions. Under OMB Circular A-94, a cost estimate is to include a comprehensive assessment of the costs. By developing its $18.9 billion estimate, the Long Island Sound Study has taken steps to assess the potential costs of future restoration activities. However, the 2015 plan includes 20-year cost estimates for activities related to only 10 of the 20 long-term targets that the Study plans to achieve. These cost estimates focus primarily on activities to achieve clean waters and healthy watersheds and thriving habitats and abundant wildlife. They include restoration activities that address wastewater treatment plants to help achieve the long-term target “nitrogen loading” and restoration activities to conserve open space to achieve the long-term target “protected open space.” However, the total does not include the cost of activities to achieve other long-term targets such as river miles restored for fish passage, tidal wetlands extent, marine debris, and public access to beaches and waterways. A Study member said that the Study completed 20-year estimates for proposed restoration activities where feasible and included them in the 2015 plan.
The Study member also said that EPA worked with Study members to develop cost estimates using costs for past restoration activities. However, the Study member said that the exact course of action, and therefore the costs, for many of the long-term targets were not yet defined and were still uncertain. For example, the Study only recently invested funds to evaluate the nitrogen reductions needed to attain water quality standards; that evaluation can be used to determine the scope of work needed and the associated costs, which would inform a cost estimate for achieving the nitrogen loading target. OMB Circular A-94 recognizes that estimates of costs are typically uncertain because of imprecision in underlying data and assumptions and states that this uncertainty can and should be part of the analysis and estimate. According to the circular, because such uncertainty is basic to many analyses, its effects should be analyzed and reported. One way to handle such uncertainty in a cost estimate is to perform a sensitivity analysis, which will result in a range of possible cost estimates. By working with Study members to develop cost estimates that include analyses of uncertainties for each of the targets in the plan, EPA and the Study could better estimate the comprehensive costs for Long Island Sound restoration and could better allocate resources and make decisions about their financial investments in the Sound. In addition to the 20-year cost estimates, the 2015 plan contained four supplemental documents that described the 139 implementation actions for carrying out the strategies for the plan’s four themes in greater detail as well as estimated costs for carrying out those implementation actions for fiscal years 2015 through 2019. EPA’s funding guidance for comprehensive conservation and management plans states that agencies should estimate the range of potential costs of all actions to implement the plan. For the four 5-year supplemental documents that it developed, EPA worked with the Study to create four cost ranges: (1) $0 to $25,000; (2) $25,000 to $150,000; (3) $150,000 to $1 million; and (4) greater than $1 million. The Study then assigned these ranges to the implementation actions in the four 5-year implementation plans for each theme. However, the Study assigned only 75 percent of the 139 implementation actions in the 2015 plan to these four ranges. Instead of a cost range, the Study identified the funding needs for more than a third of the remaining 25 percent of the actions as staff time or not applicable. A Study member said that the Study did not assign a range of costs for staff time and identified some action costs as not applicable because, for example, the work required would be intermittent or the associated costs were accounted for in other implementation actions. According to Circular A-94, uncertain costs, such as the cost of staff time, should be included in a cost estimate. In addition, implementation actions for which costs are accounted for elsewhere could be assigned to the Study’s first cost range, $0 to $25,000. According to the Study member, estimates of potential cost ranges for the implementation actions could be included in future supplements to the 2015 plan. By working with the Study to estimate the range of potential costs for all the implementation actions and including the estimates in future supplements to the 2015 plan, EPA would have better assurance that Study members have complete information to guide resource allocation decisions about activities to achieve the goals of the 2015 plan.
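To show what the sensitivity analysis described in Circular A-94 involves in practice, the sketch below uses Monte Carlo sampling to turn uncertain cost inputs into a range of total-cost estimates rather than a single point. The cost categories loosely echo the plan’s, but every number and distribution here is a hypothetical placeholder, not the Study’s data.

```python
# A minimal sketch of a sensitivity analysis for an uncertain cost estimate,
# using Monte Carlo sampling. All figures and distributions are hypothetical
# placeholders for illustration; they are not the Study's estimates.
import random

def sample_total_cost() -> float:
    """Draw one 20-year total-cost scenario, in billions of dollars."""
    treatment_upgrades = random.triangular(4.5, 7.0, 5.5)  # wastewater plants
    sewer_overflow_work = random.uniform(10.0, 15.0)       # CSO and sewer work
    habitat_and_other = random.triangular(0.5, 1.2, 0.78)  # other themes
    return treatment_upgrades + sewer_overflow_work + habitat_and_other

random.seed(1)
draws = sorted(sample_total_cost() for _ in range(10_000))
low, median, high = draws[500], draws[5_000], draws[9_500]
print(f"90 percent range: ${low:.1f}B to ${high:.1f}B (median ${median:.1f}B)")
```

Reporting the resulting range, rather than a single figure, is the kind of uncertainty reporting the circular calls for.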
Conclusions By identifying six priority problems and associated goals in the 1994 plan and taking actions to achieve these goals, the Study, with EPA as director, has provided a long-standing focus on improving the water quality and other ecosystem functions in the Sound and its surrounding watershed. In its updated 2015 plan, the Study identifies further actions to be taken and has identified numerical goals for almost all of the 20 long-term targets in the 2015 plan, which, unlike the 1994 plan, will enable the Study to do a comprehensive assessment of progress toward the numerical goals of the 2015 plan. As of June 2018, the Study has not yet fully incorporated leading practices for performance reporting, such as evaluating actions for unmet goals, in the web pages the Study plans to use to report progress for the 20 long-term targets. By working with the Study as it finalizes its reporting format, EPA can ensure that the leading practices of performance reporting are fully incorporated, which in turn will help ensure that the Study is providing information to the public and Congress about its restoration progress. In addition, the 2015 plan includes 20-year cost estimates for some, but not all, of the activities related to the 20 long-term targets that the Study plans to achieve. By working with Study members to develop cost estimates that include analyses of uncertainties for each of the targets in the plan, EPA and the Study could better estimate the comprehensive costs for Long Island Sound restoration and ensure better resource allocation decisions for the Sound. In addition, the Study has not estimated the range of potential costs of all 139 implementation actions in the 2015 plan. By working with the Study to estimate the range of potential costs for all the implementation actions and including the estimates in future supplements to the 2015 plan, EPA would have reasonable assurance that Study members have considered complete cost information when making resource allocation decisions about activities to achieve the goals of the 2015 plan. Recommendations for Executive Action We are making the following three recommendations to the Environmental Protection Agency in its capacity as the Director of the Long Island Sound Study, in coordination with Study members: The Director, working with the Study, should ensure that as the Study finalizes its reporting format, it fully incorporates leading practices of performance reporting. (Recommendation 1) The Director, working with the Study, should develop cost estimates that include analyses of uncertainties for each of the targets in the 2015 plan. (Recommendation 2) The Director, working with the Study, should estimate the range of potential costs for all implementation actions and include the estimates in future supplements to the 2015 plan. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to EPA and the departments of Agriculture, Commerce, Defense, and the Interior for their review and comment. We also provided a draft of the report to the Connecticut Department of Energy and Environmental Protection and the New York State Department of Environmental Conservation for their review and comment. EPA provided written comments, which are reproduced in appendix V, and stated that it agreed with the conclusions and recommendations in our report. EPA also provided technical comments, which we incorporated as appropriate.
The departments of Agriculture, Defense, and the Interior, and the Connecticut Department of Energy and Environmental Protection responded by email that they did not have comments on the draft report. The Department of Commerce and the New York State Department of Environmental Conservation provided technical comments, which we incorporated as appropriate. In a letter signed by the Regional Administrators of EPA Region 1 and Region 2, EPA stated that the report is timely because the Study is working to transition from the 1994 plan to evaluating and reporting on the 2015 plan, and it highlighted steps the agency will take to meet our recommendations. EPA stated that, working with the Study, the agency: plans to further evaluate, develop, and apply leading practices of performance reporting as it finalizes its reporting format, estimating that enhancements to the reporting format will be available on the Study’s website by the end of 2019; will evaluate the range of costs needed to attain each of the targets and include cost estimates with uncertainty bounds in future updates of the plan, expecting that the enhanced cost information will be available on the Study’s website by the end of 2019; and will ensure that the planned update to implementation actions includes a range of costs for all implementation actions, estimating that these actions will be completed in 2020. In its written comments, EPA suggested two specific revisions to our report. First, EPA stated that the Study has established more intermediate goals than we included in our report. In our report, we said that as of March 2018, the Study had established intermediate targets for 7 of the 20 long-term ecosystem targets. According to EPA’s comments, applying the methodology that we used in the report to the 20 ecosystem targets results in 11 targets having intermediate goals. EPA also stated that the agency will work with the Study to better communicate these existing intermediate goals on the web pages reporting ecosystem progress. In response to this information, we analyzed the Study’s web pages that were available in June 2018 and agreed that five additional ecosystem targets had intermediate goals as of that date. We revised the report to include this information. Second, EPA stated that the report’s statement that the 2015 plan estimates that future implementation activities may cost nearly $21.9 billion is a misleading interpretation of the 2015 plan’s implementation costs because the plan does not present that figure. EPA stated that table 6 in our report appeared to double count Connecticut’s combined sewer overflow costs in the 2015 plan by including both the $4.4 billion taken from text and $3 billion taken from a table in the plan. Although we presented these data to EPA during our review, the error was not caught until the draft report was reviewed. EPA stated that the 2015 plan is admittedly unclear in attributing costs to specific categories and that the agency will work with the Study to clarify the estimated implementation costs in future updates. In response to EPA’s comments, we reviewed the 2015 plan and removed the $3 billion cost estimate for Connecticut’s combined sewer overflow from table 6 and revised the total cost estimate for future restoration activities to $18.9 billion. We are sending copies of this report to the appropriate congressional committees, the Administrator of EPA, the Secretary of Agriculture, the Secretary of Commerce, the Secretary of Defense, the Secretary of the Interior, and other interested parties.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology This appendix provides information on the scope of work and the methodology used to examine (1) what is known about the progress made toward achieving the 1994 Long Island Sound Comprehensive Conservation and Management Plan (1994 plan); (2) the goals of the 2015 Long Island Sound Comprehensive Conservation and Management Plan (2015 plan) and factors that may hinder progress according to Long Island Sound Study (the Study) members; (3) how Study members plan to measure and report on progress toward the goals of the 2015 plan; and (4) what Study members expended on restoration activities in fiscal years 2012 through 2016 and cost estimates for future activities. To examine what is known about the progress toward achieving the 1994 plan, we analyzed the plan to gain a better understanding of it and identify any goals associated with the six priority problems. We also analyzed data from the Study's website, the Study's most recent progress reports—Protection and Progress and Sound Health—and the book Long Island Sound: Prospects for the Urban Sea, a summary of available science and environmental data for the Long Island Sound (the Sound). We analyzed the data that were on the Study's website in November 2017 because that period coincided with the time frames of our review. These data, reports, and the book included examples of progress but did not assess performance toward the goals associated with the priority problems in the 1994 plan. Therefore, we interviewed Study members to obtain their views about progress toward the 1994 plan and the data supporting those views. For our interviews with Study members, we contacted all 16 members of the Study and representatives of the 5 Study work groups that were active at the time of this review. Of the 16 Study members, 14 agreed to participate in this review: (1) Department of Agriculture's Natural Resources Conservation Service; (2) Department of Commerce's National Marine Fisheries Service; (3) Department of Defense's U.S. Army Corps of Engineers; the Department of the Interior's (4) U.S. Fish and Wildlife Service and (5) U.S. Geological Survey; (6) Environmental Protection Agency (EPA); (7) Connecticut Sea Grant; (8) Connecticut Department of Energy and Environmental Protection; (9) New York State Department of Environmental Conservation; (10) New York Department of State; (11) New York City Department of Environmental Protection; (12) the New England Interstate Water Pollution Control Commission; (13) the Study's Citizens Advisory Committee; and (14) the Study's Science and Technical Advisory Committee. The 5 Study work groups are (1) Climate Change and Sentinel Monitoring Work Group, (2) Habitat Restoration and Stewardship Work Group, (3) Public Involvement and Education Work Group, (4) Water Quality Monitoring Work Group, and (5) Watersheds and Embayment Work Group.
Representatives from all 5 work groups agreed to participate in this review. We asked the following question for each priority problem: "Since 1994, how much progress has been made addressing the priority problem in Long Island Sound: no progress, little progress, moderate progress, or goal has been met?" For purposes of reporting responses to this question, we refer to Study members and work group representatives collectively as Study members. The New York State Departments of Environmental Conservation and State provided their responses together, and therefore we counted the two agencies as one Study member. The New England Interstate Water Pollution Control Commission did not provide a response to this question. As a result, 17 Study members provided responses. As part of the interviews, we also asked Study members, "What evidence are you basing your response on?" We did not independently assess the reliability of the data they cited for the purpose of evaluating whether the data showed progress toward addressing the priority problems. Instead, we noted the limitations the Study associated with the data to better interpret Study members' views. For some priority problems, Study members said that they were unable to provide a response because they did not have sufficient knowledge or data about progress toward the associated goals. As a result, the total number of Study members who answered these questions varied by priority problem and, for each priority problem, we identified the total who provided a response. In addition, we visited two Long Island Sound restoration projects to observe restoration activities and learn how these activities may contribute to progress toward the goals of the 1994 plan. To examine the goals of the 2015 plan and factors that may hinder progress according to Study members, we analyzed the 2015 plan to obtain information about the goals associated with the plan's four themes. In the interviews with the 17 Study members described above, we asked them, "What factors, if any, may hinder achievement of the 2015 plan's goals?" More than one Study member representative was present in many of the interviews, and each representative could identify as many factors as they thought necessary. As a result, the number of times a factor was identified—54—was greater than the number of Study members. We narrowed the number of responses to 11 categories by grouping together factors that were the same or were similar. In cases where more than one representative of the same Study member identified the same factor, we counted that factor only once for that Study member when generating the statements we used in the report. See appendix II for a complete list of all the factors that were identified, the number of Study members who identified each factor, and how we grouped those factors into the 11 categories. To examine how Study members plan to measure and report on progress toward achieving the 2015 plan, we analyzed sections of the plan that contained goals associated with the four themes, as well as relevant web pages that the Study issued in March 2018, which we analyzed again in June 2018. We also conducted interviews with subject matter experts to obtain their views on the sections of the 2015 plan that contained the themes and goals, and with Study members to learn how they planned to report on progress toward the 2015 plan.
As a result of our analysis of the 2015 plan and interviews with Study members, we identified the 20 long-term targets and associated indicators that Study members plan to use to measure progress toward the 2015 plan, and determined that the Study plans to report on progress using the web pages. For our interviews with subject matter experts, we identified individuals with expertise on the 20 long-term targets and their associated indicators. We identified 73 experts by asking Study members to recommend experts and identifying the contributors to Long Island Sound: Prospects for the Urban Sea. We removed from this list those individuals whom we had already interviewed, those who represented a Study member, those who were involved with the development of the 2015 plan, and those whose contact information we were unable to obtain from the Study member or an Internet search. We invited by email the remaining 47 experts to participate in interviews to obtain their views about the 20 long-term targets and their associated indicators. We also provided the experts with a list of the 20 targets and indicators and asked them to review the targets and to "select those that you would be comfortable speaking about based on your knowledge and expertise." Of the 34 experts who responded, we interviewed 19 about the targets they had expertise in and could discuss. The remaining 15 experts chose not to participate or said that they were ineligible because they were either involved with the development of the 2015 plan or affiliated with a Study member. We then interviewed the 19 experts about each of the targets and associated indicators they had selected. The experts we interviewed included members of academia, as well as one state official and one county official. Not all of the 19 experts were able to address each of the 20 targets and associated indicators. As a result, the total number of expert responses varied for each target and associated indicator, and we identified the total number of experts who responded to questions about each target and associated indicator. Because we used a nonprobability sample, the information obtained from these interviews is not generalizable to other individuals with expertise on the 20 long-term targets and their associated indicators but provides illustrative information. For our analysis of the web pages the Study published in March 2018, we used GAO's prior work on performance management reporting, which identified leading practices that have the potential for enhancing the general usefulness of performance reports as vehicles for providing decision makers and the public with information to assess progress. We then analyzed the web pages to determine the extent to which they incorporated these leading practices. To examine what Study members expended on restoration activities in fiscal years 2012 through 2016 and cost estimates for future activities, we took the following steps: we analyzed EPA's Justification of Appropriation Estimates for the Committee on Appropriations for fiscal years 2014 through 2018 to obtain the relevant EPA expenditure data; we obtained and analyzed expenditure data from other Study members; and we analyzed the cost estimate information in the 2015 plan. We chose this time period because it was the most recent period for which expenditure data were available during the time frames for our review.
Of the 12 Study members described above, 7 provided at least some expenditure data, 4 said that they do not fund restoration activities, and 1 did not reply to our request for expenditure data. We were unable to compare expenditure data across Study members because three Study members said that they spend funds for restoration activities in the region around Long Island Sound but do not isolate expenditures made specifically for it. We assessed the reliability of these data through interviews with Study members who were familiar with them. We found these data to be sufficiently reliable for the purpose of this reporting objective, with the limitation that they represent the minimum amount of Study member expenditures on restoration activities in fiscal years 2012 through 2016. Further, we attended two Study meetings (on April 12, 2017, by phone, and May 11, 2017, in person) to obtain information about how Study members make expenditure decisions for restoration activities. For our analysis of cost estimate information in the 2015 plan, we consulted the Office of Management and Budget Circular A-94, which provides general guidance for estimating costs, and analyzed EPA's funding guidance for comprehensive conservation and management plans. We then analyzed the cost estimates in the 2015 plan to determine the extent to which they followed the Office of Management and Budget and EPA guidance. In our interviews with Study members and subject matter experts described above, we determined that Study members had not developed other cost estimates for restoring Long Island Sound, and experts were unaware of other such estimates. We also interviewed relevant officials from EPA, the Connecticut Department of Energy and Environmental Protection, and the New York State Department of Environmental Conservation to obtain information about how the cost estimates in the 2015 plan were created. We conducted this performance audit from January 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Factors Identified by Members of the Long Island Sound Study In our review of the Long Island Sound restoration efforts, we asked Long Island Sound Study (the Study) members to identify factors that may hinder Long Island Sound restoration progress. Specifically, we asked the following question of all 17 Study members we interviewed: "What factors, if any, may hinder achievement of the goals of the 2015 Long Island Sound Comprehensive Conservation and Management Plan?" More than one Study member representative was present in many of the interviews, and each representative could identify one or more factors. As a result, the number of times factors were identified—54—was greater than the number of Study members. Table 8 shows the 11 categories of factors, the number of times factors in those categories were identified, and the number of Study members who identified each factor. We narrowed the number of responses to 11 factor categories by grouping together factors that were the same or were similar, counting a category at most once per Study member (see the sketch below).
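The per-member counting rule described in this appendix is essentially a small deduplication-and-tally procedure. The following minimal sketch illustrates it with hypothetical member names, categories, and pairings, not the actual interview data:

```python
from collections import defaultdict

# (study_member, factor_category) pairs; all names and pairings here are
# hypothetical and for illustration only.
mentions = [
    ("Member A", "funding"),
    ("Member A", "funding"),            # two representatives of the same member
    ("Member A", "climate change"),
    ("Member B", "funding"),
    ("Member C", "population growth"),
]

members_per_category = defaultdict(set)
for member, category in mentions:
    # a set deduplicates, so a category counts at most once per member
    members_per_category[category].add(member)

for category, members in sorted(members_per_category.items()):
    print(f"{category}: identified by {len(members)} Study member(s)")
```

Counting distinct members per category, rather than raw mentions, is what keeps each category's member count at or below the 17 members interviewed even though factors were mentioned 54 times in total.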
Table 9 shows each factor category, each of the original factors that Study members identified, and the number of times the factor was identified by Study members. Appendix III: The 20 Long-Term Targets and Associated Indicators The 2015 Long Island Sound Comprehensive Conservation and Management Plan has four broad themes—clean water and healthy watersheds, thriving habitats and abundant wildlife, sustainable and resilient communities, and sound science and inclusive management—and associated goals. It also has 20 long-term targets with associated indicators (see table 10). Appendix IV: Expert Responses on Whether Indicators Are Accurate, Valid, and Reliable We interviewed a nonprobability sample of 19 individuals with expertise on Long Island Sound to obtain their views on the 20 long-term targets and their associated indicators that the Long Island Sound Study said it plans to use to measure progress toward the goals of the 2015 Long Island Sound Comprehensive Conservation and Management Plan. We asked each expert to review the targets and associated indicators and to "select those that you would be comfortable speaking about based on your knowledge and expertise." We then conducted interviews with each expert and asked, "Is the indicator a valid, accurate, and reliable way to measure progress to achieve the target?" Table 11 shows the experts' responses for each target. Appendix V: Comments from the Environmental Protection Agency Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Susan Iott (Assistant Director), Michelle K. Treistman (Analyst-in-Charge), Chuck Bausell, Mark Braza, Ellen Fried, Benjamin T. Licht, James I. McCully, Katya E. Rodriguez, and Sara Sullivan made key contributions to this report. Related GAO Products Great Lakes Restoration Initiative: Improved Data Collection and Reporting Would Enhance Oversight. GAO-15-526. Washington, D.C.: July 21, 2015. Great Lakes Restoration Initiative: Further Actions Would Result in More Useful Assessments and Help Address Factors That Limit Progress. GAO-13-797. Washington, D.C.: September 27, 2013. Chesapeake Bay: Restoration Effort Needs Common Federal and State Goals and Assessment Approach. GAO-11-802. Washington, D.C.: September 15, 2011. Recent Actions by the Chesapeake Bay Program Are Positive Steps Toward More Effectively Guiding the Restoration Effort, but Additional Steps Are Needed. GAO-08-1131R. Washington, D.C.: August 28, 2008. Coastal Wetlands: Lessons Learned from Past Efforts in Louisiana Could Help Guide Future Restoration and Protection. GAO-08-130. Washington, D.C.: December 14, 2007. South Florida Ecosystem: Restoration Is Moving Forward but Is Facing Significant Delays, Implementation Challenges, and Rising Costs. GAO-07-520. Washington, D.C.: May 31, 2007. Chesapeake Bay Program: Improved Strategies Are Needed to Better Assess, Report, and Manage Restoration Progress. GAO-06-96. Washington, D.C.: October 28, 2005. Great Lakes: Organizational Leadership and Restoration Goals Need to Be Better Defined for Monitoring Restoration Progress. GAO-04-1024. Washington, D.C.: September 28, 2004. Great Lakes: An Overall Strategy and Indicators for Measuring Progress Are Needed to Better Achieve Restoration Goals. GAO-03-515. Washington, D.C.: April 30, 2003.
Why GAO Did This Study Long Island Sound, an estuary bordered by Connecticut and New York, provides numerous economic and recreational benefits. However, development and pollution have resulted in environmental impacts, such as the degradation of water quality. EPA partnered with both states to create the Study to restore and protect the Sound. The Study developed a comprehensive conservation and management plan in 1994 and updated the plan in 2015. GAO was asked to examine federal efforts to restore the Sound. This report examines, among other objectives, (1) what is known about the progress made toward achieving the 1994 plan, (2) how Study members plan to measure and report on progress toward achieving the 2015 plan, and (3) estimated costs of the restoration. GAO reviewed Study plans, reports, and data. GAO also interviewed 12 Study members—including federal and state agency officials—and representatives of 5 Study work groups about restoration efforts and progress made. What GAO Found The Long Island Sound Study (the Study) is a federal-state partnership formed in 1985 to restore Long Island Sound. The Environmental Protection Agency (EPA) and officials from Connecticut and New York provide oversight for the Study, which includes federal and state agencies, nonprofit organizations, and other groups. GAO found the following: Progress toward 1994 Plan. The Study established an initial plan for the Sound in 1994 and has collected data on certain indicators of the Sound's health and published progress reports on its website. However, the Study has not comprehensively assessed progress against the 1994 plan. In the absence of such an assessment, GAO interviewed Study members, who generally agreed that moderate progress has been made in achieving goals for five of the six problem areas in the 1994 plan. Without a comprehensive assessment, it is not possible to determine the extent to which these views reflect actual progress. Reporting Progress for the 2015 Plan. The Study's 2015 management plan identifies 20 long-term targets and associated numerical indicators that will be used to measure future progress. The Study has also updated the format for pages on its website to provide more consistent progress reports for these targets. However, the reports do not yet fully incorporate leading practices for performance reporting that GAO has previously identified. For example, for 15 of the targets, they do not include evaluations of unmet goals. By ensuring that leading practices are fully incorporated into the Study's performance reporting efforts, EPA can help the Study better assess and report on future progress. Estimating Costs of Restoration. The Study has estimated that the future costs of restoration will be at least $18.9 billion through 2035. However, the current estimates are understated because they do not include the costs of all activities that will be needed to accomplish the 2015 plan, and they do not reflect the uncertainty associated with some of the costs. By capturing the full costs and uncertainties in cost estimates, the Study can provide decision makers with the critical information needed to allocate resources effectively. What GAO Recommends GAO recommends that EPA work with the Study to ensure that it fully incorporates leading practices into its performance reporting efforts and that its cost estimates cover the full range of activities and include analyses of uncertainties.
EPA agreed with GAO's recommendations and highlighted steps the agency will take to meet the recommendations.
Background This section provides an overview of FMD, as well as information on the potential impact of an outbreak in the United States; USDA activities to respond to outbreaks of diseases, including FMD; federal, state, tribal, and industry roles in FMD control; and FMD vaccines. Overview of FMD FMD is a highly contagious viral disease that causes fever and painful lesions on cloven-hoofed animals’ hooves, mouths, and udders (see fig. 1). These debilitating effects, rather than high mortality rates, are responsible for severe productivity losses associated with FMD. The disease generally does not infect humans and is not considered a public health or food safety threat. Young animals may die from the virus, while most adult animals recover. However, livestock infected with FMD have severely diminished meat and milk production. FMD virus can be found in all secretions and excretions from infected animals, including in breath, saliva, milk, urine, feces, and semen, as well as in the fluid from the lesions. Animals can release the virus for up to 4 days before showing visible signs of infection, and FMD can spread from one animal species to another. The virus itself can survive in the environment for many months and can spread when healthy animals come into contact with infected animals or via contaminated vehicles, equipment, clothes, feed, or animal products, as shown in figure 2. The United States has not had an FMD outbreak since 1929, but the disease could be introduced here from countries in Africa, Asia, Eastern Europe, or South America where it is present. The United States is vulnerable to FMD transmission, given the large size and mobility of the U.S. livestock sector. In 2018, the United States had about 94 million head of cattle, 74 million swine, 5 million sheep, and more than 2 million goats. Many of these livestock are concentrated in major livestock- producing states such as Texas and Iowa, but livestock are present in every state. (See figs. 3 and 4 for the populations of cattle and swine by state.) According to USDA documents, a large percentage of livestock in the United States are kept on large farms, ranches, or feedlots (i.e., areas or buildings where livestock are fed and fattened up), some with capacity for 50,000 to 100,000 or more animals. Livestock are transported daily to feeding facilities, markets, slaughter plants, and other farms or ranches. For example, swine are often moved among multiple premises at different stages of their life spans to accommodate their growth in size, among other things. According to the swine industry, approximately 1 million swine are on the road every day in transit to various stages of the production process. Potential Impact of FMD Outbreak An FMD outbreak in the United States could have serious economic consequences. A 2001 outbreak of FMD in the United Kingdom, for example, resulted in the killing of more than 6 million animals, with direct costs of more than $3 billion to the public sector and more than $5 billion to the private sector. The extent of economic damage in the United States would depend primarily on the duration and geographic extent of the outbreak, the extent of trade disruptions, and how consumers reacted to the disease and associated control measures, according to USDA. In a large and long-lasting outbreak, control measures such as killing animals and halting the transportation of animals could cause significant losses for livestock operations. 
In addition, trade disruptions could have an enormous impact because U.S. exports of livestock, meat, and dairy products—together valued at more than $19 billion in 2017 based on estimates from the U.S. Meat Export Federation and the U.S. Dairy Export Council—would likely stop or be sharply reduced. In addition, domestic consumers might be reluctant to purchase meat and animal products such as milk during an FMD outbreak, even though the products would be safe for people to consume, according to USDA. USDA Activities to Respond to Outbreaks Partly to protect the economic interests of the U.S. livestock industry, the Animal Health Protection Act authorizes USDA to detect, control, and eradicate diseases in livestock. USDA's Animal and Plant Health Inspection Service (APHIS) is the lead agency for responding to outbreaks of foreign animal diseases, including FMD. According to APHIS, in responding to an outbreak of FMD or any foreign animal disease, APHIS, in coordination with state and industry partners, would conduct the following activities, among others:
Surveillance. Observing animals for visible signs of disease and analyzing data on locations and numbers of disease cases to detect premises with the disease, determine the size and extent of an outbreak, and determine whether outbreak control measures are working.
Epidemiologic tracing. Gathering and analyzing data on cases of a disease, premises with such cases, movement of infected animals, and their potential contact with uninfected animals to locate other animals or premises with the disease, understand the outbreak's rate and direction of spread, and investigate the source of the outbreak.
Diagnostic testing. Conducting approved and validated assessments of samples taken from animals to identify infected animals or to demonstrate that healthy animals are free of disease.
Applying quarantines and stop-movement orders. Restricting the movement of infected or potentially infected animals, animal products, and contaminated items to prevent the virus from spreading to healthy animals.
Employing biosecurity measures. Taking steps, such as cleaning and disinfecting trucks that travel between premises, to contain the virus on infected premises and prevent it from spreading via objects or equipment that can carry infection. Biosecurity measures, which help minimize disease spread, include placing signs indicating precautions personnel and visitors must follow; establishing sign-in procedures at entry points; removing dirt from boots and disinfecting them prior to entering a facility; using disposable personal protective equipment, such as Tyvek suits, gloves, masks, and boots, when entering premises; disposing of contaminated items properly; designating "clean" and "dirty" storage areas in vehicles; and controlling movement on and off premises.
Stamping out and vaccination. Killing infected animals and vaccinating uninfected animals—for example, in buffer zones around infected premises—to limit the spread of the virus.
Compensating owners. Paying owners fair market value for animals and equipment that the government determines must be destroyed to limit disease spread.
To help prepare for a potential FMD outbreak, APHIS and its partners conduct preparedness exercises in which officials practice responding to simulated FMD outbreaks. Such exercises range from small-scale, narrowly scoped exercises to full-scale, broadly scoped exercises.
For example, some exercises focus on specific response tasks such as electronic messaging between laboratories or shipping response supplies to the field, and involve relatively few people for less than a day. Other exercises simulate a wide range of response activities that APHIS and its partners would use in an FMD outbreak, involve dozens of people from different agencies and industry organizations in locations across the country, and last for multiple days. Multiple units within APHIS carry out these preparedness and response activities at the agency's headquarters in Maryland; field offices in 27 states and Puerto Rico; and the National Veterinary Services Laboratories in Ames, Iowa, and on Plum Island, New York. APHIS's Foreign Animal Disease Diagnostic Laboratory on Plum Island, New York, develops and performs diagnostic tests for foreign animal diseases, including FMD. Federal, State, Tribal, and Industry Roles Related to FMD Control APHIS also works with federal agencies within and outside of USDA, along with states, tribes, and academic and industry partners—all of which have roles related to FMD control, as discussed below.
USDA's Food Safety and Inspection Service is responsible for the safety of meat, poultry, and egg products. Agency officials assigned to slaughter establishments examine animals before processing to look for visible symptoms of FMD, among other things.
USDA's Agricultural Research Service conducts research on agricultural problems of high national priority, including the FMD virus and FMD vaccine.
USDA's National Institute of Food and Agriculture invests in and conducts agricultural research, education, and extension to help solve national challenges in agriculture, food, the environment, and communities. The agency has funded modeling of FMD spread and research on potential economic impacts.
DHS has funded research on FMD vaccine and development of response decision tools, training, and equipment; sponsored preparedness exercises; and developed emergency plans, among other things. In an FMD outbreak, DHS may assume the lead for coordination of federal resources if the Secretary of Agriculture requests assistance from DHS. The Secretary of Homeland Security, in coordination with the Secretaries of Agriculture, Health and Human Services, the Attorney General, and the Administrator of the Environmental Protection Agency, is to ensure that the combined federal, state, and local response capabilities are adequate to respond quickly and effectively to a major disease outbreak, among other things, affecting the national agriculture or food infrastructure.
The Department of the Interior carries out disease surveillance of wild animals and coordinates surveillance activities with state fish and wildlife agencies, among other things. The Department of the Interior's U.S. Geological Survey conducts research on wildlife diseases, including FMD, and if needed in an FMD outbreak, would administer diagnostic tests for wildlife.
The Federal Bureau of Investigation coordinates the federal investigation of criminal activities through the Joint Terrorism Task Force. If animals, livestock, or poultry are suspected targets of a terrorist attack, or if any evidence suggests a foreign animal disease may have been or could be intentionally introduced, USDA notifies the Federal Bureau of Investigation to investigate.
State governments prepare plans for foreign animal diseases, including FMD; conduct preparedness exercises; and would play a key role in a response effort.
In an FMD outbreak, a state animal health official and an APHIS field official would co-lead initial response efforts. For example, state governments might take immediate actions, such as applying quarantines and stop-movement orders.
Tribal governments, like state governments, would play a key role in initial response efforts and conduct activities similar to those of state governments.
The National Animal Health Laboratory Network is a partnership of 59 federal, state, and university-associated animal health laboratories throughout the United States, of which 45 are approved to administer diagnostic tests for FMD.
Livestock industry organizations support communication and education efforts with their members and the public, participate in FMD preparedness exercises, and have helped develop some FMD planning documents.
FMD Vaccines As part of its response to an FMD outbreak, APHIS may access vaccine through the North American Foot-and-Mouth Disease Vaccine Bank (vaccine bank), which is jointly administered by the United States, Mexico, and Canada. Because finished vaccines have a short shelf life, the vaccine bank manages a supply of vaccine concentrate, which can be stored at extremely cold temperatures for about 5 years. Some of the concentrate is stored at the Foreign Animal Disease Diagnostic Laboratory on Plum Island, New York, and some at the manufacturer's facilities in Lyon, France. During an FMD outbreak, the manufacturer would convert the concentrate into finished vaccine and ship it to the United States. For the concentrate stored in the United States, the vaccine bank would need to first ship it to the manufacturer overseas. APHIS's National Veterinary Stockpile coordinates logistics planning, particularly for catastrophic outbreaks, and would be responsible for delivering the finished vaccine to affected states, according to USDA planning documents. The FMD virus has seven distinct variations, or serotypes, and more than 60 subtypes within the serotypes, according to USDA documents. FMD vaccine should be as closely matched to the outbreak subtype as possible to provide more effective protection, according to USDA officials and a document on FMD vaccination. A vaccine for one FMD subtype may also provide good or partial immunity to other closely related subtypes, but it would not generally protect against other serotypes. The vaccine bank has concentrate for a number of FMD subtypes that pose the greatest risk to North American livestock based on recommendations from the World Reference Laboratory for FMD. We have previously reported on APHIS's management of foreign animal diseases, including FMD. For example, in May 2015, we recommended that USDA assess and address its veterinarian workforce needs for emergency response to an outbreak of an animal disease such as FMD. USDA agreed, in part, with the recommendation, and in 2017 hired additional veterinarians. The agency is currently building a model to develop workforce estimates for a large-scale FMD outbreak, according to agency officials. USDA's Planned Approach Calls for Outbreak-Specific Strategies, Using Overarching Guidance to Implement the Strategies USDA's planned approach for responding to an FMD outbreak relies on several different strategies emphasizing stamping out, vaccination, or both, depending on factors such as the size of the outbreak.
To aid agency officials in implementing the strategies, USDA has developed overarching guidance for responding to animal disease outbreaks and detailed procedures for many response activities. USDA's FMD Response Strategies Emphasize Stamping Out or Vaccination, Depending on Factors Such as Outbreak Size and Resource Availability USDA's APHIS has developed several different, but not mutually exclusive, outbreak response strategies that the agency will consider to control and eradicate FMD in an outbreak as part of its planned approach, according to USDA documents and officials. These strategies rely on stamping out—killing and disposing of—infected and susceptible animals, vaccination of uninfected animals, or both. For strategies involving vaccination, options include killing and disposing of vaccinated animals (vaccinate-to-kill), allowing the animals to be slaughtered and their meat processed (vaccinate-to-slaughter), or allowing the animals to live out their useful lifespan (vaccinate-to-live). Response strategies would likely change as an outbreak unfolds, and might also vary by region or type of animal affected, according to APHIS planning documents. Over time, USDA's FMD planned approach has evolved from relying solely on stamping out to including vaccination strategies as it became apparent that in many potential scenarios, reliance on stamping out alone would not be effective or feasible. Specifically, in 2010, USDA's Foot-and-Mouth Disease Response Plan: The Red Book (Red Book) first stated that APHIS would consider vaccination strategies such as vaccinate-to-slaughter and vaccinate-to-live. In 2014, APHIS updated the Red Book with the addition of a vaccinate-to-kill strategy to better distinguish what would happen to animals if they were not eligible for slaughter. By 2016, USDA had determined that complete stamping out of anything beyond a small FMD outbreak was not a viable, effective, or sustainable response strategy for the United States, according to USDA's FMD vaccination policy. Experiences in preparedness exercises and foreign outbreaks of FMD influenced a shift in USDA's planned approach toward vaccination strategies. In 2010, Japan and South Korea both experienced FMD outbreaks and initially relied on stamping out combined with strict movement restrictions. Japan stamped out about 300,000 cattle and swine, and South Korea stamped out about 150,000 cattle and 3 million swine—a third of the country's total swine population. Despite these efforts, FMD continued to spread in both countries until they implemented vaccination strategies, according to USDA documents. A 2007 FMD preparedness exercise, sponsored by the Texas Animal Health Commission and USDA, found that killing and disposing of infected animals in a livestock-dense area like the Texas panhandle would not be feasible in a timely manner because of the large number of animals on infected premises (e.g., 50,000 to 75,000 head of cattle on large cattle feedlots). USDA learned that having vaccination strategies in place would be necessary to effectively respond to an FMD outbreak. If an FMD outbreak occurred, APHIS would select a response strategy or multiple strategies, or it would modify strategies to achieve its FMD response goals based on the unique circumstances of the outbreak, according to agency planning documents. APHIS would do so in consultation with affected states and tribes, and if the agency chose to use vaccine, states would request it from USDA.
According to agency planning documents we reviewed, APHIS would consider a number of factors when deciding on its approach, including the following:
FMD vaccine availability;
consequences of the outbreak (e.g., trade restrictions or loss of valuable genetic stock);
public acceptance of response strategy or strategies;
scale of the outbreak (i.e., number and size of infected premises);
rate of outbreak spread;
location of initial outbreak (e.g., isolated ranch versus livestock-producing area);
movement of animals (number of locations that infected or potentially infected animals have traveled to or through); and
federal and state resources available to implement response strategies.
Resource needs vary among strategies and generally increase with the scale of an outbreak, according to USDA planning documents. The resources necessary to implement a stamping-out response strategy would include qualified personnel to kill animals in accordance with accepted protocols and appropriate disposal facilities. To implement strategies involving vaccination, APHIS would need a sufficient quantity of vaccine, the resources for distributing and administering the vaccine, and the diagnostic tests necessary to distinguish between vaccinated and infected animals, according to USDA's FMD vaccination policy. If the scale of an outbreak were small, and APHIS had access to sufficient resources, agency officials would likely implement a stamping-out strategy in an attempt to quickly stop the production of virus in infected animals and limit the outbreak's spread, according to agency planning documents. However, these planning documents indicate that if the outbreak grew to a moderate regional, large regional, national, or catastrophic scale, the resources required for killing all infected and potentially infected animals, disposing of carcasses, and paying compensation to livestock owners would quickly multiply; in such cases, APHIS policy calls for strategies focused on vaccination, according to USDA documents. USDA Has Developed a Range of Documents to Guide Its FMD Response Strategies Over time, USDA's APHIS has developed various documents to guide its response to FMD, including overarching guidance for responding to FMD and other foreign animal diseases, procedures with in-depth operational details, and plans to secure the nation's food supply. To aid agency officials in implementing FMD response strategies broadly, APHIS has developed FMD response plans and guidance for responding to foreign animal disease outbreaks more generally. For example, the Red Book describes USDA's FMD response strategies; identifies the capabilities needed to respond to an FMD outbreak; and provides guidance on the critical activities required during the response, including time frames for these activities. The Red Book is intended for responders at all levels of government and industry partners. For example, if a state official or a livestock owner wanted to know the steps to test and confirm a positive case of FMD, the Red Book explains the process and has a flowchart to illustrate the steps. APHIS also has developed response manuals that provide guidance relevant to foreign animal disease outbreaks, including FMD. For example, a manual on roles and coordination provides an overview of USDA's framework for incident management, funding, communication strategies, relationships, and authorities during a foreign animal disease outbreak, including an FMD outbreak.
APHIS also has produced ready reference guides that condense guidance material from these broader documents into short summary documents for training and education purposes. In addition, APHIS has developed standard operating procedures (SOP) for many response activities. Some SOPs are specific to an FMD outbreak, and others provide more general instruction on activities to respond to foreign animal diseases. The FMD biosecurity SOP, for example, describes steps responders at all levels of government and industry partners can take to help prevent the spread of the virus, such as protocols for putting on and taking off personal protective equipment (e.g., coverall suits, boots, and gloves); standards for separating "clean" and "dirty" zones in vehicles and on premises; and instructions for cleaning and disinfecting vehicles before arrival at and after departure from different premises. Many of the more general SOPs have proven useful during outbreaks of other animal diseases and exercises simulating FMD outbreaks, according to APHIS and state government officials, and APHIS has revised them to incorporate lessons learned. For example, one state animal health official said that during the 2014 avian influenza outbreak, the SOP for disposing of poultry carcasses through composting was initially insufficient because the poultry industry had not previously been composting in all states. To improve consistency across states, APHIS updated protocols during the outbreak and created composting protocols for avian influenza-infected flocks and livestock to supplement the agency's disposal SOP, which addresses carcass disposal for foreign animal diseases generally. These composting protocols expanded on and clarified guidance to be used in subsequent outbreaks. In addition, APHIS held training on composting procedures for birds and on large animal composting, which could be part of an FMD response. USDA, in coordination with industry, state, federal, and academic representatives, has also developed supply plans to secure the nation's food supply and keep businesses operating during an FMD outbreak while managing the risk of spreading the virus, which would decrease the economic impact of an outbreak. To date, USDA and its industry and university partners have developed Secure Milk Supply and Secure Pork Supply plans and have partially completed a Secure Beef Supply plan. These plans guide industry on managing uninfected premises and uninfected animals during an FMD or other foreign animal disease outbreak. For example, the Secure Milk Supply plan has guidance on what producers can do to continue moving shipments of milk during an outbreak, including how to implement enhanced biosecurity plans to prevent the spread of FMD to their facilities. The sheep industry is currently developing its own secure food and wool supply plan, according to industry representatives. USDA Would Likely Face Significant Challenges in Pursuing Its FMD Response Goals, Particularly Regarding Vaccination USDA would likely face significant challenges in pursuing its FMD response goals of detecting, controlling, and containing FMD as quickly as possible; eradicating FMD using strategies that seek to stabilize animal agriculture industries and the economy; and facilitating continuity of commerce in uninfected animals. We identified 11 challenge areas, based on our review of USDA documents, interviews with agency officials and others with expertise in FMD, and 29 responses to our questionnaire.
A majority of respondents indicated that in 10 of the 11 areas USDA would face challenges that are significant—that is, important enough to be worthy of USDA action. (See app. I, fig. 7, for a summary of the responses.) For the 11th area, which is communication and coordination, opinions were split on whether the area would present significant challenges. The 11 challenge areas, which sometimes overlap or fall outside of USDA's direct control, are described below. Examples of actions USDA is taking to address these challenges are described later in this report. Surveillance USDA would likely face surveillance challenges that could delay detection of the first cases in an FMD outbreak. A majority (22 of 29) of respondents to our questionnaire indicated that USDA would face significant challenges in this area. FMD can spread without detection for the following reasons: there is no active surveillance for FMD; animals may not have visible signs until up to 4 days after becoming infected; signs can be difficult to notice in some species; and infected wild animals could go undetected and spread the virus. For initial detection of an FMD outbreak, USDA relies on passive surveillance, waiting for producers or veterinarians to notice and report visible signs. In contrast, for initial detection of other diseases, such as bovine spongiform encephalopathy (commonly known as mad cow disease), USDA has active surveillance programs in which animals are routinely tested regardless of visible signs. According to USDA officials, the cost and resources required to conduct active surveillance for initial detection of an FMD outbreak would not be justified because the United States has not had an FMD outbreak for decades and there is a risk that false positives could create unnecessary disruptions. However, the officials said the agency would likely use active surveillance during an outbreak. Passive surveillance, however, may not allow for timely detection of the initial cases of FMD, particularly in sheep. FMD infection in sheep often causes only mild signs or symptoms, such as an elevated temperature or loose stool, and in some cases will not cause any overt signs or symptoms at all, even though the animal may be spreading the virus, according to representatives of the sheep industry. Therefore, an FMD outbreak could become widespread before USDA detects the first cases. Even if responders are able to detect FMD in domesticated animals before an outbreak becomes widespread, wild animals may become infected and spread the virus, posing additional challenges for USDA and its partners. For example, the U.S. population of feral swine, which are susceptible to FMD, is estimated at 6 million and is rapidly expanding, according to APHIS. Detecting and controlling infected wild animals could be extremely difficult, according to agency officials, and if not controlled, these populations could serve as carriers for the disease. In addition, limitations in diagnostic capabilities, discussed below, could hamper the availability of data needed for surveillance, such as accurate information on new cases of FMD. Diagnostic Capabilities USDA would likely face challenges related to its capability to diagnose FMD. Such challenges include the lack of validated population-level diagnostic tests and potentially insufficient resources to collect samples and perform diagnostic testing in a large outbreak. A majority (24 of 29) of respondents to our questionnaire indicated that USDA would face significant challenges in this area.
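Before turning to the individual challenges, the value of a validated population-level test can be seen with simple arithmetic. The sketch below uses the two figures from the 2017 USDA bulk milk study discussed below; the per-sample coverage and percentage reduction are our own derivations, not USDA figures:

```python
# Rough arithmetic sketch of pooled (population-level) testing, using the
# figures cited from USDA's 2017 bulk milk study; the derived values below
# are illustrative calculations, not USDA estimates.
individual_tests = 35_000  # individual-animal tests needed without pooling
bulk_tests = 720           # bulk milk tests the study found could replace them

animals_per_bulk_sample = individual_tests / bulk_tests
reduction = 1 - bulk_tests / individual_tests

print(f"One bulk milk sample stands in for ~{animals_per_bulk_sample:.0f} "
      f"individual tests, a {reduction:.1%} reduction in testing workload")
```

In other words, each bulk tank sample would cover on the order of 50 animals, which is why the lack of a validated population-level test looms large in a fast-moving outbreak.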
Currently, during an FMD outbreak, USDA would rely on individual animal testing, given that it has not validated any diagnostic tests that can be used for a group or population of animals, according to USDA's surveillance SOP. If an FMD outbreak expands, the ability to test a large number of animals quickly with minimal resources would be useful for USDA. In a 2017 study of the potential uses of a bulk milk test for FMD in dairy cattle, for example, USDA found that 720 bulk milk tests could replace over 35,000 individual animal tests with the same level of confidence in disease status. However, the study identifies additional work needed to implement bulk milk tests. USDA and state officials investigate suspected cases of FMD on previously uninfected premises, according to USDA documents. To do so, USDA or state officials travel to the suspected premises—sometimes over long distances—collect samples from the animal or animals, and send them to a qualified laboratory for diagnostic testing. During an outbreak, massive amounts of diagnostic testing may be needed, straining the capacity of federal and state laboratories that are qualified to investigate suspected cases of FMD and potentially causing delays in detecting infected premises, according to both an after-action report for a preparedness exercise and agency officials. In addition, USDA officials we interviewed expressed concern that diagnostic kits used for these individual animal tests would be in short supply during an outbreak and said that they do not currently know how much time it would take for manufacturers to produce more. In the event of a large FMD outbreak, delays in getting diagnostic results could slow USDA's ability to detect, control, and contain an outbreak. Information Management USDA would likely face challenges in the area of information management during an outbreak, including incompatible data systems at the state and federal levels or between diagnostic laboratories and USDA, as well as responders who lack familiarity with USDA data systems. A majority (20 of 29) of respondents to our questionnaire indicated that USDA would face significant challenges in this area. USDA and state data systems track information on registered livestock premises and animals. In addition, USDA has an emergency response database for collecting and analyzing data on disease outbreaks and managing response resources. However, state data systems cannot always communicate directly with USDA's data systems because they use different software, according to two state animal health officials. Such impediments to communication could delay information sharing about the location of infected and susceptible animals. One industry representative said that such delays could prolong decisions about permits for uninfected animals to move, disrupting industries' continuity of business. According to an academic researcher, interruptions in movement of animals could cause processing facilities to either close, operate at a diminished capacity, or be overwhelmed by a backlog of animals once movement is restarted, leading to animal welfare concerns. These disruptions could present challenges for USDA to facilitate continuity of commerce in uninfected animals, one of its response goals. USDA's ability to control an outbreak could also be impaired if responders lack familiarity with USDA data systems.
For example, according to a USDA after-action report, during the 2014 avian influenza outbreak, some responders were unfamiliar with USDA's system for entering outbreak response information, resulting in incorrect usage or underutilization of the system. As a result, USDA's overall response was slower than it would have been if timely information had been available. Animal Traceability USDA would likely face challenges related to the traceability of animals (i.e., the ability to trace their locations and movements) after an outbreak was detected. We found that these challenges result from insufficient use of identification numbers for livestock premises (such as farms and ranches) and individual animals to enable tracing of infected, exposed, and susceptible animals, and from identification numbers that cannot be easily read (e.g., because they are not electronic). A majority (25 of 29) of respondents to our questionnaire indicated that USDA would face significant challenges in this area. In an outbreak, responders would use premises and animal identification numbers, if available, to trace the location and movements of infected animals to identify other animals that may have been exposed. They would also use the identification numbers to locate all susceptible animals in the region, in order to notify owners about the outbreak and any response measures in place, such as stop-movement orders. These activities would be hampered without the identification numbers. For example, Iowa and Texas regulations do not require producers to register all of their animals with the state. Also, record keeping varies at individual farms and ranches, where some producers have electronic records, but others have no written records or rely on hand-written paper documents, according to USDA documents. Searching through records by hand at individual farms could take days rather than the hours that it would take if the records were electronic, according to a USDA planning document. Without timely and accurate tracing through the use of premises and animal identification numbers, USDA may face challenges controlling and containing an FMD outbreak and facilitating continuity of commerce in uninfected animals. In addition, some animals have identification numbers on ear tags that must be read visually, which could slow USDA's efforts to control and contain an outbreak. In an outbreak, responders would need to inspect animals with such ear tags to manually read and record the identification numbers for individual animals. In contrast, for animals with electronic tags, responders could use electronic readers, which can accurately read identification numbers for a group of animals from a distance of up to 12 feet, according to a 2016 USDA study on electronic identification for livestock. One industry representative said that the beef cattle industry has not widely implemented electronic identification because it is difficult for many operators to justify the added cost of purchasing and attaching an electronic tag for each animal. Biosecurity In an FMD outbreak, USDA would likely face biosecurity challenges, including lack of sufficient biosecurity on some premises, difficulty in implementing biosecurity measures for certain species, and lack of documentation (such as a written plan) specifying what measures are currently in place. A majority (20 of 29) of respondents to our questionnaire indicated that USDA would face significant challenges in this area.
If sufficient biosecurity measures are not consistently in place on farms, ranches, and feedlots, people and vehicles may inadvertently spread the FMD virus when they travel among premises, impeding USDA's ability to control and contain an outbreak. For example, during the 2001 FMD outbreak in the United Kingdom, poor biosecurity and livestock owners' movements between scattered farms led to the introduction of FMD in previously uninfected areas, according to a 2002 report by the United Kingdom's National Audit Office. Some livestock owners have not implemented extensive biosecurity measures on their premises, in part because they have not experienced a recent animal disease outbreak and measures may be difficult or expensive to implement, according to an industry representative. In addition, it may be difficult to implement biosecurity measures for certain species. For example, cattle feedlots operate outdoors and may have unrestricted points of entry and exit, so it can be more difficult and costly to control access and implement other biosecurity measures. In addition, even if producers have biosecurity measures in place, these measures may not be sufficiently documented to facilitate continuity of commerce in uninfected animals. According to USDA guidance documents, during an FMD outbreak, premises in areas with movement restrictions will be required to obtain permits to move any animals or animal products. To obtain such a permit, producers must show that they are not contributing to the spread of disease or putting their animals at risk of exposure, and producers without documented biosecurity plans may face delays moving their animals. According to swine industry representatives, even swine farms with biosecurity procedures do not always document such procedures or the steps they have taken. Depopulation USDA would likely face depopulation challenges during an FMD outbreak, including limited capability for killing large numbers of animals in a timely manner and difficulties owing to the large size of some animals affected by FMD. A majority (22 of 29) of respondents to our questionnaire indicated that USDA would face significant challenges in this area. For example, USDA officials said killing animals in large feedlots—which can hold 50,000 or more animals—would quickly overwhelm resources, such as the staff and equipment required to kill animals. USDA policy calls for depopulating infected premises within 24 hours, but this may not be feasible on large livestock operations because the animals have to be killed individually, which would be time-consuming, according to an industry representative. If infected premises are not quickly depopulated, animals will continue producing the virus and increase the risk of infecting animals on additional premises, hampering USDA's ability to control and contain an outbreak. Rapid depopulation of infected swine is particularly critical to containing the spread of an outbreak because swine are known as amplifiers of FMD virus, producing and excreting 3,000 times more virus than cattle or sheep, according to USDA documents. Carcass Disposal USDA would likely face disposal challenges during an FMD outbreak, including the feasibility and logistics of disposing of a large number of animal carcasses, public concern about disposal options, and the environmental impacts of disposal. A majority (25 of 29) of respondents to our questionnaire indicated that USDA would face significant challenges in this area.
In a large FMD outbreak, millions of cattle could be affected. It is possible that the FMD virus can survive for several months on a frozen carcass, according to USDA documents, so if such carcasses are not disposed of properly, they could pose a risk for spreading FMD, hampering USDA's efforts to control and contain an outbreak. Disposing of the carcasses of a 50,000-head herd of cattle from a large feedlot would be a massive effort: the total weight for disposal could be as much as 30,000 tons, or about 1,500 dump truck loads to move all the animals to disposal sites, according to an industry representative. One state animal health official stated that disposal of one or two herds may be possible, but if an outbreak were more widespread, the state would quickly run out of options. In addition, certain disposal strategies, such as incinerating large piles of carcasses, may cause a negative public reaction, according to an industry representative, USDA's disposal SOP, and state animal health officials. Figure 5 illustrates carcass disposal during the 2001 FMD outbreak in the United Kingdom, where the government implemented a policy of stamping out all susceptible animals within 3 kilometers of known FMD cases. In reaction to the policy, the public staged protests, and businesses in rural areas lost customers who stayed away because of the striking images in the media, according to a 2002 report by the University of Newcastle. Finally, carcass disposal can create environmental impacts, such as when a burial site contaminates the groundwater or incineration contaminates the air. In general, states regulate disposal, including such things as the timing (e.g., within 24 hours of an animal's death) and the method of disposal (e.g., prohibiting outdoor incineration or specifying that up to 7 cattle may be buried per acre per year). In an FMD outbreak, large numbers of carcasses could make it difficult to comply with such regulations.
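As a rough arithmetic cross-check of the feedlot disposal figures cited above (the per-animal and per-load weights below are back-calculated from the report's numbers, not separately sourced):

```python
head = 50_000        # cattle in a large feedlot herd
total_tons = 30_000  # total carcass weight cited for disposal
truck_loads = 1_500  # dump truck loads cited

print(total_tons / head)         # 0.6 tons (about 1,200 pounds) per animal
print(total_tons / truck_loads)  # 20 tons hauled per truck load
```

Both implied figures are plausible for finished cattle and standard dump trucks, which suggests the cited totals are internally consistent.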
Resources
USDA would likely face resource challenges in pursuing its FMD response goals, including insufficient numbers of incident responders to effectively implement USDA strategies in a medium or large outbreak, as well as insufficient resources devoted to preparedness planning in some states. A majority (23 of 29) of respondents to our questionnaire indicated that USDA would face significant challenges in this area. During the 2014 avian influenza outbreak, there were difficulties quickly providing response resources, such as personnel and equipment, to rapidly stamp out affected flocks, according to a USDA after-action report. According to an academic researcher, an FMD outbreak would be significantly more difficult to handle than recent avian influenza outbreaks. One state official noted that in his state there is not enough of a workforce to adequately respond to an outbreak, and there is no assigned workforce at the local level. For example, this official noted that his state employed only two veterinarians and a few animal health technicians to collect samples for testing in the event of an FMD outbreak. Other state animal health officials expressed concern that states and counties will have difficulty fielding adequate workforces to inspect animal transport vehicles and implement stop-movement orders. Insufficient preparedness planning in some states could also hamper response efforts, according to a response to our questionnaire from an academic researcher with expertise in FMD preparedness. Some states have not allocated resources to develop FMD response plans, including, for example, the conditions that would trigger a stop-movement order. States typically control intrastate movement under the state's authority, and if states delay issuing stop-movement orders, it may be more difficult for USDA to control and contain an outbreak.
Communication and Coordination
Communication and coordination may be an area where USDA could face challenges during an FMD outbreak because of ineffective external or internal communications and unclear roles and responsibilities. Responses to our questionnaire in all categories (federal and state government officials, industry representatives, and academic researchers) were mixed about whether communication and coordination was an area with significant challenges. Specifically, 11 respondents said it was an area with significant challenges, 12 said it was not, and 6 were unsure. One industry respondent who said that the area was not a challenge cited a team of industry representatives that is working with USDA and states to prepare for an FMD outbreak. On the other hand, during a 2016 FMD preparedness exercise in Texas, coordination between USDA and other participants was at times inadequate. For example, during the exercise USDA and the Texas Animal Health Commission shared leadership of the response effort, and some respondents cited frustration with this top-down leadership structure because they were accustomed to emergency management practices and protocols designed for incidents such as natural disaster response efforts, which are generally initiated at the local level. Participants commented that they were confused about who did what and said that coordination needs to be improved between USDA and local governments, according to an after-action report. Also, communication across participating agencies broke down. For example, information from USDA on stop-movement orders, the size of the quarantine zone, and the number of sites quarantined did not reach all stakeholders in a timely manner, according to an after-action report.
Appraisal and Compensation
Compensating livestock owners for animals or equipment that the government determines must be destroyed to limit the spread of FMD would likely pose various challenges for the agency, according to USDA and state government officials. USDA would provide the owners with up to 100 percent of the expenses of purchase, destruction, and disposition of animals or materials required to be destroyed, based on the agency's appraisal of the fair market value. A majority (19 of 29) of respondents to our questionnaire indicated that USDA would face significant challenges in this area. Such challenges include uncertainties about fair appraisal methods (especially when an outbreak has caused livestock prices to decline), owners resisting killing their animals if compensation rates are too low, and the potentially massive scale of compensation payments. According to USDA economists, if trade restrictions were imposed during an FMD outbreak, the fair market value of animals and their products would likely drop as a result of oversupply. USDA's response to the outbreak could be slowed if producers brought legal challenges to stop the stamping out of their herds because they were not satisfied with compensation levels, a scenario that took place in a 2018 USDA-led exercise simulating the first few days of an FMD outbreak.
Moreover, in a widespread FMD outbreak, the scale of federal compensation payments could be substantial. For example, in the 2001 United Kingdom FMD outbreak, compensation costs were estimated at over $1 billion for the killing of about 6 million animals. Given the larger size of the livestock industry in the United States, federal compensation costs could be much higher, depending on the number of animals killed as part of the response.
Vaccination
USDA would likely face challenges related to vaccination, an area of particular importance given vaccination's central role in USDA's strategies for pursuing its response goals. All 29 respondents to our questionnaire agreed that the challenges USDA faces related to vaccination are significant. In particular, USDA does not have access to sufficient vaccine to achieve its response goals under many potential outbreak scenarios, and there is no consensus about how to allocate the limited supply, according to USDA officials and documents. Other challenges in this area relate to the timing and logistics of obtaining, distributing, and administering vaccine and to scientific, procedural, and infrastructure issues in vaccine production.
Limited Supplies of Vaccine
Supplies of FMD vaccine concentrate in the vaccine bank may be sufficient to help control and eradicate a small, localized outbreak, but it is unlikely that they would be sufficient to stop a larger outbreak, according to USDA planning documents and officials. With a vaccine that is matched to the appropriate FMD subtype, a single dose can protect cattle for 6 months, and two doses are required to provide the same protection to swine. APHIS's 2016 FMD vaccination policy states that 25 million doses for each of 10 subtypes of the virus is an appropriate minimum target to have available. However, the United States currently has access to only 1.75 million doses of each subtype available in the vaccine bank, according to USDA documents. In the United States, there are 24 states whose livestock populations exceed the doses available in the vaccine bank, according to USDA documents. In a 2016 report to Congress, USDA stated that the cost to reach its target of 25 million doses would be about $125 million, which would be about 10 percent of APHIS's budgetary resources in fiscal year 2016. In addition, because the vaccine concentrate has a 5-year shelf life, USDA would incur costs to routinely replace the supply of concentrate, according to agency officials. The Agriculture Improvement Act of 2018 contains a provision that directs the Secretary of Agriculture to establish a national animal vaccine and veterinary countermeasures bank, and to prioritize the acquisition and maintenance of sufficient quantities of FMD vaccine and accompanying diagnostic products. The need for additional FMD vaccine was reinforced by a 2016 survey of states by USDA and Iowa State University. On the basis of responses from 32 state animal health officials, the authors estimated that in a widespread or national outbreak, states would plan to use on average 4.2 million doses during the first 14 weeks of the outbreak. Based on these estimates, a vaccine request from a single state could greatly exceed the 1.75 million doses available per subtype in the vaccine bank's supply.
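A quick arithmetic sketch puts these supply figures in perspective. All inputs come from the report text above and below (the Texas and Iowa herd sizes are discussed next); the comparison itself is illustrative only:

```python
supply = 1_750_000             # doses available per subtype in the vaccine bank
target = 25_000_000            # APHIS's stated minimum target per subtype
avg_state_request = 4_200_000  # average planned state use, first 14 weeks

print(supply / target)             # 0.07 -> current supply is 7% of the target
print(avg_state_request / supply)  # 2.4 -> one state's planned use is ~2.4x the supply

# Coverage in the states with the largest cattle and swine populations.
texas_cattle = 12_300_000  # one dose per head protects cattle for 6 months
iowa_swine = 22_800_000    # swine require two doses per head
print(supply / texas_cattle)      # ~0.14 -> about 14 percent of Texas cattle
print((supply / 2) / iowa_swine)  # ~0.04 -> about 4 percent of Iowa swine
```

The last two results reproduce the 14 percent and 4 percent coverage figures discussed below and shown in figure 6.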
Moreover, if an FMD outbreak occurred in Texas or Iowa, the states with the largest cattle and swine populations, respectively, the available vaccine supply would provide a single dose for about 14 percent of Texas's 12.3 million cattle or the required two doses for about 4 percent of Iowa's 22.8 million swine. Texas's and Iowa's cattle and swine populations together make up about 24 percent of the combined population of cattle and swine nationwide. Figure 6 illustrates the vaccine doses needed to protect cattle and swine in Texas and Iowa compared with the currently available FMD vaccine bank supply of 1.75 million doses per subtype. In addition, because of the large number of FMD subtypes present around the world, and because the FMD virus is constantly mutating, it is possible that an FMD subtype could be introduced in the United States that is not covered by vaccines currently in the vaccine bank. According to a representative from an FMD vaccine manufacturer, producing a vaccine for a new subtype of FMD could take from 6 to 18 months, depending on whether the subtype was known and other factors.
Lack of Consensus on Vaccine Allocation
Because of the limited supply of vaccine and the potentially high demand for it, USDA would likely face the challenge of deciding how to allocate it in an FMD outbreak. In a 2016 survey of 13 industry veterinarians, there was no consensus within the beef, dairy, and swine industries about priorities for the vaccine. Specifically, USDA and Iowa State University asked the veterinarians to rank the importance of vaccinating various populations (e.g., bull studs, lactating cows, and boar studs) within the beef, dairy, and swine industries, assuming there was only enough vaccine to vaccinate 25 to 50 percent of animals in a specified area. The responses varied widely, with high and low rankings for nearly every population of animals.
Timing and Logistics
The timing and logistics of obtaining, distributing, and administering the FMD vaccine could also pose challenges. In particular, the time needed to reformulate the banked vaccine concentrate into finished vaccine would pose challenges for USDA in an outbreak, according to respondents to our questionnaire. In addition, in March 2005, we found that USDA would not be able to deploy vaccines rapidly enough to contain a widespread FMD outbreak. After USDA requests FMD vaccine from the vaccine bank, vaccine manufacturers could take from 4 to 13 days to finish and ship all of the requested vaccine to the United States, during which time the virus could spread within the livestock population, according to USDA documents. If the vaccine bank's supply of concentrate is exhausted during an outbreak and more is needed, manufacturers may take several months to produce it, according to a vaccine manufacturer. After obtaining the vaccine, USDA would distribute it to affected states, and the states would distribute it to veterinarians, producers, or others who would be responsible for administering vaccine, according to USDA and state FMD vaccination documents. Many states do not currently have vaccination plans in place and may not have identified the warehousing locations, staff needs, and tracking required to efficiently distribute FMD vaccine, according to agency and state government officials, which could slow USDA's efforts to contain and control an outbreak. States with vaccination plans may be able to more quickly and effectively distribute and administer FMD vaccine during an outbreak.
For example, California has a vaccination plan that details how it would receive, distribute, and administer FMD vaccine while maintaining the appropriate temperatures and documentation. The plan includes details such as the supplies needed for administering FMD vaccine to cattle.
Scientific, Procedural, and Infrastructure Issues
USDA faces challenges in obtaining vaccine and using it in a response effort because of scientific, procedural, and infrastructure issues related to the vaccine and its production. There are very few vaccine manufacturers in the world with the capacity to produce most of the FMD vaccine subtypes and meet the quality standards required by the United States, according to agency officials. Further, there is currently no production capacity for FMD vaccine in the United States because dedicated infrastructure is not in place to produce vaccines without live virus. There is a statutory prohibition against working with live FMD virus on the U.S. mainland, absent a permit granted by the Secretary of Agriculture, and live virus is needed to produce conventional vaccines. To work within this constraint, USDA's Agricultural Research Service (ARS) and DHS developed new technologies to produce vaccine using modified versions of the virus that are unable to cause or transmit disease. The agencies transferred these technologies to vaccine companies that are investing in their development, according to USDA officials. In 2018, the Secretary of Agriculture announced that vaccine companies could apply for permits to work with a specific modified, noninfectious version of the FMD virus on the mainland. One company has exclusive rights to use this modified version, which was developed and patented by ARS. The company plans to produce FMD vaccine in the United States, but it could take several years to license the initial product, complete the necessary permitting procedures, and build manufacturing infrastructure, according to USDA documents and a company official. Using FMD vaccine to respond to an outbreak presents additional challenges related to the limitations of FMD vaccines. Specifically, animals may take up to 28 days after vaccination to develop protective immunity to FMD, depending on the species, potency of vaccine, and other factors. Even after 28 days, some vaccinated animals may not be fully immune to FMD and may continue spreading the virus despite having no visible signs of infection, according to USDA documents.
USDA Has Identified Actions to Mitigate Challenges in Responding to FMD but Has Not Prioritized or Monitored Their Completion
To mitigate challenges in responding to potential FMD outbreaks, USDA's APHIS has identified corrective actions through preparedness exercises, surveys, and lessons learned in other outbreaks, as called for in its SOPs. However, APHIS generally does not follow its SOPs for prioritizing or monitoring the completion of these actions.
USDA Has Used Preparedness Exercises, Surveys, and Lessons Learned in Other Outbreaks to Identify Actions to Mitigate FMD Challenges
A USDA SOP outlines a process for identifying corrective actions to improve the agency's preparedness for outbreaks of foreign animal diseases. According to the SOP, APHIS is to identify corrective actions after preparedness exercises and animal disease incidents.
Consistent with this SOP, APHIS identifies corrective actions for FMD preparedness through exercises simulating FMD outbreaks, surveys of agency officials and others, and lessons learned from outbreaks of other diseases. More specifically, see the following:
APHIS sponsors FMD preparedness exercises and participates in some such exercises that other federal or state agencies sponsor. After an exercise, the sponsoring agency generally prepares an after-action report that specifies corrective actions, and may include a responsible party and a completion date for each action. APHIS has after-action reports for more than 40 FMD preparedness exercises that it sponsored or participated in from 2007 through 2018, which include corrective actions for USDA and APHIS.
APHIS conducts annual surveys of its staff and others—including state government officials, industry representatives, and academics—to identify corrective actions related to preparedness and response training needs.
APHIS identifies corrective actions for FMD preparedness based on lessons learned after outbreaks of other diseases. For example, some of the actions that APHIS identified after outbreaks of avian influenza, such as improving a database used for emergency response, could also help the agency mitigate challenges it would face in an FMD outbreak, according to agency officials.
APHIS has identified dozens of corrective actions in all 11 of the areas where we identified challenges for USDA in pursuing its FMD response goals. APHIS has taken corrective actions in each area. For example, to help mitigate the challenge of insufficient biosecurity on some premises, the agency partnered with Iowa State University to offer producers across the nation training on developing enhanced biosecurity plans for implementation during a foreign animal disease outbreak. However, APHIS has not yet taken some other corrective actions that it has identified. According to agency officials and experts we interviewed, these corrective actions can help mitigate, but may not completely resolve, the challenges identified. Some challenges may be outside USDA's control to fully resolve. For example, the logistical challenges of carcass disposal could be overwhelming in a large-scale outbreak, which could generate thousands of tons of carcasses. A corrective action calling for training on carcass management may help educate FMD responders about disposal methods or preventing environmental impacts; however, such training may not fully resolve the challenge. Table 1 shows examples of corrective actions identified by USDA in after-action reports, planning documents, other agency documents, or interviews, which the agency has taken or not yet taken for the 11 challenge areas we identified. Some of the corrective actions that USDA has identified and taken relate to the challenge area of vaccination. For example, to help speed access to vaccine, in 2018, the Secretary of Agriculture announced that vaccine companies could apply for permits to enable them to develop and produce certain types of FMD vaccine in the United States in the future, thereby avoiding delays from producing the vaccine overseas and shipping it here. Also, APHIS officials have used an FMD predictive model to evaluate the effectiveness of different vaccination schemes at the state level, and they told us that they plan to conduct a similar analysis at the national level.
The results could help inform USDA's vaccine prioritization decisions in advance of an outbreak, according to the officials. USDA has also begun implementing other corrective actions that have been identified related to FMD vaccination, although more work remains. For example, in February 2009, we recommended—and USDA agreed—that it should detail in a contingency response plan how a response using vaccines would be implemented. Similarly, after-action reports for 2013 and 2016 preparedness exercises highlighted the need for procedures to guide the implementation of FMD vaccination strategies. APHIS has taken or planned several steps to help address this need:
In 2009, APHIS began drafting vaccine implementation procedures but realized that the national procedures needed to be developed in collaboration with states because of variation among states in their predominant industries, agriculture infrastructure, and government resources. When more states have developed vaccination implementation procedures, APHIS may revise and finalize the national procedures originally drafted in 2009, according to agency officials.
APHIS's National Veterinary Stockpile developed plans in 2009 and 2011 outlining how some aspects of a vaccination strategy would be implemented. Specifically, in 2009 it developed a template that states and tribes can use to develop their own plans, and in 2011 it prepared a logistical plan for distributing FMD vaccine to the field. The National Veterinary Stockpile also held preparedness exercises from 2008 to 2018 for states and tribes to practice requesting, receiving, and delivering the vaccine and to obtain information that could help APHIS develop national vaccination procedures.
From 2011 to 2018, APHIS and the California Department of Food and Agriculture worked together to draft detailed procedures for implementing an FMD vaccination strategy in California. The draft procedures and related planning documents are intended to serve as templates to help other states develop such procedures, according to agency officials. APHIS also piloted a workshop on FMD vaccination planning in October 2018 and plans to hold related preparedness exercises with states from 2019 to 2021.
APHIS Does Not Consistently Follow Its Procedures for Prioritizing Corrective Actions and Monitoring Their Completion
Although APHIS has identified dozens of corrective actions for FMD preparedness, it has not consistently followed its SOP for prioritizing all of the actions and monitoring progress in implementing them. Specifically, once corrective actions have been identified, APHIS's SOP calls for prioritizing the actions in an improvement plan and monitoring the actions to track their completion. APHIS has sometimes designated actions related to FMD vaccination as high priority during annual management meetings, but not all corrective actions have been prioritized, according to agency officials. For example, a 2016 corrective action called for USDA to conduct an exercise to explore roles, responsibilities, and activities related to recovery from a large-scale animal disease outbreak. However, as of December 2018, this action has not been prioritized in an improvement plan, according to the after-action report and an agency official. In addition, corrective actions have sometimes been identified multiple times without being tracked to completion.
For example, an after-action report for a 2007 exercise found that a process for making vaccine-allocation decisions was needed and suggested that a vaccine advisory group could assist with doing so. A 2014 after-action report stated that processes governing vaccine prioritization and allocation were not clear and identified a corrective action calling for USDA to develop a federal-level doctrine for vaccine prioritization and allocation. USDA's 2016 FMD vaccination policy states that APHIS, in coordination with state, local, and industry stakeholders, should consider developing processes, procedures, and strategies for prioritizing the use of currently available vaccine in an outbreak. However, APHIS has not developed processes, procedures, or strategies for prioritizing and allocating its supply of FMD vaccine, according to agency officials. The officials said they have not developed such a process because of limited resources and competing priorities. Also, it would require participation from state and industry stakeholders, and given the small quantity of FMD vaccine relative to the large number of susceptible animals in the country, the stakeholders have had little incentive to devote the necessary time to the issue, according to agency officials. More generally, agency officials told us that the agency has not prioritized or monitored completion of some corrective actions because they have been responding to actual outbreaks of animal and plant diseases. They also noted that they have limited resources for FMD preparedness, which may make it difficult for them to complete all of the corrective actions that have been identified. However, for avian influenza preparedness, APHIS compiled and prioritized more than 300 corrective actions in a database and tracked more than 200 of them to completion. Through this process, it completed nearly all of the 111 high-priority actions and over 100 moderate-priority actions, according to its database as of May 2018. For example, after the 2014 avian influenza outbreak, APHIS completed corrective actions that improved its response to a subsequent outbreak in 2016, according to agency documents. The corrective actions addressed such issues as how to quickly depopulate and dispose of infected poultry and efficiently compensate affected producers. APHIS continues to monitor its progress in implementing the remaining corrective actions for that disease, according to agency officials. APHIS's SOP calls for prioritizing corrective actions to identify the most beneficial use of resources. The SOP also calls for monitoring corrective actions to track their completion so that APHIS can improve its response capabilities and correct problems or deficiencies identified in exercises or incidents. Without following its SOP to prioritize corrective actions for FMD preparedness, APHIS cannot ensure that it is allocating its limited resources toward implementing the most beneficial actions. And without following its SOP for monitoring the corrective actions, APHIS cannot ensure that the highest-priority actions are completed.
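The prioritize-and-track pattern that APHIS's SOP calls for, and that the agency applied to its avian influenza corrective actions, can be sketched as a simple data structure. The fields, statuses, and example actions below are illustrative assumptions, not APHIS's actual database schema:

```python
from dataclasses import dataclass

@dataclass
class CorrectiveAction:
    description: str
    priority: str       # assumed levels: "high", "moderate", "low"
    completed: bool = False

actions = [
    CorrectiveAction("Develop vaccine prioritization and allocation process", "high"),
    CorrectiveAction("Hold exercise on recovery roles after a large outbreak", "moderate"),
]

# Monitoring view: open items, highest priority first.
rank = {"high": 0, "moderate": 1, "low": 2}
open_actions = sorted(
    (a for a in actions if not a.completed),
    key=lambda a: rank[a.priority],
)
for a in open_actions:
    print(a.priority, "-", a.description)
```

The point of the sketch is that once actions carry an explicit priority and completion status, reporting on the highest-priority open items becomes a routine query rather than an ad hoc effort.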
Conclusions
APHIS has taken important steps to prepare for an FMD outbreak and to mitigate challenges it may face in responding to one. For example, the agency has developed an extensive collection of strategy and guidance documents, held FMD preparedness exercises to practice response activities, and identified dozens of corrective actions and completed some of these actions. However, APHIS has not yet completed other corrective actions, including actions that have been identified multiple times, such as developing a process for prioritizing and allocating the limited supply of FMD vaccine. APHIS has an SOP for prioritizing and monitoring corrective actions. By following this SOP for avian influenza preparedness, the agency succeeded in prioritizing more than 300 corrective actions and tracking over 200 corrective actions to completion, including nearly all high-priority actions. In contrast, for FMD preparedness, APHIS has not consistently prioritized or monitored the corrective actions it has identified. Without following its SOP to prioritize and monitor corrective actions for FMD preparedness, APHIS cannot ensure that it is allocating its limited resources to the most beneficial actions to prepare for a possible FMD outbreak.
Recommendations for Executive Action
We are making the following two recommendations to USDA:
The Administrator of the Animal and Plant Health Inspection Service should follow the agency's SOP to prioritize corrective actions for FMD preparedness. (Recommendation 1)
The Administrator of the Animal and Plant Health Inspection Service should follow the agency's SOP to monitor progress and track completion of corrective actions for FMD preparedness. (Recommendation 2)
Agency Comments
We provided a draft of this report to USDA and DHS for review and comment. USDA provided comments, reproduced in appendix II, in which it agreed with our recommendations. In addition, USDA and DHS provided technical comments, which we incorporated as appropriate. In response to our recommendations, USDA said that, starting in the second quarter of fiscal year 2019, APHIS will implement the agency's SOP and prioritize corrective actions to be tracked in its corrective actions database, as we recommended. USDA also said that, starting in the third quarter of fiscal year 2019, APHIS will assess and update the items related to FMD in its corrective actions database, as we recommended. In addition, USDA said that APHIS will track accomplishments it makes under a related provision of the Agriculture Improvement Act of 2018. We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or morriss@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
This report (1) describes the U.S. Department of Agriculture's (USDA) planned approach for responding to a foot-and-mouth disease (FMD) outbreak; (2) identifies what challenges, if any, USDA would face in pursuing its FMD response goals; and (3) examines how USDA identifies, prioritizes, and monitors corrective actions to mitigate these challenges. To describe USDA's planned approach for responding to an FMD outbreak, we reviewed relevant legislation and USDA strategy and guidance documents.
We also interviewed officials from USDA's Animal and Plant Health Inspection Service (APHIS) at the agency's headquarters in Riverdale, Maryland; laboratories on Plum Island, New York, and in Ames, Iowa; center for epidemiology and animal health in Fort Collins, Colorado; and center for veterinary biologics in Ames, Iowa; and officials from the Department of Homeland Security (DHS) and the Agricultural Research Service (ARS) at DHS's Plum Island Animal Disease Center on Plum Island, New York. We selected these officials to interview because of their knowledge about USDA's planned approach, their involvement in preparing for an FMD outbreak, and the roles they would play in responding to such an outbreak. To identify what challenges, if any, USDA would face in pursuing its FMD response goals, we first developed a list of potential challenge areas. To develop this list, we reviewed USDA documents, reports about FMD outbreaks in other countries, and after-action reports from 41 preparedness exercises in the United States from 2007 to 2018 in which officials practiced responding to simulated FMD outbreaks and identified emerging challenges. The preparedness exercises included small-scale as well as large-scale ones with a variety of participants, durations, and response activities. We also interviewed APHIS headquarters staff and field staff in Iowa (the state with the most livestock); APHIS and ARS laboratory officials; state animal health officials in California, Colorado, Iowa, and North Carolina; representatives from the beef cattle, dairy cattle, swine, and sheep industries; and academic researchers with expertise in this area. We selected the individuals to interview based on their knowledge about challenges that USDA could face in pursuing its FMD response goals, their central role in preparing for an FMD outbreak, and recommendations from other interviewees, as well as diversity in geographic location. We also visited a swine farm and cattle feedlot in Iowa and interviewed the owners. We selected a swine farm and cattle feedlot to visit because swine and cattle were the livestock industries with the greatest populations of animals in the United States in 2016. We identified a list of 11 potential challenge areas. To confirm the significance of the challenge areas, we used a questionnaire with the list of potential challenge areas. To select the questionnaire recipients, we identified four categories of people who are knowledgeable about challenges that USDA could face in pursuing its FMD response goals, including those who could be involved in a response effort. The four categories are (1) federal government officials, (2) state government officials, (3) livestock industry representatives, and (4) academic researchers with expertise in FMD preparedness. For categories with multiple individuals, we selected individuals to represent relevant units within APHIS, ARS, and DHS (e.g., headquarters; field offices; laboratories; surveillance, preparedness and response services; and science, technology, and analysis services); different livestock industries (beef cattle, dairy cattle, swine, and sheep); and states with relatively high livestock populations. We asked the recipients whether USDA would face a significant challenge in each of the 11 areas and whether they knew of other challenge areas we had not listed. We defined significant to mean a challenge that is sufficiently great or important enough to be worthy of USDA action to address the challenge. We initially sent the questionnaire with potential challenges to 39 recipients. Two federal officials had retired from their positions, so we sent the list to their replacements. Of the 39 recipients, we received responses from 28. We also included an additional response that APHIS provided from an official who we had not initially contacted and who had relevant expertise, for a total of 29 responses. Despite two follow-up attempts, we did not receive responses from 11 recipients, including both recipients from ARS, 5 of the 18 from APHIS, 3 of the 10 state animal health officials, and 1 of the 2 national animal health laboratory network officials (these are affiliated with universities). Figure 7 shows the categories of respondents and their responses in each of the 11 challenge areas. Since we used a nonprobability sample, the results are not generalizable to all government officials, livestock industry officials, or FMD experts, but the responses helped confirm the list of 11 challenge areas and provided illustrative information about each one.
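A short arithmetic check confirms that the response accounting above is internally consistent (the category labels in this sketch are shorthand for the groups named in the text):

```python
sent = 39
received = 28
extra_aphis_response = 1  # respondent APHIS provided who was not initially contacted
nonrespondents = {"ARS": 2, "APHIS": 5, "state animal health": 3, "lab network": 1}

print(received + extra_aphis_response)  # 29 total responses analyzed
print(sum(nonrespondents.values()))     # 11 recipients who did not respond
print(sent - received)                  # 11, matching the nonresponse breakdown
```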
We reviewed challenges related to vaccination for FMD in greater depth than other challenges because of the significant role vaccination could play if reliance solely on stamping out is not feasible. Specifically, we visited DHS's Plum Island Animal Disease Center on Plum Island, New York, where we interviewed officials from USDA's Foreign Animal Disease Diagnostic Laboratory and the Agricultural Research Service, as well as DHS officials, about challenges related to FMD vaccination. We also reviewed agency documents on the topic and interviewed other officials from USDA, the North American Vaccine Bank, universities, states, and industry groups about issues related to FMD vaccination. Further, we interviewed officials from the vaccine company that currently produces the majority of FMD vaccine available for use in the United States and a company that has rights to use a modified version of the FMD virus to produce FMD vaccine in the future. To determine how USDA identifies, prioritizes, and monitors corrective actions to mitigate the challenges, we reviewed APHIS and DHS guidance on evaluation and improvement planning and other agency documents, observed an FMD preparedness exercise, reviewed after-action reports from 41 FMD preparedness exercises conducted from 2007 through 2018, and interviewed USDA officials. We reviewed APHIS's and DHS's procedures for evaluation and improvement planning to understand how APHIS is to identify, prioritize, and monitor corrective actions. To determine whether APHIS was consistently following these procedures, we observed the preparedness exercise at APHIS's Riverdale, Maryland, office; reviewed a preliminary after-action report for that exercise; and reviewed after-action reports for the 41 other preparedness exercises. We interviewed APHIS officials about corrective actions identified in the after-action reports and what steps the agency has taken to prioritize the actions and monitor their progress. We reviewed agency documents about these procedures and about actions USDA has taken and identified but not yet taken to mitigate challenges. To find examples of corrective actions that USDA has identified and taken or not yet taken, we reviewed after-action reports for the 41 preparedness exercises; APHIS's 2018-2020 training and exercise plan for its veterinary services emergency preparedness and response unit; and other agency documents, such as contracts and plans, and interviewed agency officials.
The examples of corrective actions in table 1 are illustrative only and do not include or represent all of the actions that USDA has identified. We sent a draft table of examples to APHIS officials and incorporated their comments as appropriate. We also reviewed a GAO report on USDA's management of highly pathogenic avian influenza (avian influenza) outbreaks; interviewed agency officials; reviewed USDA after-action reports for avian influenza outbreaks; and reviewed USDA's database of related corrective actions to learn how the agency identifies, prioritizes, and monitors actions to mitigate challenges for that disease. To assess the overall reliability of that database before using information from it in our report, we reviewed management controls over the information systems that maintain the data and interviewed USDA officials who manage the database. We determined that the database was sufficiently reliable to describe the contents of the database and general status of corrective actions. We conducted this performance audit from May 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from U.S. Department of Agriculture
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Nico Sloss (Assistant Director), Kevin Bray, Emily Christoff, Mary Denigan-Macauley, Christine Feehan, Jesse Lamarre-Vincent, Cynthia Norris, Anne Rhodes-Kline, and Amber Sinclair made key contributions to this report. Ross Campbell, Barb El Osta, Kathryn Godfrey, Hayden Huang, and Dan Royer also made important contributions to this report.
Related GAO Products
Foot-and-Mouth Disease: USDA's Evaluations of Foreign Animal Health Systems Could Benefit from Better Guidance and Greater Transparency. GAO-17-373. Washington, D.C.: April 28, 2017.
Avian Influenza: USDA Has Taken Actions to Reduce Risks but Needs a Plan to Evaluate Its Efforts. GAO-17-360. Washington, D.C.: April 13, 2017.
Emerging Animal Diseases: Actions Needed to Better Position USDA to Address Future Risks. GAO-16-132. Washington, D.C.: December 15, 2015.
Federal Veterinarians: Efforts Needed to Improve Workforce Planning. GAO-15-495. Washington, D.C.: May 26, 2015.
Homeland Security: An Overall Strategy Is Needed to Strengthen Disease Surveillance in Livestock and Poultry. GAO-13-424. Washington, D.C.: May 21, 2013.
Veterinarian Workforce: Actions Are Needed to Ensure Sufficient Capacity for Protecting Public and Animal Health. GAO-09-178. Washington, D.C.: February 4, 2009.
High-Containment Biosafety Laboratories: DHS Lacks Evidence to Conclude That Foot-and-Mouth Disease Research Can Be Done Safely on the U.S. Mainland. GAO-08-821T. Washington, D.C.: May 22, 2008.
National Animal Identification System: USDA Needs to Resolve Several Key Implementation Issues to Achieve Rapid and Effective Disease Traceback. GAO-07-592. Washington, D.C.: July 6, 2007.
Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007.
Homeland Security: Much Is Being Done to Protect Agriculture from a Terrorist Attack, but Important Challenges Remain. GAO-05-214. Washington, D.C.: March 8, 2005.
Why GAO Did This Study
FMD is a highly contagious viral disease that causes painful lesions on the hooves and mouths of some livestock, making it difficult for them to stand or eat, thus greatly reducing meat and milk production. The United States has not had an FMD outbreak since 1929, but FMD is present in much of the world. An FMD outbreak in the United States could have serious economic impacts, in part because trade partners would likely halt all imports of U.S. livestock and livestock products until the disease was eradicated. These imports were valued at more than $19 billion in 2017. GAO was asked to review USDA's efforts to prepare for an FMD outbreak. This report examines (1) USDA's planned approach for responding to an FMD outbreak; (2) challenges USDA would face in pursuing its response goals; and (3) how USDA identifies, prioritizes, and monitors corrective actions to mitigate the challenges. GAO observed a USDA FMD preparedness exercise; reviewed agency documents and nongeneralizable questionnaire responses from 29 respondents from federal and state government, livestock industries, and universities; and interviewed officials from federal and state governments and representatives of livestock industries and universities.
What GAO Found
The U.S. Department of Agriculture's (USDA) planned approach for responding to an outbreak of foot-and-mouth disease (FMD) includes several strategies. These strategies generally rely on killing infected and susceptible animals, vaccinating uninfected animals, or a combination of both approaches. USDA would implement one or more of the strategies, depending on factors such as the outbreak's size and the resources available, according to agency documents. USDA would likely face significant challenges in pursuing its response goals of detecting, controlling, and containing FMD quickly; eradicating FMD while seeking to stabilize industry and the economy; and facilitating continuity of commerce in uninfected animals. GAO identified challenges in 11 areas—including allocating a limited supply of FMD vaccine—based on its review of USDA documents, responses to GAO's questionnaire, and interviews with agency officials and others with expertise on FMD. According to USDA, the agency may not have a sufficient supply of FMD vaccine to control more than a small outbreak because of limited resources to obtain vaccine. As shown below, the current vaccine supply would be sufficient to protect about 14 percent of Texas's cattle or about 4 percent of Iowa's swine; these states' cattle and swine populations are the nation's largest. The Agriculture Improvement Act of 2018 includes a provision to increase the FMD vaccine supply. USDA has identified dozens of corrective actions to mitigate the challenges of responding to an FMD outbreak, as called for in USDA procedures, but has not prioritized these corrective actions or monitored their completion, as also called for in its procedures. USDA has identified the corrective actions through exercises simulating FMD outbreaks, surveys, and lessons learned from other foreign animal disease outbreaks. However, USDA has not completed all of the corrective actions, including actions related to vaccination. Agency officials stated that they have not completed such corrective actions because they have been responding to outbreaks of other animal diseases and have limited resources.
Without following agency procedures to prioritize and monitor corrective actions, USDA cannot ensure that it is allocating its resources to the most beneficial actions to prepare for a possible FMD outbreak.
What GAO Recommends
GAO is recommending that USDA follow its procedures to prioritize and monitor the completion of corrective actions that the agency has identified for FMD preparedness. USDA agreed with these recommendations and described actions it will take to implement them.
Background
CFIUS was established by executive order in 1975 to monitor the effect of and to coordinate U.S. policy on foreign investment in the United States. In 1988, Congress enacted the Exon-Florio amendment adding section 721 to the Defense Production Act of 1950, which authorized the President to investigate the effect of certain foreign acquisitions of U.S. companies on national security and to suspend or prohibit acquisitions that might threaten to impair national security. The President delegated this investigative authority to CFIUS. The Foreign Investment and National Security Act of 2007 further amended the Defense Production Act and formally established CFIUS in statute. CFIUS is responsible for reviewing and investigating covered transactions to determine the effects of the transaction on national security. The Foreign Investment and National Security Act of 2007 does not formally define national security, but provides a number of factors for consideration by CFIUS and the President in determining whether a covered transaction poses a national security risk. These factors include the potential national security effects on U.S. critical technologies and whether the transaction could result in the control of a U.S. business by a foreign government (for a full list of factors, see Appendix III). CFIUS may also consider other factors in determining whether a transaction poses a national security risk. Chaired by the Secretary of the Treasury, CFIUS includes voting members from the Departments of Commerce, Defense, Energy, State, Justice, and Homeland Security; the Office of the U.S. Trade Representative; and the Office of Science and Technology Policy. Treasury is responsible for a number of tasks. According to Treasury officials, these tasks include coordinating operations of the committee, facilitating information collection from parties involved in the transaction (such as a foreign acquirer and U.S. business owner involved in an acquisition), reviewing and sharing data on mergers and acquisitions with member agencies, and managing CFIUS time frames. Treasury also communicates with the parties on CFIUS's behalf. The committee generally has three core functions: review and investigate transactions that have been voluntarily submitted—or notified—to the committee by the parties to the transaction and take action as necessary to address potential national security concerns; monitor and enforce compliance with mitigation agreements; and identify transactions of concern that have not been voluntarily notified to CFIUS for review, referred to in this report as non-notified transactions. The Foreign Investment and National Security Act of 2007 does not require that parties notify CFIUS of a transaction. In examining covered transactions, CFIUS members seek to identify and address, as appropriate, any national security concerns that arise as a result of the transaction. CFIUS reviews notices that have been voluntarily submitted—or notified—to the committee by parties to potentially covered transactions. Notices to CFIUS contain information concerning the nature of the transaction and the parties involved, such as the business activities performed by the U.S. business and any products or services supplied to the U.S. government. After receiving a notice, Treasury drafts an analysis to assess whether the transaction submitted is a covered transaction, meaning whether the transaction could result in foreign control of a U.S. business.
With limited exceptions, a transaction receives safe harbor—meaning the transaction cannot be reviewed again—when the CFIUS process is completed and the committee has determined that the transaction may proceed. CFIUS does not review every transaction or investment by foreign entities. According to Treasury officials, there are certain transactions by foreign entities that CFIUS does not have the authority to review. These non-covered transactions and investments include the establishment of a business, referred to as a greenfield investment, and acquisitions of assets—such as equipment, intellectual property, or real property—if such assets do not constitute a U.S. business. If CFIUS member agencies become aware of a transaction that might be covered that has not been voluntarily notified to the committee and may raise national security considerations, CFIUS may invite the parties to the transaction to submit a notice. CFIUS may choose to unilaterally review any transaction that could be covered. Treasury, DOD, and several other member agencies have processes for identifying non-notified transactions for CFIUS to potentially review.
CFIUS Process
The CFIUS process for examining transactions that have been notified to the committee comprises up to four stages: pre-notice consultation, national security review (30 days), national security investigation (45 days), and presidential action. In some cases, before a transaction is accepted and reviewed by CFIUS, Treasury may conduct a pre-notice consultation with parties to a transaction. This is not a required part of the process. For the purposes of this review, we focus on three stages—the national security review, national security investigation, and presidential action. For each transaction accepted and reviewed by CFIUS, an agency or agencies with relevant expertise are identified to act as a co-lead with Treasury. Each agency in turn distributes the transaction to various offices within its agency to provide an assessment of the transaction and identify national security risks, which is then provided to CFIUS. For example, the committee may reach consensus that no investigation is required if it is determined that the covered transaction will not impair national security or that the national security concerns are addressed under existing authorities, such as export controls. If these conclusions are reached, the national security review ends, and the transaction proceeds. However, if, for example, an agency identifies an unresolved national security risk, the agency may draft a risk-based analysis and CFIUS may undertake a national security investigation. If during the investigation the committee members reach consensus that a national security risk exists, but the risks can be mitigated, mitigation measures are drafted to address those risks, and these measures are negotiated with the other members of the committee and the parties to the transaction. The CFIUS process may conclude after consensus is reached by all agencies and the co-lead agencies certify to members of Congress that there are no unresolved national security concerns, and the transaction receives safe harbor. At the end of the national security investigation, if the committee does not reach consensus that there are no unresolved national security concerns or the committee concludes by consensus that a foreign investment threatens to impair national security and the threat cannot be mitigated, CFIUS elevates the transaction to the President. The President may prohibit or suspend the transaction. At any point prior to the conclusion of the process, parties may request to withdraw from the CFIUS process. In some cases, the notice is resubmitted once the parties believe that they have addressed the committee's concerns; in other cases, the companies may choose to withdraw and abandon their transaction. See figure 1 for an overview of the CFIUS process for reviewing and investigating selected transactions.
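The statutory clock described above can be sketched as a simple sequence of review windows. This is an illustrative model only, assuming every stage runs its full length; it is not an official CFIUS tool, and the acceptance date below is hypothetical:

```python
from datetime import date, timedelta

# Stage names and durations as described in the report.
STAGES = [("national security review", 30), ("national security investigation", 45)]

def stage_deadlines(accepted: date) -> list:
    """Latest end date of each stage if each runs its full statutory length."""
    deadlines, end = [], accepted
    for name, days in STAGES:
        end = end + timedelta(days=days)
        deadlines.append((name, end))
    return deadlines

for name, end in stage_deadlines(date(2018, 1, 2)):
    print(f"{name} ends by {end}")
```

In practice, withdrawal and resubmission can restart this clock, which is one reason resubmitted transactions add to member agencies' workload.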
DOD CFIUS Process
DOD Instruction 2000.25, Procedures for Reviewing and Monitoring Transactions Filed with the Committee on Foreign Investment in the United States (DOD's Instruction), provides policy and guidance on the DOD CFIUS process and assigns responsibilities in that process. In March 2011, DOD's CFIUS responsibilities were reassigned from the Defense Technology Security Administration to the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (OUSD (AT&L)). The transfer of responsibilities, effective in fiscal year 2012, was intended to better align CFIUS's mission with the DOD office responsible for industrial policy. Within OUSD (AT&L), MIBP serves as the lead office for CFIUS, reviews transactions for DOD equities, and distributes them to more than 30 organizations within DOD—referred to in this report as DOD components—to determine whether the transaction poses any national security concerns. These component reviewers include organizations within the Office of the Secretary of Defense, as well as the military departments, among others. For a full list of DOD component reviewers, see appendix I. According to MIBP's processes, it is responsible for reviewing and compiling comments and input from all DOD component reviewers during the 30-day national security review. When national security concerns with a transaction are identified, MIBP is to coordinate with affected DOD component reviewers to clarify issues and arrive at consensus on the DOD position for the transaction. DOD is typically designated as a co-lead agency for transactions where it has identified equities—such as transactions involving companies that are DOD suppliers—or other potential national security concerns. If no national security concerns are identified by DOD, MIBP will recommend that the transaction proceed. However, if national security concerns are identified by DOD and the committee requires additional time to complete its review, DOD recommends that the transaction proceed to a 45-day national security investigation period. During this period, MIBP coordinates with DOD component reviewers to draft and deliver a risk-based analysis to Treasury within the statutory investigation time frame. The assessment provides a description of the risk—in terms of threat, vulnerability, and consequence—arising from the covered transaction. If the risks can be addressed, DOD develops measures to be included in the mitigation agreement that it is then responsible for monitoring and enforcing as a signatory agency to the mitigation agreement. DOD guidance identifies three basic types of mitigation measures:
1. Technical mitigation measures, which seek to address risks related to vulnerabilities or critical assets with sensitive source code, cutting-edge technologies, and communications infrastructure.
2. Personnel mitigation measures, which seek to address risks arising from foreign personnel having access to sensitive technology or other critical assets.
3. Management control mitigation measures, which seek to oversee companies' ongoing implementation of mitigation agreements related to technical or personnel mitigation measures.
DOD, along with other lead agencies, carries out its monitoring responsibilities on behalf of the committee and reports back to the committee on the status of its responsibilities and company compliance on at least a quarterly basis. DOD's Instruction requires the identification of feasible measures to mitigate or eliminate the risks posed by a transaction and emphasizes that adequate resources, in terms of personnel and budget, should be provided to DOD and the components for monitoring and ensuring compliance with mitigation agreements.
Our Prior Work on CFIUS
We have conducted prior work related to CFIUS issues, including whether CFIUS has the resources to address its current workload and whether CFIUS is able to address national security concerns related to the proximity of certain real estate transactions to defense test and training ranges. In February 2018, we reported on CFIUS workload and staffing as well as stakeholder perspectives on potential changes to CFIUS. We found that as the volume and complexity of CFIUS reviews have increased in recent years, member agency officials have expressed concerns that current CFIUS staffing levels may not be adequate to complete core functions of the committee. We recommended that Treasury coordinate member agencies' efforts to better understand the staffing levels needed to address the current and projected CFIUS workload associated with core committee functions. Treasury agreed with our recommendation. In December 2014, in reviewing DOD's assessment of foreign encroachment risks on federally managed land, we found that DOD did not have the information it needed to determine whether activities by foreign entities near test and training ranges, such as performing certain sensitive training techniques, could pose a threat to its mission. We also reported that CFIUS is the only formal option in regard to transactions involving foreign companies or entities that accounts for national security concerns related to proximity to military test and training ranges. We recommended that DOD develop and implement guidance for conducting an assessment of risks to test and training ranges from foreign encroachment. We also recommended that DOD collaborate with other federal agencies managing land and transactions adjacent to DOD's test and training ranges to obtain additional information on transactions near these ranges. DOD agreed with our recommendations and has begun collecting data to identify locations the military services consider to be at risk from foreign encroachment and collaborating with federal land management agencies, as discussed later in the report.
Resources and Evolving National Security Risks Pose Challenges for Identifying and Addressing DOD's Concerns through the CFIUS Process
DOD has reviewed hundreds of transactions involving foreign acquirers and U.S. businesses since 2012, but faces several challenges in identifying and addressing national security concerns through the CFIUS process.
These challenges are: (1) resources not aligned with an increasing workload; (2) some national security concerns not defined or addressed in DOD's Instruction; (3) some investments that pose national security concerns not always able to be addressed through the CFIUS process; and (4) current component reviewer responsibilities and CFIUS processes not reflected in DOD's Instruction.

DOD Has Not Assessed Resources to Address a Substantially Increased CFIUS Workload

DOD faces challenges addressing an increasing CFIUS workload with its current resources. For example, we found that the number of DOD personnel with CFIUS responsibilities has not kept pace with the growing workload. The number of transactions CFIUS reviewed from 2012 through 2017 more than doubled, increasing from 114 transactions to 238 transactions. During that time, the number of transactions DOD was responsible for co-leading increased by about 57 percent, to 99 transactions in calendar year 2017. From 2016 through 2017 alone, these increases resulted in DOD reviewing almost 65 additional transactions, and co-leading about 30 additional transactions, a substantial increase in workload in one year. DOD also experienced an increase in the cumulative number of mitigation agreements it was responsible for monitoring, which more than doubled from 39 in 2012 to 84 in 2017. Figure 2 provides additional information on DOD's workload and authorized positions in MIBP—the lead DOD office for CFIUS. Based on our review of data on transactions reviewed by CFIUS, DOD's workload has also been affected by the volume of, and amount of time spent on, the transactions it has reviewed. We found that more than half of DOD's co-led transactions from 2015 through 2016—83 of 136 transactions, or about 61 percent—required 45-day national security investigations. According to Treasury officials, the number of transactions requiring national security investigations increases member agencies' workload because these transactions are usually more complex and require additional resources to review. Further, 9 DOD co-led transactions from 2015 through 2016 were withdrawn and resubmitted to CFIUS, and another 7 were withdrawn and abandoned because of national security concerns or because the committee was going to recommend that the transaction be prohibited. MIBP officials told us that withdrawn and resubmitted or withdrawn and abandoned transactions indicate the complexity of their workload, because a significant number of hours are spent either reviewing resubmitted transactions or justifying the committee's decision to prohibit the transactions. Moreover, MIBP officials said that depending on the scope and complexity of the national security concerns identified within a transaction, they have had to redirect resources from other functions to support their review responsibilities. As a result, these officials said there have been instances where MIBP has had to shift priorities and delay performing other CFIUS tasks in order to assist with reviewing high-priority transactions. In addition to reviewing transactions, as a co-lead agency, DOD is also responsible for negotiating any mitigation agreements or other conditions necessary to protect national security, and for monitoring compliance with those agreements or conditions. However, according to DOD officials and documents we reviewed, there are limited resources within MIBP and at the DOD component level to do so.
For example, MIBP officials said that the volume and complexity of mitigation agreements have increased their workload monitoring these agreements and strained their available resources. Specific details on the effect of mitigation agreement workload increases on MIBP's resources have been omitted because that information is considered sensitive. In addition, MIBP officials stated that because mitigation agreements typically do not expire, the number of agreements MIBP will be responsible for monitoring will continue to increase in the future. For example, based on our review of MIBP mitigation agreement information, 6 active mitigation agreements that MIBP is monitoring have been in place for 10 years or more. We also found that MIBP has limited personnel available to identify transactions not voluntarily filed with CFIUS—non-notified transactions—that could pose national security concerns. In the absence of voluntary reporting by the parties involved or independent discovery of the transaction, it is possible that CFIUS may not review a non-notified covered transaction that could pose a risk to national security. To address this concern, MIBP officials began efforts to identify and research non-notified transactions in fiscal year 2016 and, at one point, had up to four personnel involved in this effort. However, according to MIBP officials, three of those personnel were reassigned to help conduct reviews of notified transactions, leaving one person responsible for identifying and researching non-notified transactions relevant to DOD. Specific details on the effect of limited personnel on MIBP's ability to identify non-notified transactions have been omitted because the information is considered sensitive. To perform its CFIUS responsibilities, OUSD (AT&L) began receiving some funding for CFIUS in fiscal year 2014—on average, about $2.4 million a year. However, according to an MIBP official, the funding MIBP receives for CFIUS is typically received after other priorities within OUSD (AT&L) have been addressed. Further, OUSD (AT&L)'s funding does not include CFIUS responsibilities being performed by the other DOD components, which, according to MIBP officials, do not typically have their own resources for performing CFIUS responsibilities. Among the components we spoke with, the amount of time and personnel dedicated to CFIUS responsibilities varies greatly. According to these components, the time and personnel dedicated to reviewing transactions ranged from one person spending a few hours a month at one component to a full-time responsibility for six personnel at another. However, most of the components we spoke with said that CFIUS is a part-time responsibility, and only four of the nine components had dedicated personnel to support CFIUS responsibilities. MIBP officials confirmed that the components often have limited personnel and funding to perform CFIUS responsibilities, which can affect the level of involvement components have in reviewing transactions, monitoring mitigation agreements, and researching non-notified transactions. Recognizing the resource constraints posed by its increased workload, MIBP has taken some steps to assess and adjust its CFIUS resources. For example, MIBP received an increase in its authorized positions in fiscal years 2016 and 2017. Specifically, authorized positions increased from 12 to 17, and according to MIBP officials, 16 of the 17 positions were filled as of October 2017.
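The workload growth cited in this section can be verified with simple arithmetic (a worked recomputation from the counts reported above; the 2012 co-led figure is implied by the reported percentage rather than separately stated):

$$\frac{238 - 114}{114} \approx 109\% \quad \text{(total CFIUS reviews, 2012 to 2017: more than doubled)}$$

$$\frac{99}{1.57} \approx 63 \quad \text{(implied DOD co-led transactions in 2012, given the 57 percent increase to 99)}$$

$$\frac{84 - 39}{39} \approx 115\% \quad \text{(mitigation agreements monitored, 2012 to 2017: more than doubled)}$$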
In January 2017, MIBP requested that component reviewers estimate their CFIUS resource needs to address increases in CFIUS workload. According to an MIBP official, this information was used to support a fiscal year 2019 request for additional funding and personnel to perform CFIUS responsibilities department-wide, and for funding to further develop information technology solutions for managing DOD's CFIUS process. However, MIBP officials told us their request was only partially funded by the department, and that MIBP would have to determine how to distribute the funding received across the various components to perform its CFIUS responsibilities. DOD's Instruction states that DOD components shall ensure that adequate resources, in terms of personnel and budget, are available for statutorily required mitigation agreement monitoring and compliance activities. Moreover, federal internal control standards state that an agency should establish the organizational structure necessary to achieve its objectives and periodically reevaluate this structure. In this case, that structure includes the resources needed to accomplish CFIUS responsibilities, such as monitoring mitigation agreements and identifying non-notified transactions. However, according to an MIBP official, prior increases in authorized positions were not based on any formal review or analysis of resource needs or capability gaps. While MIBP has taken some steps to address its resource limitations, MIBP and some other DOD component officials we spoke with who have CFIUS responsibilities continue to face resource constraints in addressing their growing workload. Even after receiving approval for some additional funding across DOD to support CFIUS responsibilities, DOD's resource limitations could be further exacerbated if the number of transactions continues to increase. Without a formal analysis to assess and prioritize the resources necessary for performing its current and future CFIUS responsibilities, DOD will likely face challenges carrying out the duties and responsibilities outlined in its CFIUS policy. Beyond keeping up with the workload involved in reviewing notified transactions, the risks include not knowing whether mitigation agreements are being violated or whether non-notified transactions that could pose risks to national security are occurring.

National Security Concerns for Some Investments Are Not Well-Defined in DOD Policy

DOD faces evolving national security concerns from foreign investments in U.S. businesses developing emerging technologies and in proximity to critical military locations, but there are inconsistencies in how DOD reviews these investments. DOD's Instruction identifies factors to assess relevant to DOD national security interests, such as whether a firm produces critical technologies or unique defense capabilities, or whether a company being acquired is part of DOD critical infrastructure that is essential to project, support, or sustain military forces. However, DOD's Instruction does not address the extent to which emerging technologies and proximity to critical military locations are considered under these factors, or whether and how components should review and prioritize transactions for these concerns.
Emerging Technology: Officials at several of the DOD components we spoke with identified challenges in addressing concerns related to emerging technology, such as artificial intelligence and robotics, through the CFIUS process, and the components varied in whether they elevate concerns with transactions involving emerging technology. For example, officials at four components said that it can be difficult to explain the risks associated with foreign investment in U.S. businesses developing emerging technologies, particularly if the technology in question is not already being used in a defense program or not being acquired through a traditional merger or acquisition. Officials from another component noted that it can be difficult to identify vulnerabilities and explain the need to protect early-stage technologies through the CFIUS process if the technology is not sufficiently advanced. DOD's Instruction defines critical technologies based in part on those items that are already subject to export controls, but does not specify the types of emerging technologies that could be of concern for the department. Officials at several components noted that it can be difficult for them to identify which emerging technologies are going to be important to DOD, and therefore to know whether transactions should be mitigated or prohibited. DOD has several lists identifying critical technologies or assets, but does not have an agreed-upon list of emerging technologies that should be protected from foreign investment, making it difficult for components to know which emerging technologies are of concern to the department. A recent DOD report noted that having an agreed-upon list of critical technologies would provide clarity on which transactions reviewed by CFIUS should be prohibited or suspended. According to MIBP officials, they recently initiated a study to identify leading companies and technology areas critical to the department now and in the future. They intend for the study, planned to be completed in spring of 2018, to identify critical and emerging technology sectors and companies not currently included in the defense industrial base. According to DOD officials knowledgeable of the study, MIBP plans to use the results to work with the department's Office of Small Business and others on ways to use internal DOD resources to protect emerging technologies and intellectual property that are critical to DOD before they are subject to foreign investment. However, officials did not state how the study would help them address emerging technology through CFIUS-related reviews, or whether the results of the study would inform changes to DOD's Instruction or otherwise be used to help guide components on which emerging technologies are critical to the department.

Proximity: The military departments vary in how they review transactions for proximity to critical military locations. According to DOD reports, transactions near certain military locations can present encroachment issues or opportunities for persistent surveillance and collection of sensitive information about training procedures or about the integration of certain technological capabilities into major weapon systems. When asked how transactions are reviewed for proximity concerns, MIBP officials said they defer to the military departments to identify what constitutes a concern and do not limit proximity to certain locations.
Moreover, MIBP officials stated that depending on the transaction, proximity concerns can arise regardless of distance to a critical location, and that the circumstances surrounding a transaction should be reviewed on a case-by-case basis to account for those concerns. Proximity is not defined in the current DOD Instruction or listed as a factor that the military departments should consider when reviewing transactions. Officials from two of the military departments we spoke with review every transaction on a case-by-case basis for proximity concerns. According to documentation from the third department, it limits its reviews to acquiring companies from certain countries and only assesses those transactions for proximity concerns if the target location is within a certain distance of designated critical locations or assets. These different approaches for reviewing transactions have resulted in inconsistencies among the military departments in the types of proximity concerns they elevate to CFIUS. For example, in one transaction, we found that officials from one of the departments that reviews every transaction recognized a concern near a training range used by all three military departments. While the transaction was ultimately withdrawn because CFIUS planned to recommend that the President prohibit or suspend it, the third department did not identify a national security risk because the transaction did not meet its criteria. Officials from this military department stated that greater clarification on the types of proximity concerns DOD wants to elevate through the CFIUS process, as well as criteria that component reviewers should use to identify risks, would be helpful. Our prior work has identified challenges DOD faces in identifying risks of foreign encroachment near defense training ranges. In a December 2014 report, we recommended that DOD develop and implement guidance for assessing risks to certain test and training ranges from foreign encroachment based on mission criticalities and level of threat. According to DOD officials, they recently conducted a data call to the military departments to identify the locations that they consider to be at risk from foreign encroachment. DOD plans to use this information to develop guidance, not related to the CFIUS process, to assess the risks that test and training ranges face from foreign encroachment. Federal internal control standards state that agencies should clearly define objectives and risk tolerances; identify, analyze, and respond to risks; and communicate necessary information to achieve their objectives. DOD is taking steps to identify and assess areas of concern related to emerging technology and proximity, but these efforts are not specific to the CFIUS process and have not yet been completed or communicated to components through DOD's Instruction or otherwise. As a result, the components lack clear and consistent guidance on how to review transactions for these specific types of national security concerns facing the department. Without clarity on the types of transactions and national security risks that should be addressed, for example by incorporating the results of its efforts into DOD's Instruction, component reviewers will likely continue to be inconsistent in reviewing transactions and in identifying and prioritizing national security concerns.
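To illustrate the kind of consistency an agreed-upon technology list could enable, the sketch below screens incoming transactions against a shared watchlist of technology areas. This is a hypothetical illustration, not DOD practice: the watchlist entries, transaction fields, and matching rule are all assumptions, and, as discussed above, no such agreed-upon DOD list currently exists.

```python
# Hypothetical triage sketch: flag transactions whose target-company
# descriptions mention technology areas on an agreed-upon watchlist.
# Watchlist entries and transaction records below are illustrative only.

EMERGING_TECH_WATCHLIST = {
    "artificial intelligence",
    "robotics",
    "augmented reality",
    "virtual reality",
}

def flag_emerging_tech(transactions):
    """Return (case_id, matched_terms) for transactions that hit the watchlist."""
    flagged = []
    for tx in transactions:
        description = tx["target_description"].lower()
        hits = sorted(term for term in EMERGING_TECH_WATCHLIST if term in description)
        if hits:
            flagged.append((tx["case_id"], hits))
    return flagged

if __name__ == "__main__":
    sample = [
        {"case_id": "TX-001", "target_description": "Robotics start-up for warehouse automation"},
        {"case_id": "TX-002", "target_description": "Regional commercial bakery chain"},
    ]
    for case_id, hits in flag_emerging_tech(sample):
        print(f"{case_id}: elevate for review (matched: {', '.join(hits)})")
```

With a single shared list, every component reviewer would apply the same screen, rather than each deciding independently which technologies warrant elevation.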
DOD Has Identified Some Investments That Present National Security Concerns but Are Not Addressed through the CFIUS Process

In addition to challenges identifying certain national security concerns within DOD, CFIUS officials at Treasury and DOD indicated that national security concerns can arise from some foreign investments—such as those related to critical and emerging technologies and proximity to certain military locations—that the committee does not have the authority to review. For example, pursuant to CFIUS regulations, neither the purchase by a foreign person of property that does not constitute a U.S. business nor the licensing of emerging intellectual property to a foreign person is a covered transaction, and therefore neither is addressed through the CFIUS process. As shown in figure 3, while some foreign investments that may result in national security concerns related to critical and emerging technology and proximity are addressed through the CFIUS process, others are not. According to DOD reports, CFIUS is one of the only tools able to address foreign investment in the United States, but it is limited in its ability to address some investments in emerging technology and in proximity to military locations. Without the ability to address national security concerns arising from these investments, DOD is at risk of losing access to technologies, assets, and locations critical to maintaining and advancing U.S. technological superiority. A June 2017 DOD report found that although CFIUS is one of the only tools available to address technology transfers resulting from foreign investment, it is not effective at stopping technology transfer for investments that are not addressed through the CFIUS process, like certain joint ventures and other minority investments that do not result in foreign control. However, according to DOD documents and officials, these investments can result in technology transfers that threaten U.S. national security. For example, according to the DOD report, Chinese investors have been active in emerging technology sectors like artificial intelligence, augmented and virtual reality, and robotics, and Chinese investment in venture-backed start-ups is on the rise. The report also found that China's continued foreign investment in critical emerging technology companies may have consequences for DOD's ability to work with these companies in the future and its ability to maintain U.S. technological superiority. DOD officials cited concerns with their inability to address, through the CFIUS process, certain investments that can result in technology transfers or limit DOD access to emerging technologies. For example, DOD officials from three components cited instances when companies entered into joint ventures or other investment structures after withdrawing their transaction from the CFIUS process. A DOD official at one component cited a 2016 transaction where CFIUS planned to recommend that the President prohibit the transaction to prevent the transfer of a critical technology from a U.S. company to a foreign acquirer. Following their withdrawal from the CFIUS process, the companies entered into a joint venture. While CFIUS is aware of the joint venture and that it could result in the same transfer of technology CFIUS attempted to prevent by proposing to prohibit the original transaction, the committee has not yet determined whether the joint venture can be addressed through the CFIUS process, because CFIUS is only able to review certain types of joint ventures.
According to Treasury officials, when these circumstances arise they are sometimes able to review the joint venture, depending on the structure of the investment and whether it meets the definition of a covered transaction pursuant to law and associated regulations. Yet, even if this joint venture is ultimately reviewed as a covered transaction, the technology that DOD and CFIUS were originally concerned with may have already been transferred to the foreign acquirer. DOD and Treasury officials also identified concerns with broader foreign investment trends in critical and emerging technology that may not be addressed through the CFIUS process. For example, MIBP officials said they are concerned about foreign-owned enterprises exploiting critical technologies by structuring investments to avoid the CFIUS process, and noted that multiple investment structures exist that can allow foreign acquirers to gain access to and influence over critical capabilities. DOD and Treasury officials acknowledged the importance of critical and emerging technologies and the consequences to DOD's technological superiority if adversaries are able to use these technologies to advance their own military capabilities. According to Treasury officials, determining whether and how CFIUS should expand its scope to address these concerns is one of the challenges they have encountered when considering potential legislative changes to the CFIUS process. For example, they said that if the scope of the law were expanded, it could pose additional resource challenges, as CFIUS agencies would be required to review an expanded number of potentially complex transactions. According to federal internal control standards, agencies should identify, analyze, and respond to significant changes that could affect their operations. As noted earlier, DOD is in the process of identifying emerging technologies that will be essential to the defense industrial base, an important step toward informing future decision-making within the department. However, according to MIBP officials, the study will primarily focus on identifying specific technology companies of importance to the department. Because the study is not specific to CFIUS, plans for it do not indicate that it will identify and assess other limitations facing MIBP, such as those encountered in addressing foreign investments that fall outside the CFIUS process but pose risks to DOD's technological and military superiority. Given the importance of critical and emerging technology to DOD, assessing any challenges DOD faces in addressing certain foreign investments in critical and emerging technologies through the CFIUS process, and considering whether additional authority is needed, would better position DOD to address any unresolved national security concerns associated with these types of foreign investments. Without such an assessment, DOD remains at risk of not having the necessary tools and authorities to prevent the transfer of critical and emerging technologies to foreign acquirers, which is important for maintaining a viable defense industrial base and U.S. technological superiority.
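The jurisdictional gap described in this section can be summarized in a few lines of logic. The sketch below is a greatly simplified paraphrase of the covered-transaction concept discussed above (a foreign person, an existing U.S. business, and the possibility of foreign control); it is illustrative only, not a statement of the regulatory definition, and the field names are hypothetical.

```python
# Greatly simplified sketch of the jurisdictional logic discussed above:
# under the framework described in this report, CFIUS review generally
# requires (1) a foreign person, (2) an existing U.S. business, and
# (3) a transaction that could result in foreign control.
# This is an illustration, not legal guidance.

from dataclasses import dataclass

@dataclass
class Investment:
    foreign_acquirer: bool   # is the acquirer a foreign person?
    us_business: bool        # does the target constitute a U.S. business?
    confers_control: bool    # could the deal result in foreign control?

def likely_covered(inv: Investment) -> bool:
    """Rough screen for whether a deal resembles a covered transaction."""
    return inv.foreign_acquirer and inv.us_business and inv.confers_control

# Examples drawn from the concerns in this section:
land_or_greenfield = Investment(foreign_acquirer=True, us_business=False, confers_control=True)
minority_stake = Investment(foreign_acquirer=True, us_business=True, confers_control=False)

print(likely_covered(land_or_greenfield))  # False: no existing U.S. business
print(likely_covered(minority_stake))      # False: no foreign control
```

Both examples print False, which mirrors the report's point: investments structured this way can transfer technology or create proximity risks while falling outside CFIUS jurisdiction.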
Some Foreign Investments Not Addressed through CFIUS Process Pose Proximity Concerns near Critical Military Locations

Some proximity concerns near critical military locations can be addressed by CFIUS, but DOD also identified challenges addressing proximity concerns arising from investments that cannot be addressed through the CFIUS process. For example, the establishment of a business (which may include land purchases) in the United States that does not involve an existing U.S. business—referred to as a greenfield investment—is not considered a covered transaction, but can pose proximity concerns when near certain military locations. Officials at MIBP and several other components expressed concerns with their inability to address proximity concerns arising from these investments, which can pose significant national security risks and limit DOD's ability to perform necessary test and training activities. We identified at least two greenfield investments since 2016 that posed proximity concerns near critical military locations and could not be addressed through the CFIUS process. One investment involving a purchase of land presented risks due to its proximity to an Air Force base. According to DOD's Report to Congress 2017 Sustainable Ranges, the investment involved a U.S. company with substantial foreign financing, potentially subjecting training range missions performed at the base to persistent monitoring by a foreign government. According to officials, although the Air Force identified concerns with the investment, it was determined not to be a covered transaction because it did not result in foreign control of a U.S. business. Officials from another military department identified an investment that was not voluntarily filed with CFIUS and posed proximity issues near a training range. According to military department officials, the investment involved the same foreign acquirer that had been a source of concern in other voluntarily filed CFIUS transactions. The military department elevated its concerns to CFIUS through the non-notified process, but, according to officials, Treasury ultimately determined that it was not a covered transaction because there was no foreign control over a U.S. business. Moreover, because the investment was already completed, the company had started construction that threatened to encroach upon a training range that is one of only two in the country available to perform certain types of training. Military department officials said it was too soon to determine the effect this investment would have on their ability to perform training, but emphasized the criticality of protecting unique testing and training range spaces. Our prior work on defense training ranges also identified limitations DOD faces in addressing proximity and encroachment concerns from foreign investment. For example, we found in 2014 that officials from the Navy and Air Force, in particular, had concerns about the number of investment-related projects by foreign entities near their ranges (such as leases for mining or oil or natural gas exploration), which could pose potential security risks.
However, we reported that DOD does not have access to the information needed to determine whether foreign investment activities near testing and training ranges pose a threat, because the civilian federal agencies responsible for approving these transactions, such as the Departments of the Interior and Transportation, face legal, regulatory, or resource challenges that prevent them from collecting information unrelated to their missions. We found that, although DOD has had some success obtaining information on foreign investment near test and training ranges, these efforts have been based on informal coordination between military liaisons at certain bases and local Department of the Interior representatives. In addition to our recommendation that DOD develop and implement guidance for assessing risks to certain test and training ranges from foreign encroachment, we recommended that DOD collaborate with these other federal agencies to gather additional information on transactions in proximity to DOD test and training ranges, and seek legislative relief if needed. DOD concurred with our recommendations and has taken some steps to address them. For example, as noted earlier, DOD is in the process of developing guidance to assess risks to test and training ranges based on its identification of locations it considers to be at risk from foreign encroachment. According to DOD officials, they have also drafted legislative proposals to address limitations on their ability to gather information from the land management agencies on foreign investments in proximity to critical military locations. According to DOD officials, these proposals have not been submitted to Congress due to concerns raised by the federal land management agencies, but DOD continues to explore the possibility of legislative action to address these concerns. We also reported that DOD uses multiple methods, in coordination with other federal agencies, to identify potential business activities near DOD test and training ranges, but that CFIUS is the only formal option, in regard to transactions involving foreign companies or entities, that accounts for national security concerns. A 2015 DOD report to Congress on security risks related to foreign investment in the United States found that there are no authorities in the current federal land management framework that would require federal land management agencies to prevent a transaction from occurring if DOD identified a national security concern. The report further states that CFIUS and the Foreign Investment and National Security Act of 2007 are the only federal authorities available to DOD to assess national security risks posed by foreign investment in the vicinity of critical military locations, like DOD training and test ranges, but that the CFIUS process is not intended to address such national security risks. While CFIUS is able to address proximity concerns that arise through covered transactions, DOD has reported that it has limited ability to identify, assess, and mitigate national security concerns for investments that are not considered covered transactions, such as greenfield investments. However, DOD's report does not identify, assess, or make recommendations about what additional DOD authority, if any, would be necessary to address these concerns, and, as noted earlier, DOD's efforts to develop and implement guidance based on its identification of locations considered to be at risk from foreign encroachment are still in progress.
According to federal internal control standards, agencies should establish policies and procedures to respond to risks, and should identify, analyze, and respond to significant changes that could affect their operations. DOD is in the process of identifying locations it considers to be at risk from foreign encroachment, which can eventually be used to inform its review of foreign investments for proximity concerns, but DOD states that it is currently unable to address concerns related to greenfield investments through the CFIUS process because they are not considered covered transactions. Moreover, DOD reported that CFIUS is not a DOD-led process, and DOD is just one of nine member agencies. Members of Congress have recently proposed legislation that would expand the definition of covered transactions to include foreign acquisitions or leases of real estate in proximity to U.S. military locations, but the legislation is pending. Taking additional steps to assess what authority, if any, is needed to address foreign investment in proximity to certain critical military locations, and raising these concerns to Congress as necessary, would better position DOD to address its concerns. Until DOD completes efforts to develop and implement guidance assessing risks to critical locations that should be protected from foreign encroachment, and assesses what authority, if any, is necessary to independently address concerns with investments near these areas, it remains at risk of not protecting these locations from the national security risks posed by foreign adversaries.

Detailed Location Information Not Included in Notices Submitted to CFIUS

Detailed location information is not always included in notices submitted to CFIUS, which can affect DOD's ability to review transactions for their proximity to critical military locations. Some CFIUS transactions can involve numerous properties or locations, and information on the geographic coordinates of these locations is used by MIBP and the components when determining whether there could be national security concerns with a transaction. Specific details on the use of geographic coordinates to identify whether a transaction may pose proximity concerns near critical military locations have been omitted because the information is sensitive. According to Treasury officials, DOD often requests geographic coordinates once a notice is submitted, and Treasury officials said they have attempted to gather more detailed location information as part of notices. However, officials at one military department said that while there have been improvements in the availability of this information in notices, some companies still do not include geographic coordinate information. Treasury officials stated that CFIUS has the authority to require this information from companies and has considered revising its regulations to require it; however, Treasury has not yet done so. Federal internal control standards state that agencies should establish policies and procedures to respond to risks and should use quality information to achieve their objectives. Requiring information on geographic coordinates for all target locations in notices submitted to CFIUS would improve DOD's ability to more efficiently identify and address proximity concerns with covered transactions.
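Although the specific screening criteria are sensitive and omitted above, the basic value of requiring geographic coordinates is straightforward to illustrate: with coordinates for every target location, reviewers can compute each location's distance to critical sites and automatically flag those within a review threshold. The sketch below uses the standard haversine great-circle formula; the site names, coordinates, and 25-mile threshold are hypothetical and do not reflect actual DOD criteria.

```python
# Hypothetical proximity screen: flag transaction locations within a
# review distance of critical military sites. Coordinates, site names,
# and the threshold are illustrative; actual DOD criteria are sensitive.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in statute miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

CRITICAL_SITES = {
    "Hypothetical Test Range A": (35.05, -117.60),
    "Hypothetical Air Base B": (32.37, -106.48),
}

def flag_proximity(target_locations, threshold_miles=25.0):
    """Return (location, site, distance) tuples within the threshold."""
    flags = []
    for name, (lat, lon) in target_locations.items():
        for site, (slat, slon) in CRITICAL_SITES.items():
            distance = haversine_miles(lat, lon, slat, slon)
            if distance <= threshold_miles:
                flags.append((name, site, round(distance, 1)))
    return flags

if __name__ == "__main__":
    notice_locations = {"Parcel 1": (35.20, -117.45), "Parcel 2": (40.71, -74.01)}
    for loc, site, dist in flag_proximity(notice_locations):
        print(f"{loc}: within {dist} miles of {site} -- elevate for review")
```

A screen like this only works if every notice supplies coordinates for every target location, which is why the missing-information problem described above matters.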
DOD Has Not Updated Policy to Reflect Changes in Components' Review Responsibilities and Processes

DOD's Instruction identifies CFIUS-related responsibilities and processes for reviewing transactions, but the policy has not been updated to reflect current component reviewer roles and responsibilities or processes for addressing non-notified transactions that may pose national security concerns for the department. The current DOD Instruction was issued in 2010—prior to the transfer of CFIUS responsibilities from the Defense Technology Security Administration to MIBP—but has not been updated to reflect the change. Moreover, the Instruction includes a list of the types of information components should provide to request that a non-notified transaction be submitted to CFIUS for further action, but contains no guidance or expectations for whether or how components should identify and research such transactions. In addition to DOD's Instruction, which is the guiding policy for DOD's CFIUS procedures, in June 2016 MIBP developed an internal process document describing its process for reviewing CFIUS transactions; developing and monitoring mitigation agreements; and identifying and reviewing non-notified transactions. While MIBP officials said the process document is based on the DOD Instruction and is more up-to-date, it is intended as an internal reference document for MIBP employees and contractors, and it has not been distributed more broadly to the components involved in reviewing transactions for CFIUS. Moreover, the DOD Instruction does not accurately reflect the department's current responsibilities for reviewing transactions. For example, MIBP's internal process document identifies advisory and primary reviewers who are responsible for providing input on transactions. However, based on our review of the DOD Instruction, advisory and primary component responsibilities are not differentiated, and several of the advisory reviewers identified in MIBP's internal process document are not listed as reviewers in the current Instruction. For example, according to DOD documentation and officials, the Assistant Secretary of Defense for Research and Engineering is an advisory reviewer for CFIUS cases and coordinates input from several other reviewers—including the Defense MicroElectronics Activity and the Defense Advanced Research Projects Agency—to determine if a transaction involves a critical technology. However, the Assistant Secretary of Defense for Research and Engineering's responsibilities for coordinating these inputs are not identified in the current DOD Instruction, nor is this office listed as a reviewer. Our review of DOD's Instruction, internal guidance, and other documentation identified several other discrepancies between component responsibilities identified in the Instruction and what occurs in practice. For example, the Under Secretary of Defense for Personnel and Readiness, among other things, coordinates with OUSD (AT&L) and the Director of Operational Test and Evaluation on the effects of encroachment on DOD test and training areas. While MIBP officials identified the Office of the Under Secretary of Defense for Personnel and Readiness as a CFIUS reviewer, this office is not identified as a reviewer in the DOD Instruction or the internal MIBP process document.
In addition to not having up-to-date information on reviewer roles and responsibilities, the DOD Instruction does not include guidance on how MIBP and the components should identify and research non-notified transactions that may pose national security concerns. As discussed above, because the CFIUS process is based on voluntary notices submitted by parties to transactions, DOD and Treasury officials stated that it is important to monitor foreign acquisitions of U.S. companies that are not filed with CFIUS to determine if any may present national security concerns. As shown in figure 4, there were approximately 1,680 mergers and acquisitions involving foreign acquisitions of U.S. companies in 2016. While not all foreign acquisitions of U.S. companies pose national security concerns that would warrant review by CFIUS, DOD officials acknowledged challenges with their ability to identify and research these transactions. Specific details on the challenges DOD faces identifying non-notified transactions have been omitted because the information is considered sensitive. In addition to challenges identifying non-notified transactions within MIBP, DOD component reviewers' awareness of the non-notified transaction process varied across the components we spoke with, and participation in this part of the process is ad hoc. For example, five of the nine components in our sample said they do not have processes in place to identify transactions that have not been voluntarily filed but present risks to national security that could warrant CFIUS review. Officials from four components we spoke with said they have identified non-notified transactions. However, officials from most of the other DOD components told us they either are not involved or only occasionally review non-notified transactions once MIBP identifies them, and they do not proactively perform non-notified transaction research, in part due to resource constraints. Officials from some components were also uncertain about whether they should elevate some non-notified transactions of concern. For example, DOD's Instruction does not explain when, pursuant to CFIUS regulations, joint ventures are covered transactions. It also does not explain that, even if a non-notified transaction has been completed—meaning a foreign acquirer has already finalized the purchase of a U.S. company—CFIUS can still recommend that the President suspend or prohibit the transaction. Officials from two components said that they are aware of completed joint ventures or other transactions that were of concern but not voluntarily filed, and that they did not elevate them. According to these officials, they assumed the joint ventures would not be covered or that there was nothing CFIUS could do to address their concerns. In May 2017, the MIBP official responsible for non-notified transactions launched a DOD pilot working group for researching non-notified transactions. According to the official, the working group is intended to leverage component reviewer resources and involve them in performing research on non-notified transactions identified by MIBP. However, as of June 2017, participation in the group was limited to 5 of the more than 30 DOD component reviewers, and its processes for reviewing and distributing transactions are still evolving.
While this action represents a positive step toward establishing and formalizing efforts to identify non-notified transactions, MIBP officials expressed concern that their ability to identify transactions that may pose risks is not as developed as they would like it to be. Specific details on MIBP's ability to identify transactions that may pose risks have been omitted because the information is considered sensitive. In contrast to DOD's limited non-notified guidance, the Department of Homeland Security, another CFIUS member agency, has guidance for reviewing non-notified transactions in its Instruction for Department of Homeland Security Participation in the Committee on Foreign Investment in the United States. According to the Instruction, each week a digest of non-notified transactions is to be sent to Department of Homeland Security components for review, and selected components are required to provide any concerns with the transactions within 7 days. The Department of Homeland Security then determines whether to prepare a non-notified request to forward the transaction on to CFIUS so that the committee can determine whether the transaction merits further action. Federal internal control standards state that agencies should identify and document agency responsibilities and processes in policy, and periodically review and update policies based on changes. According to MIBP officials, they have been revising DOD's Instruction for over 3 years and recently began the formal department-wide review process. MIBP officials said they had not released updated guidance to reflect changes in responsibilities and processes sooner because of challenges with employee attrition and leadership changes, which have resulted in multiple rewrites. However, several components we spoke with referenced the need for updated or standardized guidance to inform their CFIUS review responsibilities and the development of their own component-level guidance. It has been over 5 years since MIBP was assigned responsibility for CFIUS, raising questions about the prioritization of CFIUS within the department. Without clear and updated guidance on reviewer responsibilities, and established processes and guidance for the identification and review of non-notified transactions, DOD is at risk of inconsistencies in its review of transactions, and it may be unable to address non-notified transactions that pose national security concerns in a timely and efficient manner.

DOD Faces Several Challenges Developing and Monitoring CFIUS Mitigation Agreements

As noted above, mitigation agreements address threats to national security posed by a transaction. DOD is responsible for most of the CFIUS mitigation agreements, but faces a variety of challenges when taking action to mitigate national security concerns and ensure the effectiveness of the agreements. These challenges relate to insufficient personnel resources compared to MIBP's workload, and unclear communication about the delineation of responsibilities between MIBP and the DOD components. Moreover, DOD has not reported to Congress on its responsibilities for monitoring and enforcing mitigation agreements.

DOD Is Responsible for Most CFIUS Mitigation Agreements

DOD is responsible for more mitigation agreements than any other CFIUS member agency, monitoring 84 of the 141 total mitigation agreements for CFIUS, or about 60 percent, as of the end of calendar year 2017. DOD's responsibility for mitigation agreements more than doubled between 2012 and 2017.
Figure 5 shows how DOD's CFIUS mitigation agreement-related responsibilities have increased since 2000. We reviewed Treasury data on transactions from January 2015 through December 2016 to identify the types of national security concerns DOD mitigated through the CFIUS process. We found that the 22 mitigation agreements implemented by DOD during this period included acquisitions of U.S. companies in the aerospace, energy, real estate, and information technology industries, among others. Seventeen of these agreements were implemented to address either supply assurance—DOD's access to certain products or services—or proximity issues. Based on the Committee on Foreign Investment in the United States Annual Report to Congress for Calendar Year 2015 and our review of DOD documentation, the mitigation measures that have been negotiated and adopted since 2015 may require the parties to the transaction to take actions such as:

• Ensuring that only authorized persons have access to certain technology and information;
• Appointing a U.S. government-approved security officer;
• Providing annual reports and independent audits;
• Notifying security officers or relevant U.S. government parties in advance of foreign national visits to the U.S. business for approval;
• Providing written notification when additional assets are purchased;
• Providing written notification and obtaining CFIUS approval of other parties joining the joint venture; and
• Requiring supply assurance for products or services being provided to the government.

Based on our review of a non-generalizable sample of nine mitigation agreements provided by one of the DOD component reviewers, mitigation agreements typically contain more than one measure. For example, there were between 4 and 10 different measures in each agreement we reviewed, and in one agreement, a single measure required the submission of more than 100 reports. While some of the mitigation measures require the parties to the agreement to take action and report to DOD, MIBP also monitors and enforces compliance with mitigation measures by conducting on-site compliance reviews, and investigations if violations are discovered. If a company violates a mitigation agreement, CFIUS has the authority to impose penalties, although, according to Treasury and DOD officials, the committee has not taken action to enforce penalties for non-compliance with a mitigation agreement. CFIUS regulations state that any person who intentionally or through gross negligence violates a material provision of a mitigation agreement may be liable for a civil penalty not to exceed $250,000 per violation or the value of the transaction, whichever is greater. DOD officials and the Deputy Assistant Secretary for Investment Security at Treasury stated that the regulatory standard for taking action against a company that has violated a mitigation agreement is high. They noted that it is difficult to prove that a company violated a mitigation agreement intentionally or through gross negligence, and that the national security effect may exist even if the cause of the violation is ordinary negligence. In October 2017, MIBP officials reported six instances since 2013 where companies were not in compliance with their mitigation agreements, but stated that none of these instances were the result of intentional or grossly negligent actions. They told us that DOD has not recommended that CFIUS take action to impose penalties in these cases.
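Stated as a formula, the regulatory penalty ceiling described above is, for a transaction valued at $V$:

$$\text{maximum civil penalty per violation} = \max(\$250{,}000,\; V)$$

So for any transaction worth more than \$250,000, the ceiling is the transaction value itself; for example, a \$10 million transaction would carry a ceiling of \$10 million per violation, while a \$100,000 transaction would carry the \$250,000 floor.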
In general, according to Treasury and MIBP officials, CFIUS member agencies work with companies to establish a culture of compliance and correct violations of the mitigation agreements, as opposed to imposing fines or penalties.

DOD Faces Challenges in Developing and Monitoring CFIUS Mitigation Agreements

MIBP and the DOD components face a variety of challenges, including developing and monitoring mitigation agreements with limited personnel resources relative to an increasing workload, and communicating about mitigation agreement responsibilities between MIBP and the components. Some of the specific details on personnel resource challenges and communication between MIBP and the components have been omitted because the information is sensitive. In addition to resource challenges within MIBP, resources for mitigation agreement-related activities within the DOD components are also limited and can vary. Officials from at least one component stated that they are not involved in developing or monitoring mitigation agreements because they do not have the resources to do so. Further, citing concerns with DOD's ability to effectively oversee mitigation agreements, officials from three DOD components stated that DOD should recommend prohibiting transactions more often than imposing mitigation agreements. For example, an official from one DOD component with CFIUS responsibilities stated that it is not plausible that these agreements can be properly executed, because adversaries have the resources to conceal the fact that they are not complying with the mitigation agreement. Officials from another DOD component also expressed concerns with mitigation agreement enforcement, and stated that they were likely to recommend prohibiting transactions in the future instead of negotiating mitigation agreements in transactions where a national security risk has been identified. A June 2017 DOD report on technology transfer and emerging technology found that, given concerns about the cost and effectiveness of mitigation agreements, CFIUS should recommend that the President suspend or prohibit a transaction if the mitigation agreement cannot be kept simple. Similarly, officials at the Navy stated that mitigation measures are more effective if they can be fully implemented before the transaction is closed, as opposed to those that require ongoing monitoring. MIBP officials stated that if resource shortfalls continue, they run the risk of having to recommend that the President prohibit transactions because they are unable to implement or monitor additional mitigation agreements. To bolster available DOD resources for monitoring mitigation agreements, MIBP is expanding, on a case-by-case basis, its use of third-party monitors—private auditing and consulting firms approved by DOD and CFIUS but paid for by the foreign acquirer. In these instances, the acquirer is responsible for contracting with qualified, independent third-party monitors, an approach MIBP officials stated could result in cost savings to the government by reducing the resources it uses to respond to routine notifications and requests for approval. MIBP officials stated that this concept would allow MIBP to better extend its oversight across the range of agreements by focusing on monitoring the third-party monitors. However, these officials also acknowledged that the use of third-party monitors can present an inherent conflict of interest by having foreign acquirers fund their own compliance and mitigation agreement monitoring.
It is too soon to assess the effect of the expansion of third-party monitoring on improving MIBP's ability to oversee compliance with mitigation agreements. In addition, we found that MIBP has not clearly communicated expectations and responsibilities for developing and monitoring mitigation agreements to some DOD components. This has led to confusion about what is expected of the components during this part of the process and raised uncertainty within the components we met with about the effectiveness of the mitigation agreements. For example, DOD's Instruction requires components to identify, as applicable, mitigation agreement measures as part of their risk-based analysis and to participate in monitoring the mitigation agreements in instances when they have identified a risk. However, officials from several DOD components said that they either do not include mitigation measures in their risk-based analysis or have been asked not to by MIBP. According to Treasury officials, the CFIUS process has been updated so that the proposal of mitigation measures can occur before or during the development of an agency's risk-based analysis, but this information is not reflected in DOD's Instruction, and DOD officials could not identify whether or how this change in process had been communicated to the components. In addition, officials at one DOD component cited examples of unclear communication regarding their responsibilities for mitigation agreement documentation. For example, these officials told us they requested, but did not receive, documentation from MIBP to ensure compliance with four of the nine mitigation agreements the component is responsible for monitoring. According to documentation from this component, it had not received approximately 110 of 133 documents and other reporting requirements that were necessary to determine whether the company was in compliance with the mitigation agreement. According to MIBP officials, they had received the required documentation from the company but did not share it with the component because the documents were not related to the mitigation agreement measures the component was responsible for monitoring. As a result of this miscommunication, the component thought that it was responsible for reviewing the missing documentation. MIBP officials stated that they plan to expand and improve their capability to provide DOD components access to the necessary documentation in the future. Officials from MIBP and the component said MIBP currently maintains a shared drive where it stores mitigation agreement documentation, but not all components have access to this documentation. Additionally, while DOD's Instruction states that DOD components that propose mitigation measures should participate in overseeing those measures, only two of the nine components in our sample reported being actively involved in ensuring compliance with mitigation agreements or performing site visits. Two components have allocated several full-time personnel to the task, and another has guidance that directs its involvement in CFIUS mitigation agreement monitoring. For example, Navy officials said they have established an office to review transactions that may pose proximity-related risks and to monitor proximity-related mitigation agreements, but they have not been given the authority by MIBP to make a final determination regarding whether parties are in compliance with the agreements or to participate in all discussions with the parties.
MIBP officials stated that they seek component input on all mitigation agreements, but that MIBP has taken the lead in developing and monitoring DOD mitigation agreements and ensuring compliance because the DOD components have not historically had the resources to dedicate to this responsibility. DOD's Instruction identifies oversight and communication mechanisms that have not been implemented but could assist the department in addressing challenges monitoring and ensuring compliance with its CFIUS mitigation agreements. For example, DOD's Instruction establishes a CFIUS Monitoring Committee, made up of relevant DOD component reviewers, to serve as the focal point for DOD monitoring. Among other things, the CFIUS Monitoring Committee was intended to meet quarterly. DOD's Instruction also calls for the development of a DOD CFIUS Strategic Mitigation Plan that would include, among other things:

• identification of strategic policy for mitigation and monitoring efforts, taking into account resource management and filing trends;
• identification of methods to substantiate and document company compliance with mitigation agreements and maintain records of that compliance; and
• annual analysis of past mitigation in order to determine if past approaches to monitoring and mitigation can be improved.

However, according to MIBP officials, the CFIUS Monitoring Committee and the Strategic Mitigation Plan were not implemented because MIBP did not have the resources to do so. MIBP officials also said they did not see the establishment of the CFIUS Monitoring Committee with relevant DOD components as necessary because MIBP has taken primary responsibility for monitoring mitigation agreements. In addition to not implementing these oversight and communication mechanisms, MIBP has not updated DOD's Instruction to account for policies that are no longer practiced, such as requiring proposed mitigation measures as part of the risk-based analysis, or having components take responsibility for monitoring the mitigation measures they recommend. According to federal internal control standards, to achieve an entity's objectives, management assigns responsibility and delegates authority to key roles throughout the entity. In addition, management should internally communicate the necessary quality information to achieve the entity's objectives. Updated and improved guidance, including communication about MIBP's management of mitigation agreements and component involvement in developing and monitoring them, could help provide additional oversight of DOD's mitigation agreements and address resource challenges associated with an increasing workload.

DOD Has Not Reported on Review of Mitigation Agreement Monitoring Responsibilities

DOD has not reported to Congress its findings from a review regarding monitoring and enforcing mitigation agreements. A 2013 House Report asked the Secretary of Defense to review the role of the Deputy Assistant Secretary of Defense for MIBP in monitoring CFIUS mitigation agreements in which DOD was the lead or co-lead agency, determine if the Defense Security Service is suited to perform these functions, and report the findings. The House Armed Services Committee noted concerns over whether MIBP, as a policy organization, has the resources and technical expertise to provide reasonable oversight of implementation of and compliance with mitigation agreements.
The House Report stated that DOD may benefit from leveraging the capabilities of the Defense Security Service, which already reviews every CFIUS filing on behalf of the National Industrial Security Program and monitors compliance with its own mitigation agreements as part of that program. DOD was to report the findings of the review in 2013, but, according to MIBP officials, the response has been delayed because disagreement exists within DOD regarding where responsibility for monitoring mitigation agreements should reside. Both MIBP and Defense Security Service officials we spoke with said that their respective offices are best equipped to perform CFIUS mitigation agreement responsibilities. As a result, formal coordination of the department's response has not been completed. As of January 2018, MIBP officials said that while they recognize the need to complete the response, DOD has not committed to a specific time frame for doing so. Reporting the findings to the congressional defense committees will facilitate the identification of current challenges related to CFIUS mitigation agreement oversight and could address questions about the capabilities and responsibilities necessary to effectively monitor and enforce CFIUS mitigation agreements.

Conclusions

Growing foreign direct investment in the United States provides important economic benefits, but can also pose national security risks when that investment comes from potential adversaries. Ensuring that DOD has the resources, processes, and information necessary to perform its responsibilities under CFIUS is essential at a time when the number and complexity of transactions being reviewed by CFIUS have grown significantly. According to officials, the types of investments that pose risks have evolved, making foreign control difficult to determine and the associated risks difficult to mitigate, including for investments involving important emerging technologies or real estate purchases in close proximity to sensitive military locations. In light of these issues, assessing CFIUS resource requirements across the department, completing efforts to identify and communicate critical national security concerns, assessing whether DOD has the necessary authority to address these concerns, and ensuring its policies and practices reflect current DOD component reviewers and processes will be essential to DOD's ability to address the evolving risks it faces from foreign investment. For national security concerns that DOD determines it does not have the authority to address, it may be necessary for DOD to seek legislative action. Further, without updating DOD's CFIUS guidance to reflect current requirements and reporting on reviews requested by a committee of Congress on the department's responsibilities for monitoring mitigation agreements, DOD will likely continue to face challenges facilitating intra-departmental communication and questions about the prioritization of CFIUS within DOD.

Recommendations for Executive Action

We are making a total of eight recommendations: four to the Secretary of Defense, three to the Deputy Assistant Secretary of Defense for MIBP, and one to the Secretary of the Treasury. Specifically:

The Secretary of Defense should assess CFIUS resource requirements within MIBP and DOD component reviewers in light of increasing workload, and prioritize personnel and funding resources accordingly to review, mitigate, and monitor transactions that are of concern to the department.
(Recommendation 1)

The Secretary of Defense, in coordination with the Deputy Assistant Secretary of Defense for MIBP and the Office of the Under Secretary of Defense for Personnel and Readiness, should incorporate the results of its efforts to identify, assess, and prioritize national security concerns related to foreign investment in emerging technologies and in proximity to certain critical military locations into DOD Instruction 2000.25 and communicate the results to DOD component reviewers. (Recommendation 2)

Following the completion of its emerging technology study, the Deputy Assistant Secretary of Defense for MIBP should assess what additional authorities may be necessary to address risks related to foreign investment in critical and emerging technologies, and seek legislative action to address risks posed by these investments as appropriate. (Recommendation 3)

Following the department's efforts to identify critical locations and develop and implement guidance assessing risks to these locations from foreign encroachment, the Secretary of Defense should assess what additional authorities, if any, may be necessary to address national security risks from foreign investments in proximity to these locations, and seek legislative action as appropriate. (Recommendation 4)

The Secretary of the Treasury should provide clarification to parties filing a notice of a transaction with CFIUS that, for filings involving multiple locations, geographic coordinates are required to be part of the notification. (Recommendation 5)

The Deputy Assistant Secretary of Defense for MIBP should update DOD Instruction 2000.25 to include additional guidance and clarification regarding DOD component responsibilities during the CFIUS process and DOD processes for identifying non-notified transactions. (Recommendation 6)

The Deputy Assistant Secretary of Defense for MIBP should update and implement requirements identified in DOD Instruction 2000.25 regarding management and oversight of mitigation agreements, such as taking into account the resources needed to effectively monitor agreements, improving communication methods between MIBP and the DOD components, and clarifying component responsibilities in developing and monitoring mitigation agreements. (Recommendation 7)

The Secretary of Defense should submit the response to the House Report reviewing the role of the Deputy Assistant Secretary of Defense for MIBP in monitoring CFIUS mitigation agreements and determining whether the Defense Security Service is suited to perform these functions. (Recommendation 8)

Agency Comments and Our Evaluation

DOD and Treasury provided written comments on a draft of the sensitive report. These comments are reprinted in appendixes IV and V, respectively. We also received technical comments from both agencies, which we incorporated as appropriate. Both departments concurred with our recommendations.

In its written comments, DOD agreed to use a recent assessment of CFIUS resource needs to inform its upcoming budget requests. We acknowledge MIBP's recent efforts to identify and prioritize resource needs in support of its CFIUS responsibilities. As DOD develops its budget request, we encourage the department to consider increases in DOD's CFIUS workload and the resources required to support essential CFIUS functions, like monitoring mitigation agreements and identifying non-notified transactions that may pose national security risks.
DOD also agreed to update its guidance related to CFIUS procedures and responsibilities, and to complete assessments about additional authorities the department may need to address national security concerns related to foreign investments in U.S. companies developing critical and emerging technologies or in proximity to critical military locations. In its comments, DOD stated it has identified over 40 critical military locations and expects to develop guidance for assessing the risks posed by foreign investments in proximity to these locations. DOD also agreed to complete its response to the House Report reviewing MIBP's role in monitoring CFIUS mitigation agreements. In its comments, DOD stated it is continuing to explore the implementation of third-party monitors as an alternative solution for monitoring CFIUS mitigation agreements.

In its written comments, Treasury concurred with our recommendation to provide clarification that parties filing a notice with CFIUS should include geographic coordinates as part of their notice. Treasury has updated information on its website to clarify that addresses and/or geographic coordinates are required for a CFIUS filing to be considered complete.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Deputy Assistant Secretary of Defense for MIBP, and the Secretary of the Treasury. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Department of Defense (DOD) Offices and Organizations with Committee on Foreign Investment in the United States (CFIUS) Review Responsibilities

Appendix II: Objectives, Scope, and Methodology

This report assesses factors, if any, that affect the Department of Defense's (DOD) ability to (1) identify and address national security concerns through the Committee on Foreign Investment in the United States (CFIUS) process, and (2) develop and monitor mitigation agreements through the CFIUS process. This report is a public version of a sensitive report that we issued on April 5, 2018. DOD and the Department of the Treasury (Treasury) deemed some of the information in our April report to be sensitive, and that information must be protected from public disclosure. Therefore, this report omits sensitive information related to (1) DOD's resources to perform certain CFIUS functions, like monitoring mitigation agreements and identifying non-notified transactions; (2) the availability of location information as part of notices that companies file with CFIUS; and (3) the resources and communication required between DOD and the components to develop and monitor mitigation agreements through the CFIUS process. Although the information provided in this report is more limited, this report addresses the same objectives and uses the same methodology as the sensitive report.
To assess what factors, if any, affect DOD's ability to identify and address national security concerns through the CFIUS process, we reviewed relevant documentation, including CFIUS-related laws and Treasury regulations; DOD policies and guidance; and DOD and CFIUS internal reports, to identify DOD's responsibilities and processes for identifying and addressing national security concerns through the CFIUS process. While there are other authorities, including export controls such as the International Traffic in Arms Regulations and the Export Administration Regulations, which in certain circumstances may be used to address national security concerns that arise through foreign investment, our review focused on DOD's responsibilities for addressing national security concerns through the CFIUS process.

To assess DOD's efforts to identify and address the national security concerns it identified, we gathered and analyzed data on transactions that DOD was responsible for co-leading from January 1, 2012, through December 31, 2017, the most recent data available. To identify resources dedicated to supporting CFIUS activities within the Office of Manufacturing and Industrial Base Policy (MIBP)—the DOD office responsible for coordinating the CFIUS process on behalf of DOD—we analyzed MIBP data from 2012 through 2017 on DOD personnel resources, and reviewed budget amounts from 2012 through 2016 for DOD CFIUS activities from DOD budget documents. To identify the outcomes of transactions not voluntarily filed with CFIUS—known as non-notified transactions—we gathered and analyzed data on the number of non-notified transactions MIBP has identified and researched since the beginning of fiscal year 2016, when MIBP started formally tracking that information. Based on information on the collection and management of Treasury and DOD transaction data, our review of related documentation, and interviews with relevant Treasury and DOD officials, we determined that these data were sufficiently reliable for the purposes of this report.

To identify challenges DOD faces addressing certain national security concerns, such as protecting emerging and critical technology and addressing foreign investments in proximity to certain critical military locations, we reviewed a non-generalizable sample of CFIUS case file information for seven transactions. We selected these transactions based on examples identified by DOD components and the types of national security concerns, including those related to emerging technology and proximity, that DOD officials identified throughout the review. We interviewed officials at Treasury, MIBP, and selected DOD component reviewers to discuss DOD's CFIUS workload and resources. In this report, we define resources as the authorized positions, assigned personnel, personnel performing contract services related to CFIUS functions, and CFIUS-related costs. We also discussed with these officials any limitations to addressing certain national security concerns—like protecting emerging technology and addressing foreign investment in proximity to critical military locations—through CFIUS, and guidance for the CFIUS process and identifying non-notified transactions. Additional information on the DOD components included in this review can be found below.

To identify calendar year 2016 mergers and acquisitions involving U.S.
businesses, and the proportion of those mergers and acquisitions involving foreign acquirers, we reviewed data available from the Bloomberg Terminal, a commercial database containing data on mergers and acquisitions. We gathered data on total 2016 mergers and acquisitions involving U.S. companies that were announced, pending, or completed. We also gathered data on 2016 mergers and acquisitions that were announced, pending, or completed involving U.S. companies and foreign acquirers to illustrate the number of potentially covered transactions that may not be voluntarily notified to CFIUS. We assessed the reliability of these data by reviewing relevant documentation and ensuring the data gathered aligned with the search criteria identified. We determined the data were sufficiently reliable for our purposes of displaying total U.S. mergers and acquisitions and the proportion of those transactions that involve foreign acquirers and thus could be transactions potentially covered by CFIUS.

To assess what factors, if any, affect DOD's ability to develop and monitor mitigation agreements through the CFIUS process, we reviewed CFIUS-related laws and regulations and DOD policies and guidance to identify DOD and its component reviewers' responsibilities and processes for developing and monitoring compliance with mitigation agreements. We also reviewed the Committee on Foreign Investment in the United States Annual Report to Congress for Calendar Years 2014 and 2015. To identify actions DOD has taken to mitigate national security concerns, we analyzed data to identify the number of mitigation agreements DOD is responsible for and the actions DOD has taken to mitigate and monitor transactions with national security concerns from January 1, 2012, through December 31, 2017, the most recent data available. Based on information on the collection and management of Treasury and DOD CFIUS mitigation agreement data, our review of related documentation, and interviews with relevant Treasury and DOD officials, we determined that these data were sufficiently reliable for the purposes of this report. We also reviewed executive summaries compiled by MIBP of the DOD-co-led transactions with mitigation agreements, as well as selected CFIUS case file documentation for seven transactions. We interviewed officials at Treasury, MIBP, and DOD component reviewers to identify any challenges they face developing and enforcing mitigation agreements. To provide illustrative examples of the types of measures included in CFIUS mitigation agreements, we reviewed all of the active mitigation agreements from one component with responsibilities for monitoring mitigation agreements involving proximity issues. These agreements are not generalizable to other components.

To gather a range of views on issues related to both objectives, we selected a non-generalizable sample of nine DOD component reviewers responsible for identifying, reviewing, and investigating transactions. These components included officials from: the Departments of the Army, Air Force, and Navy; the DOD Chief Information Officer; the Defense Information Systems Agency; the Defense MicroElectronics Activity; the Defense Advanced Research Projects Agency; the National Security Agency; and the Office of Manufacturing and Industrial Base Policy, Industrial Base Assessments.
Our selection was based primarily on these components' responsibilities for reviewing and investigating transactions for key issues DOD identified as relevant to its review of transactions, including risks related to emerging technology and proximity. We also solicited MIBP's recommendations to identify components with varying levels of participation and input into the CFIUS process. We interviewed all nine components and in some cases also received written responses from them to identify similarities and differences in their processes, any challenges they face identifying and addressing national security concerns through CFIUS, and their involvement and any challenges they face in developing or monitoring mitigation agreements. Findings based on information collected from the nine components cannot be generalized to all DOD components.

In addition to the components included in our sample, we also interviewed and received documentation from other DOD organizations about the CFIUS process. These organizations included officials from: the Defense Innovation Unit Experimental; the Defense Security Service; the Defense Technology Security Administration; the Assistant Secretary of Defense for Research and Engineering; and the Office of the Under Secretary of Defense for Intelligence. We do not include information gathered from these other organizations in statements based on our non-generalizable sample.

The performance audit upon which this report is based was conducted from January 2017 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with DOD and Treasury from April 2018 to July 2018 to prepare this unclassified version of the original sensitive report for public release. This public version was also prepared in accordance with these standards.

Appendix III: Factors the Committee on Foreign Investment in the United States Considers to Determine Whether Submitted Transactions Pose a National Security Risk

Appendix IV: Comments from the Department of Defense

Appendix V: Comments from the Department of the Treasury

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, W. William Russell (Assistant Director), Katherine Trimble (Assistant Director), Meghan Perez (Analyst-in-Charge), and Heather B. Miller were principal contributors to this report. In addition, the following people made contributions to this report: Justin Fisher, Stephanie Gustafson, Kate Lenane, Alyssa Weir, and Robin Wilson.
Why GAO Did This Study

Foreign acquisitions of U.S. companies can pose challenges for the U.S. government as it balances the economic benefits of foreign direct investment with the need to protect national security. CFIUS is an interagency group, led by Treasury, that reviews certain transactions—foreign acquisitions or mergers of U.S. businesses—to determine their effect on U.S. national security and whether the transaction may proceed.

GAO was asked to review DOD's ability, as a member of CFIUS, to address defense issues. This report assesses factors, if any, that affect DOD's ability to identify and address national security concerns through the CFIUS process, among other objectives. GAO analyzed data on DOD co-led transactions from January 2012 through December 2017, the most recent data available. GAO also interviewed DOD and Treasury officials and reviewed documentation to identify DOD's CFIUS processes, resources, and responsibilities and selected a non-generalizable sample of nine DOD component reviewers, based on their participation in the CFIUS process.

What GAO Found

The Department of Defense (DOD) faces challenges identifying and addressing evolving national security concerns posed by some foreign investments in the United States.

Resources: DOD's Office of Manufacturing and Industrial Base Policy represents the department and coordinates DOD's participation on the Committee on Foreign Investment in the United States (CFIUS). As a committee member, DOD co-leads CFIUS's review and investigation of transactions between foreign acquirers and U.S. businesses where it has expertise. DOD co-led 99 transactions in calendar year 2017, or 57 percent more transactions than it co-led in 2012, while the annual authorized positions increased from 12 to 17 during that same time period. DOD's workload has also been affected by the volume and complexity of the transactions it is responsible for co-leading, in addition to other CFIUS responsibilities, such as identifying transactions that foreign acquirers do not voluntarily file with CFIUS. DOD has taken some steps to address its resource limitations, but has not fully assessed the department-wide resources needed to address its growing workload.

Emerging Technology and Proximity: DOD officials identified some investments that pose national security concerns from foreign acquirers gaining access to emerging technologies or being in close proximity to critical military locations, which, according to officials, cannot always be addressed through CFIUS because the investments would not result in foreign control of a U.S. business. DOD and Department of the Treasury (Treasury) officials said addressing these investments may require legislative action. DOD is taking steps to identify critical emerging technologies and military locations that should be protected from foreign investment. However, DOD has not fully assessed risks from these types of foreign investment or what additional authorities, if any, may be necessary for it to address them.

Policy: DOD's CFIUS Instruction does not clearly identify some reviewer responsibilities or processes for identifying transactions that foreign acquirers do not voluntarily file with CFIUS. The policy is also outdated and inconsistent with current practices. DOD's CFIUS Instruction and federal internal control standards emphasize the importance of assessing organizational structures, policies, and procedures to respond to risks.
Without assessing resources needed to address its CFIUS workload and risks from foreign investment in emerging technologies or in proximity to critical military locations, and ensuring its policies and processes clearly reflect the issues facing the department, DOD is at risk of being unable to respond to evolving national security concerns.

This is a public version of a sensitive report that GAO issued in April 2018. Information that DOD and Treasury deemed sensitive has been omitted.

What GAO Recommends

GAO is making eight recommendations, including that DOD assess resources needed to address workload, assess risks from foreign investment in emerging technologies and in close proximity to critical military locations, and update its policies and processes to better reflect the evolving national security concerns facing the department. DOD and Treasury agreed with GAO's recommendations, and have identified some actions to address them.